WO2022236876A1 - Cellophane defect recognition method, system and apparatus, and storage medium - Google Patents

Cellophane defect recognition method, system and apparatus, and storage medium

Info

Publication number
WO2022236876A1
WO2022236876A1 (PCT/CN2021/095962; CN2021095962W)
Authority
WO
WIPO (PCT)
Prior art keywords
cellophane
defect
semantic segmentation
network
set data
Prior art date
Application number
PCT/CN2021/095962
Other languages
French (fr)
Chinese (zh)
Inventor
刘宇迅
田丰
罗立浩
陈小旋
黄建
Original Assignee
广州广电运通金融电子股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州广电运通金融电子股份有限公司
Publication of WO2022236876A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Definitions

  • the invention relates to the technical field of cellophane detection, in particular to a cellophane defect identification method, system, device and storage medium.
  • as an important part of daily life and industrial production, cellophane plays an important role in promoting economic development.
  • during cellophane production, defects of various forms appear on the cellophane surface. At present, the main method of detecting cellophane defects is manual visual inspection. Subtle defects are easily missed during manual inspection, and because cellophane is produced at high speed, visual inspection can never keep up with the production line, so detection is very slow and seriously delays cellophane production.
  • one of the objects of the present invention is to provide a cellophane defect identification method, which improves the efficiency and accuracy of cellophane defect detection.
  • the second object of the present invention is to provide a cellophane defect identification system that implements the above method.
  • the third object of the present invention is to provide a cellophane defect identification device that implements the above method.
  • the fourth object of the present invention is to provide a storage medium for performing the above method.
  • a cellophane defect identification method comprising:
  • Step S1: collect cellophane surface images and establish test set data;
  • Step S2: import the test set data into a semantic segmentation network model based on an optimized UNET network for semantic segmentation;
  • Step S3: obtain the output signal of the semantic segmentation network model, and post-process the output signal to obtain a defect recognition result.
  • the construction method of the semantic segmentation network includes: establishing training set data from the collected cellophane surface images; importing the training set data into the optimized UNET network; calculating a loss value from the network's output data and the label data through a loss function; and updating the network parameters through backpropagation until the model converges, at which point the network parameters are saved to obtain a trained semantic segmentation network model.
  • the label data is acquired as follows:
  • import the cellophane surface image, box-select the corresponding defect areas as required, and assign a label to each selected area to obtain the label data.
  • the UNET network uses a grouped convolution module to split the input feature map into groups and then convolves each group separately with a convolution module; one of the convolution modules uses a combined 1*1, 1*3 and 3*1 convolution.
  • the loss function is composed of a cross-entropy loss function and a dice loss measurement function.
  • the post-processing method is: binarize the output signal of the semantic segmentation network model according to a preset fixed threshold, then perform an image contour search and, combined with contour size features, judge whether the image contains a defect.
  • in step S1, an industrial camera photographs the cellophane surface to obtain the cellophane surface image.
  • a cellophane defect identification system based on the UNET network including:
  • the collection module is used to collect cellophane surface images and establish test set data
  • the model analysis module is used to import the test set data into the constructed semantic segmentation network model for semantic segmentation
  • the post-processing module is used to obtain the output signal of the model analysis module, and perform post-processing on the output signal to obtain the defect identification result.
  • a cellophane defect identification device characterized in that it comprises:
  • a memory for storing the program
  • the processor is configured to load the program to execute the method for identifying cellophane defects as described above.
  • a storage medium stores a program, and when the program is executed by a processor, the above cellophane defect identification method is implemented.
  • the present invention uses traditional image processing methods combined with a semantic segmentation network as the basis for automatic classification, which effectively improves the robustness of cellophane defect recognition compared with relying solely on traditional image processing and analysis; at the same time, the semantic segmentation model uses an optimized UNET network, which can quickly detect cellophane defects, reduce the time required for actual detection and analysis, improve the efficiency and accuracy of cellophane defect detection, and reduce detection costs.
  • Fig. 1 is a schematic flowchart of the cellophane defect identification method of the present invention;
  • Fig. 2 is an overall flowchart of the testing and training steps of the cellophane defect identification method of the present invention;
  • Fig. 3 is a structural diagram of the UNET model of the present invention;
  • Fig. 4 is a schematic diagram of grouped convolution in the present invention;
  • Fig. 5 is a schematic diagram of the combined 1x1, 3x1 and 1x3 convolution of the present invention;
  • Fig. 6 is a diagram of defect prediction results for a solid-color paper roll according to the present invention;
  • Fig. 7 is a diagram of defect prediction results for a colored paper roll according to the present invention;
  • Fig. 8 is a schematic block diagram of the modules of the cellophane defect identification system of the present invention.
  • This embodiment provides a cellophane defect identification method, which can replace the existing manual detection and improve the efficiency and accuracy of cellophane defect detection.
  • the cellophane defect identification method of this embodiment specifically includes the following steps:
  • Step S1: collect cellophane surface images and establish test set data;
  • Step S2: import the test set data into a semantic segmentation network model based on an optimized UNET network for semantic segmentation;
  • Step S3: obtain the output signal of the semantic segmentation network model, and post-process the output signal to obtain a defect recognition result.
  • an industrial camera photographs the surface of the cellophane after manufacture to obtain cellophane surface images, and a data set of the original images is established. The data set is divided into training set data and test set data: the training set consists of a number of images captured from the cellophane surface and is used as the training basis of the semantic segmentation network model, so as to construct a complete semantic segmentation network model; the test set likewise consists of images captured from the cellophane surface and serves as the test target, and is imported into the established semantic segmentation network model to detect whether its images contain defects and thereby obtain the cellophane defect recognition result.
  • the lightweight UNET network is optimized on the basis of the existing UNET network;
  • the network structure of the existing UNET network includes two parts, an encoder and a decoder, where the encoder performs the downsampling process and the decoder performs the upsampling process;
  • the downsampling part consists of multiple Down Blocks, which act as feature extractors.
  • each Down Block contains two convolutions and a pooling layer; the pooling layer uses max pooling, so each time a Down Block is passed, the spatial size of the feature map is halved.
  • the upsampling part of the existing UNET network consists of Up Blocks; each Up Block contains two convolutions and one upsampling layer, which restore the size of the feature map layer by layer.
  • Fig. 3 shows the structure of the lightweight UNET network of this embodiment. As shown in Fig. 3, the lightweight UNET network of this embodiment improves on the structure of the existing UNET network.
  • each Down Block in the existing UNET network uses two 3*3 convolutions to complete the downsampling process, whereas this embodiment improves the Down Block structure to one 3*3 convolution plus one combined convolution and adopts grouped convolution; the combined convolution is composed of 1*1, 1*3 and 3*1 convolutions. This embodiment also compresses the number of downsampling and upsampling stages, further reducing the number of network parameters.
  • the cellophane image is processed by convolution kernels to obtain feature maps, the input feature maps are divided into groups, and each group is convolved separately. Assume the size of the input feature map is C*H*W and the number of output feature maps is N; if the input is divided into G groups:
  • the number of input feature maps per group is C/G;
  • the number of output feature maps per group is N/G;
  • the size of each convolution kernel is C/G*K*K;
  • the total number of convolution kernels is still N;
  • the number of convolution kernels per group is N/G;
  • each convolution kernel is convolved only with the input maps of its own group;
  • the total number of convolution-kernel parameters is N*C/G*K*K, so the total parameter count is reduced to 1/G of the original.
  • the existing 3*3 convolution is decomposed into a combined convolution of 1*1, 1*3 and 3*1 convolutions; after decomposition, the convolution parameters are 45% of those before decomposition, which greatly reduces the number of network parameters and makes the existing UNET network lightweight, thereby improving the efficiency of cellophane defect identification.
  • three additional activation functions are used after the decomposition, which increases the nonlinear capability of the network.
  • the training set data is imported into the optimized lightweight UNET network of the above structure, and the semantic segmentation network model can be constructed using the training set data.
  • the construction method of the semantic segmentation network of this embodiment includes: importing the training set data into the optimized UNET network; after the images in the training set are processed by the lightweight UNET network, semantically segmented images are obtained, from which all features of the cellophane surface can be seen, including the pattern on the cellophane and the defects on its surface; then the output data of the UNET network and the label data are fed into the loss function to calculate a loss value, and the network parameters are updated through backpropagation; if the model converges, that is, the loss value stabilizes at its lowest value, the model is tested and the latest network parameters are saved for subsequent testing; otherwise, the training set continues to be fed into the network, and the network parameters are saved only once the network converges, finally yielding a trained semantic segmentation network model.
  • the label data is obtained by segmenting and labeling the defect positions in the training set data; that is, when the training set data is obtained, the defect areas of the pre-imported cellophane defect images are marked and the selected areas are assigned labels, this process is repeated until labeling is complete, and the corresponding labels are saved to obtain the label data.
  • the labelme tool can be used to perform image labeling, that is, import the cellophane image into the labelme tool, select a defective region in the cellophane image, and assign values to image pixels in the defective region to obtain label data.
  • the label data and the signal output by the lightweight UNET network are fed into the loss function to calculate the loss value.
  • to better evaluate how well the model's predictions match the true labels, the loss function of this embodiment combines a cross-entropy loss function with a Dice loss metric function.
  • the cross-entropy is derived from the Kullback-Leibler (KL) divergence, which is a measure of the difference between two distributions. For general machine-learning tasks, the data distribution is given by the training set, so minimizing the KL divergence is equivalent to minimizing the cross-entropy.
  • cross-entropy is defined as $L_{CE} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c} g_{i,c}\,\log p_{i,c}$, where N is the number of samples, $g_{i,c}$ is a binary indicator that equals 1 if label c is the correct classification for pixel i, and $p_{i,c}$ is the corresponding predicted probability.
  • the Dice loss metric function aims to minimize the area where the ground truth G and the predicted segmentation region S do not match, or equivalently to maximize the overlap between G and S.
  • after the trained semantic segmentation network model is obtained through the above method, the trained model is used to perform semantic segmentation on the cellophane surface images in the test set, and the network output signal is then post-processed by traditional methods to judge whether a defect is present.
  • the network output signal is post-processed as follows: a fixed threshold is set to binarize the output signal, and a contour search is performed; combined with contour size features, the defect positions on the cellophane surface are identified, thereby judging whether the cellophane surface image contains a defect.
  • when the method of this embodiment is used to identify defects on cellophane surfaces of different patterns, the positions of surface defects can be accurately identified on both solid-color and colored paper rolls; compared with traditional manual inspection, the method improves the efficiency and accuracy of cellophane defect detection and reduces detection costs.
  • This embodiment provides a cellophane defect recognition system based on the UNET network, which implements the cellophane defect recognition method described in Embodiment 1; as shown in Figure 8, the recognition system of this embodiment specifically includes the following modules:
  • the collection module is used to collect cellophane surface images and establish test set data
  • the model analysis module is used to import the test set data into the constructed semantic segmentation network model for semantic segmentation
  • the post-processing module is used to obtain the output signal of the model analysis module, and perform post-processing on the output signal to obtain the defect identification result.
  • This embodiment optimizes the existing UNET network, which reduces the number of network parameters and thus achieves a lightweight UNET network, increasing image-processing speed so that cellophane defects can be detected quickly and the time required for actual detection and analysis is significantly reduced;
  • the semantic segmentation network replaces the original manual inspection method, which eliminates the subjectivity of manual inspection and analysis, improves the efficiency and accuracy of cellophane defect detection, and reduces detection costs;
  • in addition, this embodiment uses a combination of traditional image processing methods and the semantic segmentation network as the basis for automatic classification; compared with relying solely on traditional image processing and analysis, the precision of the detection method of this embodiment can be increased to 98%, while the robustness of cellophane defect recognition is also effectively improved.
  • This embodiment discloses a cellophane defect identification device, including:
  • a memory for storing the program
  • the processor is configured to load the program to execute the cellophane defect identification method described in Embodiment 1.
  • This embodiment discloses a storage medium, which stores a program, and when the program is executed by a processor, the method for identifying cellophane defects is realized.
  • the device and storage medium in this embodiment and the method in the preceding embodiment are two aspects of the same inventive concept.
  • the implementation of the method has been described in detail above, so those skilled in the art can clearly understand the structure and implementation of the device in this embodiment from the foregoing description; for the sake of brevity, details are not repeated here.
  • the above embodiment is only a preferred embodiment of the present invention and cannot be used to limit the scope of protection of the present invention; any insubstantial changes and substitutions made by those skilled in the art on the basis of the present invention fall within the scope of protection claimed by the present invention.

Abstract

Disclosed in the present invention are a cellophane defect recognition method, system and apparatus, and a storage medium. The cellophane defect recognition method specifically comprises: step S1, collecting cellophane surface images to establish test set data; step S2, importing the test set data into a semantic segmentation network model based on an optimized UNET network for semantic segmentation; and step S3: acquiring an output signal of the semantic segmentation network model, and post-processing the output signal to obtain a defect recognition result. According to the present invention, a conventional image processing method and a semantic segmentation network are combined as the basis for automatic classification, thereby effectively improving the robustness of cellophane defect recognition compared with conventional image processing and analysis; moreover, the semantic segmentation model uses the optimized UNET network, such that a cellophane defect can be quickly detected, the time required for actual detection and analysis is reduced, the efficiency and accuracy of cellophane defect detection are improved, and detection costs are reduced.

Description

A cellophane defect identification method, system, device and storage medium
Technical Field
The invention relates to the technical field of cellophane detection, and in particular to a cellophane defect identification method, system, device and storage medium.
Background Art
As an important part of daily life and industrial production, cellophane plays an important role in promoting economic development. During cellophane production, defects of various forms appear on the cellophane surface. At present, the main method of detecting cellophane defects is manual visual inspection. Subtle defects are easily missed during manual inspection, and because cellophane is produced at high speed, visual inspection can never keep up with the production line, so detection is very slow and seriously delays cellophane production.
Therefore, in order to improve the efficiency of cellophane inspection, some manufacturers consider using machine vision inspection instead of manual labor to complete the inspection steps in manufacturing. However, because cellophane is somewhat translucent and comes in different patterns and colors, traditional image defect recognition algorithms cannot effectively detect defects on rolls of different patterns; as a result, detection based on traditional image defect recognition methods is not very accurate, and the detection speed cannot be improved.
Summary of the Invention
In order to overcome the deficiencies of the prior art, one of the objects of the present invention is to provide a cellophane defect identification method, which improves the efficiency and accuracy of cellophane defect detection.
The second object of the present invention is to provide a cellophane defect identification system that performs the above method.
The third object of the present invention is to provide a cellophane defect identification device that performs the above method.
The fourth object of the present invention is to provide a storage medium for performing the above method.
The first object of the present invention is achieved by the following technical solution:
A cellophane defect identification method, comprising:
Step S1: collecting cellophane surface images and establishing test set data;
Step S2: importing the test set data into a semantic segmentation network model based on an optimized UNET network for semantic segmentation;
Step S3: obtaining the output signal of the semantic segmentation network model, and post-processing the output signal to obtain a defect recognition result.
Further, the construction method of the semantic segmentation network includes:
establishing training set data from the collected cellophane surface images;
importing the training set data into the optimized UNET network;
calculating a loss value from the output data of the UNET network and label data through a loss function, wherein the label data is obtained by segmenting and labeling defect positions in the training set data;
updating the network parameters through backpropagation until the model converges, and then saving the network parameters of the model to obtain a trained semantic segmentation network model.
Further, the label data is acquired as follows:
importing a cellophane surface image, box-selecting the corresponding defect area as required, and assigning a label to the selected area to obtain the label data.
Further, the UNET network uses a grouped convolution module to split the input feature map into groups and then convolves each group separately with a convolution module; one of the convolution modules uses a combined 1*1, 1*3 and 3*1 convolution.
Further, the loss function is a combination of a cross-entropy loss function and a Dice loss metric function.
Further, the post-processing method is:
binarizing the output signal of the semantic segmentation network model according to a preset fixed threshold;
then performing an image contour search and judging, in combination with contour size features, whether the image contains a defect.
Further, in step S1 the cellophane surface is photographed with an industrial camera to obtain the cellophane surface image.
The second object of the present invention is achieved by the following technical solution:
A cellophane defect identification system based on the UNET network, comprising:
a collection module for collecting cellophane surface images and establishing test set data;
a model analysis module for importing the test set data into the constructed semantic segmentation network model for semantic segmentation;
a post-processing module for obtaining the output signal of the model analysis module and post-processing the output signal to obtain a defect recognition result.
The third object of the present invention is achieved by the following technical solution:
A cellophane defect identification device, characterized in that it comprises:
a program;
a memory for storing the program;
a processor for loading the program to execute the cellophane defect identification method described above.
The fourth object of the present invention is achieved by the following technical solution:
A storage medium storing a program, wherein when the program is executed by a processor, the above cellophane defect identification method is implemented.
Compared with the prior art, the beneficial effects of the present invention are:
The present invention uses a combination of traditional image processing methods and a semantic segmentation network as the basis for automatic classification, which effectively improves the robustness of cellophane defect recognition compared with relying solely on traditional image processing and analysis; at the same time, the semantic segmentation model uses an optimized UNET network, which can quickly detect cellophane defects, reduce the time required for actual detection and analysis, improve the efficiency and accuracy of cellophane defect detection, and reduce detection costs.
Brief Description of the Drawings
Fig. 1 is a schematic flowchart of the cellophane defect identification method of the present invention;
Fig. 2 is an overall flowchart of the testing and training steps of the cellophane defect identification method of the present invention;
Fig. 3 is a structural diagram of the UNET model of the present invention;
Fig. 4 is a schematic diagram of grouped convolution in the present invention;
Fig. 5 is a schematic diagram of the combined 1x1, 3x1 and 1x3 convolution of the present invention;
Fig. 6 is a diagram of defect prediction results for a solid-color paper roll according to the present invention;
Fig. 7 is a diagram of defect prediction results for a colored paper roll according to the present invention;
Fig. 8 is a schematic block diagram of the modules of the cellophane defect identification system of the present invention.
Detailed Description
The present invention is further described below in conjunction with the accompanying drawings and specific embodiments. It should be noted that, provided there is no conflict, the embodiments or technical features described below can be combined arbitrarily to form new embodiments.
Embodiment 1
This embodiment provides a cellophane defect identification method, which can replace existing manual inspection and improve the efficiency and accuracy of cellophane defect detection.
As shown in Fig. 1 and Fig. 2, the cellophane defect identification method of this embodiment specifically includes the following steps:
Step S1: collecting cellophane surface images and establishing test set data;
Step S2: importing the test set data into a semantic segmentation network model based on an optimized UNET network for semantic segmentation;
Step S3: obtaining the output signal of the semantic segmentation network model, and post-processing the output signal to obtain a defect recognition result.
In this embodiment, an industrial camera photographs the surface of the cellophane after manufacture to obtain cellophane surface images, and a data set of the original images is established. The data set is divided into training set data and test set data. The training set consists of a number of images captured from the cellophane surface, and the training set data is used as the training basis of the semantic segmentation network model, so as to construct a complete semantic segmentation network model. The test set likewise consists of images captured from the cellophane surface and serves as the test target: the test set is imported into the established semantic segmentation network model to detect whether the images in the test set contain defects, thereby obtaining the cellophane defect recognition result.
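The patent does not specify how the captured images are organized into training and test sets; the following is a minimal sketch of one possible arrangement in Python, assuming the images are stored as PNG files in a single folder (the folder name and the 80/20 split are illustrative assumptions, not values from the patent):

```python
import random
from pathlib import Path

# Assumption: captured cellophane surface images live in one folder as PNG files.
images = sorted(Path("cellophane_images").glob("*.png"))   # hypothetical folder name
random.seed(0)
random.shuffle(images)

split = int(0.8 * len(images))   # 80/20 split is an illustrative choice
train_set = images[:split]       # used to train the semantic segmentation network model
test_set = images[split:]        # imported into the trained model for defect detection
print(len(train_set), len(test_set))
```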
In this embodiment, after the training set data is obtained, it is input into the lightweight UNET network for training, and a trained semantic segmentation network model is finally obtained. The lightweight UNET network in this embodiment is obtained by optimizing the existing UNET network. The network structure of the existing UNET network includes two parts, an encoder and a decoder, where the encoder performs the downsampling process and the decoder performs the upsampling process. The downsampling part consists of multiple Down Blocks, which act as feature extractors. Each Down Block contains two convolutions and a pooling layer; the pooling layer uses max pooling, so each time a Down Block is passed, the spatial size of the feature map is halved. The upsampling part of the existing UNET network consists of Up Blocks; each Up Block contains two convolutions and one upsampling layer, which restore the size of the feature map layer by layer.
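For reference, a minimal PyTorch sketch of the conventional Down Block / Up Block structure described above (this is not code from the patent; channel widths and the bilinear upsampling mode are assumptions):

```python
import torch
import torch.nn as nn

class DownBlock(nn.Module):
    """Conventional UNET encoder block: two 3x3 convolutions followed by 2x2 max pooling."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(2)          # halves the spatial size of the feature map

    def forward(self, x):
        skip = self.conv(x)                  # features kept for the skip connection
        return self.pool(skip), skip

class UpBlock(nn.Module):
    """Conventional UNET decoder block: upsampling followed by two 3x3 convolutions."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)
        x = torch.cat([x, skip], dim=1)      # concatenate encoder features before convolving
        return self.conv(x)
```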
Fig. 3 is a schematic structural diagram of the lightweight UNET network of this embodiment. As shown in Fig. 3, the lightweight UNET network of this embodiment improves on the structure of the existing UNET network by optimizing the original conventional convolution structure. Each Down Block in the existing UNET network uses two 3*3 convolutions to complete the downsampling process, whereas this embodiment improves the Down Block structure to one 3*3 convolution plus one combined convolution and adopts grouped convolution; the combined convolution is composed of 1*1, 1*3 and 3*1 convolutions. At the same time, this embodiment compresses the number of downsampling and upsampling stages, further reducing the number of network parameters.
Specifically, as shown in Fig. 4, in the combined convolution module of the lightweight UNET network of this embodiment, the cellophane image is processed by convolution kernels to obtain feature maps; the input feature maps are divided into groups, and each group is convolved separately. Assume the size of the input feature map is C*H*W and the number of output feature maps is N. If the input is divided into G groups, the number of input feature maps per group is C/G, the number of output feature maps per group is N/G, the size of each convolution kernel is C/G*K*K, the total number of convolution kernels is still N, and the number of convolution kernels per group is N/G. Each convolution kernel is convolved only with the input maps of its own group, so the total number of convolution-kernel parameters is N*C/G*K*K; the total parameter count is thus reduced to 1/G of the original.
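The parameter arithmetic above is easy to verify; a small sketch using PyTorch's grouped convolution (the values C=64, N=128, K=3 and G=4 are arbitrary illustrations, not values from the patent):

```python
import torch.nn as nn

C, N, K, G = 64, 128, 3, 4   # input channels, output channels, kernel size, number of groups

standard = nn.Conv2d(C, N, kernel_size=K, padding=1, bias=False)
grouped = nn.Conv2d(C, N, kernel_size=K, padding=1, groups=G, bias=False)

n_std = sum(p.numel() for p in standard.parameters())   # N * C * K * K parameters
n_grp = sum(p.numel() for p in grouped.parameters())    # N * (C/G) * K * K parameters

print(n_std, n_grp, n_std / n_grp)   # 73728 18432 4.0 -> reduced to 1/G of the original
```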
Specifically, as shown in Fig. 5, in this embodiment the existing 3*3 convolution is decomposed into a combined convolution of 1*1, 1*3 and 3*1 convolutions. After decomposition, the convolution parameters are 45% of those before decomposition, which greatly reduces the number of network parameters and makes the existing UNET network lightweight, thereby improving the efficiency of cellophane defect identification. In addition, in this embodiment three additional activation functions are applied after the decomposition, which increases the nonlinear capability of the network.
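One possible reading of the modified Down Block, sketched in PyTorch; the exact channel widths, group count and layer ordering are not disclosed in the patent, so the values below are assumptions (the channel count must be divisible by the group count):

```python
import torch.nn as nn

class CombinedGroupConv(nn.Module):
    """Grouped 1x1 -> 1x3 -> 3x1 factorized convolution, each step followed by an
    activation (the three additional activations mentioned in the text)."""
    def __init__(self, channels, groups=4):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1, groups=groups),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=(1, 3), padding=(0, 1), groups=groups),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=(3, 1), padding=(1, 0), groups=groups),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class LightDownBlock(nn.Module):
    """Modified encoder block: one 3x3 convolution plus one grouped combined convolution."""
    def __init__(self, in_ch, out_ch, groups=4):
        super().__init__()
        self.conv3x3 = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True))
        self.combined = CombinedGroupConv(out_ch, groups)
        self.pool = nn.MaxPool2d(2)   # halves the spatial size, as in the original Down Block

    def forward(self, x):
        skip = self.combined(self.conv3x3(x))
        return self.pool(skip), skip

# Usage sketch: pooled, skip = LightDownBlock(32, 64)(torch.randn(1, 32, 128, 128))
```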
In this embodiment, the training set data is imported into the optimized lightweight UNET network with the above structure, and the semantic segmentation network model is constructed using the training set data. The construction method of the semantic segmentation network of this embodiment includes: importing the training set data into the optimized UNET network; after the images in the training set are processed by the lightweight UNET network, semantically segmented images are obtained, from which all features of the cellophane surface can be seen, including the pattern on the cellophane and the defects on its surface. Then the output data of the UNET network and the label data are fed into the loss function to calculate a loss value, and the network parameters are updated through backpropagation. If the model converges, that is, the loss value stabilizes at its lowest value, the model is tested and the latest network parameters are saved for subsequent testing; otherwise, the training set continues to be fed into the network, and the network parameters are saved only after the network converges, finally yielding a trained semantic segmentation network model.
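A minimal training-loop sketch of the procedure just described. The tiny stand-in network, the synthetic batches and the "save when the loss improves" rule are placeholders so the snippet runs on its own; in practice the model would be the lightweight UNET and the data would be real image/mask pairs built from the label data:

```python
import torch
import torch.nn as nn

# Stand-ins so the sketch is self-contained; this is not the patented network.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 2, 1))
criterion = nn.CrossEntropyLoss()   # the embodiment combines this with a Dice term (see below)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def fake_batch(n=4, h=128, w=128):
    # Synthetic (image, mask) pairs standing in for the real training set.
    return torch.randn(n, 3, h, w), torch.randint(0, 2, (n, h, w))

best_loss = float("inf")
for epoch in range(5):
    images, masks = fake_batch()
    optimizer.zero_grad()
    loss = criterion(model(images), masks)   # compare the network output with the label data
    loss.backward()                          # backpropagation
    optimizer.step()                         # update the network parameters
    if loss.item() < best_loss:              # loss still decreasing -> keep the latest weights
        best_loss = loss.item()
        torch.save(model.state_dict(), "unet_cellophane.pth")
print("final loss:", best_loss)
```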
The label data is obtained by segmenting and labeling the defect positions in the training set data. That is, when the training set data is obtained, the defect areas of the pre-imported cellophane defect images are annotated and the selected areas are assigned labels; this process is repeated until labeling is complete, and the corresponding labels are saved to obtain the label data. In this embodiment, the labelme tool can be used for image labeling: the cellophane image is imported into the labelme tool, the defective region in the cellophane image is selected, and the image pixels within the defective region are assigned values to obtain the label data.
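A sketch of turning a labelme annotation into a pixel mask; it relies on the standard labelme JSON layout (polygon "shapes" with "points", plus "imageHeight"/"imageWidth") and OpenCV's polygon filling, and the file name is hypothetical:

```python
import json
import numpy as np
import cv2

def labelme_json_to_mask(json_path, label_value=1):
    """Rasterize the polygons saved by labelme into a single-channel label mask."""
    with open(json_path, "r", encoding="utf-8") as f:
        ann = json.load(f)
    mask = np.zeros((ann["imageHeight"], ann["imageWidth"]), dtype=np.uint8)
    for shape in ann["shapes"]:
        pts = np.array(shape["points"], dtype=np.int32)
        cv2.fillPoly(mask, [pts], color=label_value)   # assign the label to the selected defect region
    return mask

# mask = labelme_json_to_mask("cellophane_001.json")   # hypothetical annotation file
```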
In this embodiment, the label data and the signal output by the lightweight UNET network are fed into the loss function to calculate the loss value. In order to better evaluate how well the model's predictions match the true labels, the loss function of this embodiment combines a cross-entropy loss function with a Dice loss metric function.
The cross-entropy is derived from the Kullback-Leibler (KL) divergence, which is a measure of the difference between two distributions. For general machine-learning tasks, the data distribution is given by the training set, so minimizing the KL divergence is equivalent to minimizing the cross-entropy. The cross-entropy is defined as

$$L_{CE} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c} g_{i,c}\,\log p_{i,c}$$

where N is the number of samples, $g_{i,c}$ is a binary indicator that equals 1 if label c is the correct classification for pixel i and 0 otherwise, and $p_{i,c}$ is the corresponding predicted probability.
The Dice loss metric function aims to minimize the area where the ground truth G and the predicted segmentation region S do not match, or equivalently to maximize the overlap between G and S:

$$L_{Dice} = 1 - \frac{2\sum_{i}\sum_{c} g_{i,c}\,p_{i,c}}{\sum_{i}\sum_{c} g_{i,c} + \sum_{i}\sum_{c} p_{i,c}}$$

where, as above, $g_{i,c}$ is the binary indicator that label c is the correct classification for pixel i, and $p_{i,c}$ is the corresponding predicted probability.
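A sketch of how such a combined loss could be implemented in PyTorch; the equal 1:1 weighting of the two terms and the smoothing constant are assumptions, since the patent does not specify them:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CombinedLoss(nn.Module):
    """Cross-entropy plus Dice loss for semantic segmentation."""
    def __init__(self, smooth=1.0):
        super().__init__()
        self.ce = nn.CrossEntropyLoss()
        self.smooth = smooth   # avoids division by zero; the value is an assumption

    def forward(self, logits, target):
        # logits: (B, C, H, W) raw scores; target: (B, H, W) class indices
        ce = self.ce(logits, target)
        probs = F.softmax(logits, dim=1)                                         # p_{i,c}
        onehot = F.one_hot(target, logits.shape[1]).permute(0, 3, 1, 2).float()  # g_{i,c}
        inter = (probs * onehot).sum(dim=(0, 2, 3))
        denom = probs.sum(dim=(0, 2, 3)) + onehot.sum(dim=(0, 2, 3))
        dice = 1.0 - ((2.0 * inter + self.smooth) / (denom + self.smooth)).mean()
        return ce + dice   # equal weighting assumed

# loss = CombinedLoss()(torch.randn(2, 2, 64, 64), torch.randint(0, 2, (2, 64, 64)))
```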
After the trained semantic segmentation network model is obtained through the above method, the trained model is used to perform semantic segmentation on the cellophane surface images in the test set, and the network output signal is then post-processed by traditional methods to judge whether a defect is present. Specifically, the network output signal is post-processed as follows: a fixed threshold is set to binarize the output signal, and a contour search is performed; combined with contour size features, the defect positions on the cellophane surface are identified, thereby judging whether the cellophane surface image contains a defect.
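A sketch of the described post-processing using OpenCV; the 0.5 threshold and the minimum contour area are illustrative values, not numbers taken from the patent:

```python
import numpy as np
import cv2

def postprocess(prob_map, threshold=0.5, min_area=20):
    """Binarize the network's defect-probability map, search for contours and
    filter them by size to decide whether the image contains a defect."""
    binary = (prob_map >= threshold).astype(np.uint8) * 255   # fixed-threshold binarization
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
    return len(boxes) > 0, boxes   # defect flag and defect positions as (x, y, w, h)

# Example with a synthetic probability map containing one defect-like blob:
prob = np.zeros((256, 256), dtype=np.float32)
prob[100:120, 50:90] = 0.9
print(postprocess(prob))   # (True, [(50, 100, 40, 20)])
```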
As shown in Fig. 6 and Fig. 7, after the method of this embodiment is used to identify defects on cellophane surfaces of different patterns, the positions of surface defects can be accurately identified on both solid-color and colored paper rolls. Compared with traditional manual inspection, this method improves the efficiency and accuracy of cellophane defect detection and reduces detection costs.
Embodiment 2
This embodiment provides a cellophane defect identification system based on the UNET network, which performs the cellophane defect identification method described in Embodiment 1. As shown in Fig. 8, the identification system of this embodiment specifically includes the following modules:
a collection module for collecting cellophane surface images and establishing test set data;
a model analysis module for importing the test set data into the constructed semantic segmentation network model for semantic segmentation;
a post-processing module for obtaining the output signal of the model analysis module and post-processing the output signal to obtain a defect recognition result.
This embodiment optimizes the existing UNET network, which reduces the number of network parameters and thus achieves a lightweight UNET network, increasing image-processing speed so that cellophane defects can be detected quickly and the time required for actual detection and analysis is significantly reduced. Secondly, the semantic segmentation network replaces the original manual inspection method, eliminating the subjectivity of manual inspection and analysis, improving the efficiency and accuracy of cellophane defect detection, and reducing detection costs. In addition, this embodiment uses a combination of traditional image processing methods and the semantic segmentation network as the basis for automatic classification; compared with relying solely on traditional image processing and analysis, the precision of the detection method of this embodiment can be increased to 98%, while the robustness of cellophane defect recognition is also effectively improved.
Embodiment 3
This embodiment discloses a cellophane defect identification device, comprising:
a program;
a memory for storing the program;
a processor for loading the program to execute the cellophane defect identification method described in Embodiment 1.
This embodiment discloses a storage medium storing a program; when the program is executed by a processor, the cellophane defect identification method described above is implemented.
The device and storage medium in this embodiment and the method in the preceding embodiment are two aspects of the same inventive concept. The implementation of the method has been described in detail above, so those skilled in the art can clearly understand the structure and implementation of the device in this embodiment from the foregoing description; for the sake of brevity, details are not repeated here.
The above embodiments are only preferred embodiments of the present invention and cannot be used to limit the scope of protection of the present invention; any insubstantial changes and substitutions made by those skilled in the art on the basis of the present invention fall within the scope of protection claimed by the present invention.

Claims (10)

  1. A cellophane defect identification method, characterized in that it comprises:
    Step S1: collecting cellophane surface images and establishing test set data;
    Step S2: importing the test set data into a semantic segmentation network model based on an optimized UNET network for semantic segmentation;
    Step S3: obtaining the output signal of the semantic segmentation network model, and post-processing the output signal to obtain a defect recognition result.
  2. The cellophane defect identification method according to claim 1, characterized in that the construction method of the semantic segmentation network comprises:
    establishing training set data from the collected cellophane surface images;
    importing the training set data into the optimized UNET network;
    calculating a loss value from the output data of the UNET network and label data through a loss function, wherein the label data is obtained by segmenting and labeling defect positions in the training set data;
    updating the network parameters through backpropagation until the model converges, and then saving the network parameters of the model to obtain a trained semantic segmentation network model.
  3. The cellophane defect identification method according to claim 2, characterized in that the label data is acquired as follows:
    importing a cellophane surface image, box-selecting the corresponding defect area as required, and assigning a label to the selected area to obtain the label data.
  4. The cellophane defect identification method according to claim 2, characterized in that the UNET network uses a grouped convolution module to split the input feature map into groups and then convolves each group separately with a convolution module; one of the convolution modules uses a combined 1*1, 1*3 and 3*1 convolution.
  5. The cellophane defect identification method according to claim 2, characterized in that the loss function is a combination of a cross-entropy loss function and a Dice loss metric function.
  6. The cellophane defect identification method according to claim 1, characterized in that the post-processing method is:
    binarizing the output signal of the semantic segmentation network model according to a preset fixed threshold;
    then performing an image contour search and judging, in combination with contour size features, whether the image contains a defect.
  7. The cellophane defect identification method according to claim 1, characterized in that in step S1 the cellophane surface is photographed with an industrial camera to obtain the cellophane surface image.
  8. A cellophane defect identification system based on the UNET network, characterized in that it comprises:
    a collection module for collecting cellophane surface images and establishing test set data;
    a model analysis module for importing the test set data into the constructed semantic segmentation network model for semantic segmentation;
    a post-processing module for obtaining the output signal of the model analysis module and post-processing the output signal to obtain a defect recognition result.
  9. A cellophane defect identification device, characterized in that it comprises:
    a program;
    a memory for storing the program;
    a processor for loading the program to execute the cellophane defect identification method according to any one of claims 1 to 7.
  10. A storage medium storing a program, characterized in that when the program is executed by a processor, the cellophane defect identification method according to any one of claims 1 to 7 is implemented.
PCT/CN2021/095962 2021-05-14 2021-05-26 Cellophane defect recognition method, system and apparatus, and storage medium WO2022236876A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110528838.8 2021-05-14
CN202110528838.8A CN113239930B (en) 2021-05-14 2021-05-14 Glass paper defect identification method, system, device and storage medium

Publications (1)

Publication Number Publication Date
WO2022236876A1 true WO2022236876A1 (en) 2022-11-17

Family

ID=77134417

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/095962 WO2022236876A1 (en) 2021-05-14 2021-05-26 Cellophane defect recognition method, system and apparatus, and storage medium

Country Status (2)

Country Link
CN (1) CN113239930B (en)
WO (1) WO2022236876A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116152807A (en) * 2023-04-14 2023-05-23 广东工业大学 Industrial defect semantic segmentation method based on U-Net network and storage medium
CN116664846A (en) * 2023-07-31 2023-08-29 华东交通大学 Method and system for realizing 3D printing bridge deck construction quality monitoring based on semantic segmentation
CN116664586A (en) * 2023-08-02 2023-08-29 长沙韶光芯材科技有限公司 Glass defect detection method and system based on multi-mode feature fusion
CN116703834A (en) * 2023-05-22 2023-09-05 浙江大学 Method and device for judging and grading excessive sintering ignition intensity based on machine vision
CN117011300A (en) * 2023-10-07 2023-11-07 山东特检科技有限公司 Micro defect detection method combining instance segmentation and secondary classification
CN117237361A (en) * 2023-11-15 2023-12-15 苏州拓坤光电科技有限公司 Grinding control method and system based on residence time algorithm

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782387A (en) * 2022-04-29 2022-07-22 苏州威达智电子科技有限公司 Surface defect detection system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110555831A (en) * 2019-08-29 2019-12-10 天津大学 Drainage pipeline defect segmentation method based on deep learning
CN110738660A (en) * 2019-09-09 2020-01-31 五邑大学 Spine CT image segmentation method and device based on improved U-net
CN111325713A (en) * 2020-01-21 2020-06-23 浙江省北大信息技术高等研究院 Wood defect detection method, system and storage medium based on neural network
CN111612789A (en) * 2020-06-30 2020-09-01 征图新视(江苏)科技股份有限公司 Defect detection method based on improved U-net network
CN112686261A (en) * 2020-12-24 2021-04-20 广西慧云信息技术有限公司 Grape root system image segmentation method based on improved U-Net
CN112766110A (en) * 2021-01-08 2021-05-07 重庆创通联智物联网有限公司 Training method of object defect recognition model, object defect recognition method and device

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2793886B1 (en) * 1999-05-19 2001-06-22 Arjo Wiggins Sa SUBSTRATE HAVING MAGNETIC MARKING, METHOD FOR MANUFACTURING SAID SUBSTRATE, AND DEVICE USING THE SAME
DE102013109005A1 (en) * 2013-08-20 2015-02-26 Khs Gmbh Device and method for identifying codes under film
CN111507343B (en) * 2019-01-30 2021-05-18 广州市百果园信息技术有限公司 Training of semantic segmentation network and image processing method and device thereof
CN110992317B (en) * 2019-11-19 2023-09-22 佛山市南海区广工大数控装备协同创新研究院 PCB defect detection method based on semantic segmentation
CN110910368B (en) * 2019-11-20 2022-05-13 佛山市南海区广工大数控装备协同创新研究院 Injector defect detection method based on semantic segmentation
CN111127416A (en) * 2019-12-19 2020-05-08 武汉珈鹰智能科技有限公司 Computer vision-based automatic detection method for surface defects of concrete structure
CN111369550B (en) * 2020-03-11 2022-09-30 创新奇智(成都)科技有限公司 Image registration and defect detection method, model, training method, device and equipment
CN111932501A (en) * 2020-07-13 2020-11-13 太仓中科信息技术研究院 Seal ring surface defect detection method based on semantic segmentation
CN112215803B (en) * 2020-09-15 2022-07-12 昆明理工大学 Aluminum plate eddy current inspection image defect segmentation method based on improved generation countermeasure network
AU2020103901A4 (en) * 2020-12-04 2021-02-11 Chongqing Normal University Image Semantic Segmentation Method Based on Deep Full Convolutional Network and Conditional Random Field

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110555831A (en) * 2019-08-29 2019-12-10 天津大学 Drainage pipeline defect segmentation method based on deep learning
CN110738660A (en) * 2019-09-09 2020-01-31 五邑大学 Spine CT image segmentation method and device based on improved U-net
CN111325713A (en) * 2020-01-21 2020-06-23 浙江省北大信息技术高等研究院 Wood defect detection method, system and storage medium based on neural network
CN111612789A (en) * 2020-06-30 2020-09-01 征图新视(江苏)科技股份有限公司 Defect detection method based on improved U-net network
CN112686261A (en) * 2020-12-24 2021-04-20 广西慧云信息技术有限公司 Grape root system image segmentation method based on improved U-Net
CN112766110A (en) * 2021-01-08 2021-05-07 重庆创通联智物联网有限公司 Training method of object defect recognition model, object defect recognition method and device

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116152807A (en) * 2023-04-14 2023-05-23 广东工业大学 Industrial defect semantic segmentation method based on U-Net network and storage medium
CN116152807B (en) * 2023-04-14 2023-09-05 广东工业大学 Industrial defect semantic segmentation method based on U-Net network and storage medium
CN116703834A (en) * 2023-05-22 2023-09-05 浙江大学 Method and device for judging and grading excessive sintering ignition intensity based on machine vision
CN116703834B (en) * 2023-05-22 2024-01-23 浙江大学 Method and device for judging and grading excessive sintering ignition intensity based on machine vision
CN116664846A (en) * 2023-07-31 2023-08-29 华东交通大学 Method and system for realizing 3D printing bridge deck construction quality monitoring based on semantic segmentation
CN116664846B (en) * 2023-07-31 2023-10-13 华东交通大学 Method and system for realizing 3D printing bridge deck construction quality monitoring based on semantic segmentation
CN116664586A (en) * 2023-08-02 2023-08-29 长沙韶光芯材科技有限公司 Glass defect detection method and system based on multi-mode feature fusion
CN116664586B (en) * 2023-08-02 2023-10-03 长沙韶光芯材科技有限公司 Glass defect detection method and system based on multi-mode feature fusion
CN117011300A (en) * 2023-10-07 2023-11-07 山东特检科技有限公司 Micro defect detection method combining instance segmentation and secondary classification
CN117011300B (en) * 2023-10-07 2023-12-12 山东特检科技有限公司 Micro defect detection method combining instance segmentation and secondary classification
CN117237361A (en) * 2023-11-15 2023-12-15 苏州拓坤光电科技有限公司 Grinding control method and system based on residence time algorithm
CN117237361B (en) * 2023-11-15 2024-02-02 苏州拓坤光电科技有限公司 Grinding control method and system based on residence time algorithm

Also Published As

Publication number Publication date
CN113239930B (en) 2024-04-05
CN113239930A (en) 2021-08-10

Similar Documents

Publication Publication Date Title
WO2022236876A1 (en) Cellophane defect recognition method, system and apparatus, and storage medium
CN108562589B (en) Method for detecting surface defects of magnetic circuit material
CN108520274B (en) High-reflectivity surface defect detection method based on image processing and neural network classification
CN108918536B (en) Tire mold surface character defect detection method, device, equipment and storage medium
CN111815564B (en) Method and device for detecting silk ingots and silk ingot sorting system
CN113643268B (en) Industrial product defect quality inspection method and device based on deep learning and storage medium
CN112070727B (en) Metal surface defect detection method based on machine learning
CN112819748B (en) Training method and device for strip steel surface defect recognition model
CN114581782B (en) Fine defect detection method based on coarse-to-fine detection strategy
CN113222982A (en) Wafer surface defect detection method and system based on improved YOLO network
CN112132196A (en) Cigarette case defect identification method combining deep learning and image processing
CN115829995A (en) Cloth flaw detection method and system based on pixel-level multi-scale feature fusion
CN116012291A (en) Industrial part image defect detection method and system, electronic equipment and storage medium
CN115272204A (en) Bearing surface scratch detection method based on machine vision
CN113177924A (en) Industrial production line product flaw detection method
CN114331961A (en) Method for defect detection of an object
CN112884741B (en) Printing apparent defect detection method based on image similarity comparison
CN111932639B (en) Detection method of unbalanced defect sample based on convolutional neural network
CN111161228B (en) Button surface defect detection method based on transfer learning
CN117214178A (en) Intelligent identification method for appearance defects of package on packaging production line
CN110136098B (en) Cable sequence detection method based on deep learning
CN115661126A (en) Strip steel surface defect detection method based on improved YOLOv5 algorithm
CN112200762A (en) Diode glass bulb defect detection method
CN114898148B (en) Egg offset detection method and system based on deep learning
CN117078608B (en) Double-mask guide-based high-reflection leather surface defect detection method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21941445

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE