CN110689539A - A detection method for workpiece surface defects based on deep learning - Google Patents

A detection method for workpiece surface defects based on deep learning

Info

Publication number
CN110689539A
CN110689539A (application CN201910993517.8A); granted publication CN110689539B
Authority
CN
China
Prior art keywords
workpiece
deep learning
convolutional
image
workpiece surface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910993517.8A
Other languages
Chinese (zh)
Other versions
CN110689539B (en)
Inventor
王健
陈原
刘席发
高博文
吕琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN201910993517.8A
Publication of CN110689539A
Application granted
Publication of CN110689539B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention discloses a workpiece surface defect detection method based on deep learning, in which a workpiece surface defect detection system is constructed using deep learning technology. The aim is to overcome the high labor cost, low system efficiency and poor adaptability of traditional methods, enabling rapid identification of and feedback on workpiece surface defects in a production environment while ensuring the accuracy and efficiency of the system. The system captures images of the workpiece surface with an image acquisition device; after preprocessing at the capture terminal, the images are uploaded to a processing computer. The processing computer calls a predictor based on a deep neural network model to recognize the images and output a prediction vector. Finally, the processing center publishes the prediction vector to a display terminal, so that the surface defect status of the workpiece is displayed intuitively.


Description

A detection method for workpiece surface defects based on deep learning

Technical Field

The invention relates to the technical field of machine vision and detection, and in particular to a workpiece surface defect detection method based on deep learning.

Background Art

Quality control is an important part of industrial upgrading. Taking workpiece defect detection as an example, many manufacturers currently still rely mainly on manual inspection, which is not only inefficient but also raises labor costs. Following the trend of manufacturing informatization, replacing human vision with machine vision in product quality control has become a development direction of today's manufacturing industry, and some manufacturers have begun to adopt traditional machine vision solutions. The technical solutions applied so far mainly include foreground detection algorithms based on background modeling and learning algorithms based on support vector machines. The efficiency of the foreground detection algorithm based on background modeling depends on the scale of the captured image: the amount of computation grows as a power of the input scale, so increasing the resolution of the captured image often causes the system efficiency to drop sharply. Compared with the foreground detection algorithm based on background modeling, the learning algorithm based on support vector machines improves system efficiency; however, because the space consumption of a support vector machine is mainly used to store the training samples and the kernel matrix, a large matrix order (which is related to the number of input samples) consumes a large amount of storage space and machine memory. In addition, support vector machines have difficulty solving multi-classification problems.

In recent years, deep neural networks (deep learning) have entered a period of explosive development following great commercial success. They are widely used in intelligent industries and bring huge economic benefits to human society. Convolutional neural networks (CNNs), as a typical form of deep neural network, have a stronger ability to extract complex features than traditional algorithms and are widely used in machine vision, image processing and related fields. Compared with traditional machine vision algorithms, and with properly tuned parameters, they offer better adaptability and accuracy, and system efficiency is greatly improved. Compared with learning algorithms based on support vector machines, their computational and storage overhead is small, and they can be used to solve multi-classification problems. Therefore, it is necessary to develop an efficient and accurate workpiece defect detection technology based on deep learning for the quality control stage.

Summary of the Invention

Purpose of the invention: in order to solve the problems existing in the above background art, the present invention uses deep learning technology to construct a workpiece surface defect detection system that overcomes the high labor cost, low system efficiency and poor adaptability of traditional methods and can rapidly identify and give feedback on workpiece surface defects in a production environment, ensuring the accuracy and efficiency of the system. Technical scheme: in order to achieve the above purpose, the technical scheme adopted by the present invention is as follows:

A workpiece surface defect detection method based on deep learning, comprising the following specific steps:

Step 1: Use multiple groups of cameras and light sources to construct an image acquisition system based on multi-view vision, and capture images of the workpiece at different angles;

Step 2: Construct a distributed image processing system. A pipeline monitoring mechanism is adopted: the processing computer maintains a specific message pipeline, and the capture terminals listen on that pipeline. At each image acquisition the capture terminals preprocess the collected images in batches and then upload them to a database; using a persistence mechanism, the processing computer can save the preprocessed images locally as the input of the classifier;

Step 3: Collect training samples. Put the prepared, labeled workpieces containing partial defects into the acquisition system and collect training data; set up training and validation sets in a ratio of 8:2 and set the labels;

Step 4: Build a deep learning model. A network model containing convolutional layers and fully connected layers is built on the basis of a convolutional neural network, with pooling, regularization and normalization modules combined between the layers to optimize feature extraction and improve nonlinearity;

Step 5: Build the predictor program framework. The detection system framework is built with the TensorFlow API, and the program logic is organized using the factory pattern so that the system software is easy to upgrade, extend and optimize;

Step 6: Place the workpiece to be inspected in the image acquisition system and run the predictor program. During online operation, the predictor program interface receives the preprocessed workpiece capture images as input and then calls the trained static model to output a prediction vector;

Step 7: The predictor publishes the prediction vector message to the display terminal through a preset channel, and the display terminal automatically marks the location of the workpiece defect according to the prediction vector.

Further, the batch preprocessing of the collected images in step 2 includes converting the color space of the captured images: the capture terminal converts the color space of the input image from the RGB format to the YUV format.
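For illustration, this conversion can be written in a few lines with OpenCV. This is only a minimal sketch (the file name is a placeholder); note that OpenCV loads frames in BGR channel order, so the equivalent call is COLOR_BGR2YUV.

import cv2

def to_yuv(frame):
    # OpenCV loads images in BGR order; the patent describes the conversion
    # as RGB to YUV, which on such frames corresponds to COLOR_BGR2YUV.
    return cv2.cvtColor(frame, cv2.COLOR_BGR2YUV)

yuv_frame = to_yuv(cv2.imread("capture_0001.png"))  # illustrative file name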

Further, the network model construction in step 4 is specifically as follows: the data set is stored in the directory specified by the training script, the training script is run, the initial learning rate is set to 0.001, and the Adam optimizer is used; the specific parameters of the training model, in order, are:

(1) Convolutional layer 1: 64 convolution kernels in total, kernel size 11×11, stride 4, padding mode SAME; the activation function is ReLU; 2×2 pooling is applied and local response normalization is performed;

(2) Convolutional layer 2: 256 convolution kernels in total, kernel size 5×5, padding mode SAME, activation function ReLU; 2×2 pooling is applied and local response normalization is performed;

(3) Convolutional layer 3: 256 convolution kernels in total, kernel size 3×3, padding mode SAME, activation function ReLU; 2×2 pooling is applied and local response normalization is performed;

(4) Fully connected layer 1, mapped to 4096 dimensions;

(5) Fully connected layer 2, mapped to N dimensions, where N represents the number of labels; the loss function is softmax;

After parameter adjustment, the optimized dynamic model is finally obtained; after running the conversion script, a static model in .pb format is obtained.
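The training script itself is not published in the patent; the following TensorFlow 1.x-style sketch is only one possible reconstruction of the layer stack listed above, assuming a 224×224×3 input (the input size stated for the predictor) and N output labels. All names ("input", "logits", fc1/fc2) are illustrative assumptions.

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

def conv_block(x, filters, ksize, stride=1, name="conv"):
    # Convolution + ReLU + 2x2 max pooling + local response normalization.
    x = tf.layers.conv2d(x, filters, ksize, strides=stride,
                         padding="SAME", activation=tf.nn.relu, name=name)
    x = tf.layers.max_pooling2d(x, pool_size=2, strides=2)
    return tf.nn.local_response_normalization(x)

def build_model(images, num_labels):
    net = conv_block(images, 64, 11, stride=4, name="conv1")   # layer (1)
    net = conv_block(net, 256, 5, name="conv2")                # layer (2)
    net = conv_block(net, 256, 3, name="conv3")                # layer (3)
    net = tf.layers.flatten(net)
    net = tf.layers.dense(net, 4096, activation=tf.nn.relu, name="fc1")  # layer (4)
    return tf.layers.dense(net, num_labels, name="fc2")        # layer (5)

images = tf.placeholder(tf.float32, [None, 224, 224, 3], name="input")
labels = tf.placeholder(tf.int64, [None])
# Fixed output node name so the graph can later be frozen to a .pb file.
logits = tf.identity(build_model(images, num_labels=2), name="logits")
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)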

Further, the multiple groups of cameras in step 1 are preferably digital cameras.

Further, the light source in step 1 is preferably an LED light source.

Beneficial effects: aiming at the problems of high human resource consumption and low efficiency in traditional workpiece surface defect detection, the present invention proposes a workpiece surface defect detection method based on deep learning. The image acquisition system based on multi-view vision ensures inference accuracy, while the distributed image processing system greatly improves system efficiency and robustness; expanding the training set improves the generalization ability of the predictor network model and ensures system accuracy. Labor costs are therefore greatly reduced, and system efficiency is improved while accuracy is maintained.

Brief Description of the Drawings

Fig. 1 is the main flow chart of the deep learning-based workpiece surface defect detection method of the present invention;

Fig. 2 is a schematic diagram of the spatial orientation of the image acquisition system based on multi-view vision;

Fig. 3 is a schematic flow chart of the predictor software;

Fig. 4 is a schematic diagram of the clustered connection of the capture terminals;

Fig. 5 is a diagram of the predictor network model;

Fig. 6 is a structural diagram of the deep learning-based workpiece surface defect detection system of the present invention;

Fig. 7 is a diagram of the specific implementation steps of the deep learning-based workpiece surface defect detection system.

Detailed Description of Embodiments

The present invention will be further described below with reference to the accompanying drawings.

A specific embodiment is provided below:

The deep learning-based workpiece surface defect detection method of the present invention is divided into three modules: an image acquisition and preprocessing module, a predictor module, and a communication and data persistence module.

Image acquisition and preprocessing module:

(1) Image acquisition module:

The image acquisition system is designed as a cubic container which, viewed spatially, has six faces: top, bottom, left, right, front and rear. Since the bottom face is used to hold the workpiece, only five faces actually need to be inspected. According to the faces to be inspected, four viewing angles can be distinguished: the side view (front, rear, left and right faces), the side-edge view (front-left, front-right, left-rear and right-rear edges), the upper-side view (top face) and the upper-edge view (upper-front, upper-rear, upper-left and upper-right edges). Because the field of view is limited, the side and side-edge views each need to be split into upper and lower capture points in the spatial layout, so a total of (4+4)×2+4+1=21 cameras are required. In this embodiment, Raspberry Pis equipped with cameras are used as the capture terminals.

As shown in Fig. 2, the numbers 1, 4 and 5 denote the currently visible faces and the numbers 2, 3 and 6 the currently invisible faces. The meaning of each number is: 1 front, 2 rear, 3 left, 4 right, 5 top, 6 bottom. Each edge is uniquely determined by the two faces that intersect to form it and is therefore labelled with two digits: front-left edge 1-3, front-right edge 1-4, left-rear edge 3-2, right-rear edge 4-2, upper-front edge 5-1, upper-rear edge 5-2, upper-left edge 5-3 and upper-right edge 5-4.

(2) Light source selection and illumination control

The light source is an important prerequisite for image acquisition. The captured image is the input of the defect detection system, and its quality largely determines the accuracy of the prediction result. A suitable light source can not only magnify defect details but can even mask out interfering features, thereby reducing the noise of the inspected samples.

Light sources can be divided into natural and artificial light. Since the image acquisition system is a closed structure, an artificial light source is chosen for this system. Given that LEDs have a long service life and low power consumption, LEDs are selected as the illumination generators in this embodiment.

(3) Camera selection

Cameras fall into two main categories, analog and digital. Analog cameras are simpler, but because a computer can only process digital signals, their imaging results must pass through A/D conversion before being input to the computer. A digital camera directly acquires digital signals and can be connected to the computer terminal without a conversion circuit. Digital cameras are mostly used in short-distance, low-interference environments. Considering the small spatial scale of the image acquisition system, and for ease of system design, digital cameras are used.

Predictor module:

(1) Predictor hardware design

The predictor mainly consists of a display module and a data processing host. The operating system of the processing host is Linux, and the hardware platform is equipped with a high-performance GPU and CPU.

(2) Predictor software design

As shown in Fig. 3, the predictor software consists of three parts: an input module, a predictor module and an output module. The operating mechanism and function of each module are as follows:

Input module: provides a unified interface for inputting detection samples. The CVMat_to_Tensor API provided with CV2 is used to uniformly compress the preprocessed original images to the input layer size of the model (224×224) and convert them to tensors as the input of the model.

Predictor module: loads the trained and optimized static model (.pb format), sets the input-layer and output-layer node names to match the corresponding node names in the static network model, and calls the corresponding API to run the network.

Output module: after the prediction module finishes, a queue of tensors (prediction vectors) is produced, and the argmax method is used to extract the largest prediction component of each vector as the prediction result. The system adopts binary classification, i.e. the prediction vector has two dimensions: [1,0] indicates that the workpiece surface has a defect, and [0,1] indicates that the workpiece surface is intact. Table 1 lists several important functions related to the predictor:

Table 1 (reproduced as an image in the original publication)
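As a sketch of how the input, predictor and output modules fit together, the code below loads a frozen .pb graph, feeds one 224×224 image tensor and takes the argmax of the two-dimensional prediction vector. The node names "input" and "logits", the model path and the file name are assumptions and would have to match the names used when the static model was exported.

import cv2
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

def load_frozen_graph(pb_path):
    # Read the serialized GraphDef and import it into a fresh graph.
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    graph = tf.Graph()
    with graph.as_default():
        tf.import_graph_def(graph_def, name="")
    return graph

graph = load_frozen_graph("defect_model.pb")              # assumed model path
image = cv2.imread("block_0001.png")                      # one preprocessed image block
tensor = cv2.resize(image, (224, 224)).astype(np.float32)[np.newaxis, ...]

with tf.Session(graph=graph) as sess:
    logits = sess.run("logits:0", feed_dict={"input:0": tensor})
prediction = np.argmax(logits, axis=1)  # index 0 -> defect present ([1,0]), index 1 -> intact ([0,1])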

Communication and data persistence module:

(1) Communication system construction

As shown in Fig. 4, the basic communication mode used in this method is implemented as a bridge mode: several network ports expanded on one terminal are connected to each other. One Raspberry Pi acts as a "cluster head" and is connected to the other four Raspberry Pis to form a "cluster structure", and the cluster head is then connected to the switch.

(2) Data persistence

In this embodiment, a Redis database is used to implement data persistence. Redis is a distributed key-value database with high performance, rich data types and support for atomic operations, which ensures the efficiency of the system.

The system mainly maintains two pipelines, Channel@1 and Channel@2. Their functions are as follows:

Channel@1 serves as the message channel between the host computer and the image acquisition terminals: the host computer periodically issues an "image acquisition" command, and every acquisition terminal listening on this pipeline calls its capture script to capture an image of the workpiece; after the preprocessing step, the image is stored in the database as a binary string. After acquisition is complete, persistence is performed and the host computer obtains the image blocks captured from every angle of the current workpiece as the raw input of the predictor.

Channel@2 serves as the message channel between the host computer and the display terminal: when the predictor outputs a result, the data processing host publishes the prediction vector to the display terminal through this channel. After receiving the prediction vector message, the display terminal calls a parser to parse the prediction vector and feeds back the prediction result in the form of an image.
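A minimal Redis publish/subscribe sketch of these two channels is shown below; the channel names follow the text, while the host address and message payloads are illustrative assumptions.

import redis

r = redis.Redis(host="192.168.1.10", port=6379)    # processing computer, assumed address

# Host side: trigger an acquisition round and, later, publish a prediction result.
r.publish("Channel@1", "image acquisition")
r.publish("Channel@2", "[0,1]")                     # example prediction vector message

# Capture / display terminal side: listen on the relevant channel.
pubsub = r.pubsub()
pubsub.subscribe("Channel@1")
for message in pubsub.listen():
    if message["type"] == "message":
        # On a capture terminal this is where the capture script would be invoked.
        print("received:", message["data"])
        break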

Detailed embodiments of the present invention are given below with reference to Figs. 1-7.

Step 1: A Raspberry Pi is used as the platform of each capture terminal, with an external camera as the image capturer. A total of 21 digital cameras are arranged at the side, side-edge, upper-side and upper-edge angles of the inner surface of the image acquisition container. Considering the scale of the capture surfaces, line light sources are constructed and evenly distributed on the inner surface of the acquisition container.

Step 2: Start the Redis service on the processing computer and set the listening network segment and port number (port 6379 by default). After the Redis service is started, the processing computer runs a command publishing process in which the two message pipelines Channel@1 and Channel@2 are set up. Then, remote connections are made to the capture terminals and the display terminal in batches and their listening processes are started: the capture terminals listen on Channel@1 and the display terminal listens on Channel@2. Training image acquisition then begins. Whenever a capture terminal acquires a frame of the original image (2952×1944), a script is called to perform background filling and segmentation. After segmented blocks of size 800×800 are obtained, the RGB image is converted to the YUV format and stored in the database as a binary string.
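The per-frame preprocessing on a capture terminal could look roughly as follows, using the sizes given above (2952×1944 frames, 800×800 blocks). The padding value, key naming scheme and file name are assumptions rather than details taken from the patent.

import cv2
import redis

BLOCK = 800
r = redis.Redis(host="192.168.1.10", port=6379)      # processing computer, assumed address

def pad_and_split(frame):
    # Pad the frame so both dimensions are multiples of BLOCK, then cut it into tiles.
    h, w = frame.shape[:2]
    pad_h = (BLOCK - h % BLOCK) % BLOCK
    pad_w = (BLOCK - w % BLOCK) % BLOCK
    frame = cv2.copyMakeBorder(frame, 0, pad_h, 0, pad_w,
                               cv2.BORDER_CONSTANT, value=0)  # background fill
    blocks = []
    for y in range(0, frame.shape[0], BLOCK):
        for x in range(0, frame.shape[1], BLOCK):
            blocks.append(frame[y:y + BLOCK, x:x + BLOCK])
    return blocks

frame = cv2.imread("raw_capture.png")                 # 2952x1944 capture, assumed file name
for i, block in enumerate(pad_and_split(frame)):
    yuv = cv2.cvtColor(block, cv2.COLOR_BGR2YUV)
    ok, buf = cv2.imencode(".png", yuv)               # serialize the block to a binary string
    r.set(f"camera01:block:{i}", buf.tobytes())       # assumed key scheme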

Step 3: The processing computer performs a persistence operation to obtain the images acquired by the capture terminals. Manual annotation is performed according to the defects visible in the acquired workpiece image blocks. After 100 workpieces, totalling 5,300 image blocks, have been collected, image enhancement and deformation are performed in batches to expand the training set to about 20,000 samples.
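The patent does not state which enhancement and deformation operations were used; the following sketch shows typical transforms (flips, a rotation and a brightness/contrast shift) of the kind that could expand roughly 5,300 labelled blocks toward 20,000 samples.

import cv2

def augment(block):
    # Return several transformed copies of one labelled image block.
    return [
        cv2.flip(block, 1),                               # horizontal flip
        cv2.flip(block, 0),                               # vertical flip
        cv2.rotate(block, cv2.ROTATE_90_CLOCKWISE),       # 90-degree rotation
        cv2.convertScaleAbs(block, alpha=1.2, beta=10),   # brightness/contrast shift
    ]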

Step 4: Store the data set in the directory specified by the training script, run the training script, set the initial learning rate to 0.001 and use the Adam optimizer; the specific parameters of the training model, in order, are:

(1) Convolutional layer 1: 64 convolution kernels in total, kernel size 11×11, stride 4, padding mode SAME; the activation function is ReLU; 2×2 pooling is applied and local response normalization is performed;

(2) Convolutional layer 2: 256 convolution kernels in total, kernel size 5×5, padding mode SAME, activation function ReLU; 2×2 pooling is applied and local response normalization is performed;

(3) Convolutional layer 3: 256 convolution kernels in total, kernel size 3×3, padding mode SAME, activation function ReLU; 2×2 pooling is applied and local response normalization is performed;

(4) Fully connected layer 1, mapped to 4096 dimensions;

(5) Fully connected layer 2, mapped to N dimensions, where N represents the number of labels; the loss function is softmax;

After parameter adjustment, the optimized dynamic model is finally obtained; after running the conversion script, the static model in .pb format is obtained, as shown in Fig. 5.
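The conversion script itself is not reproduced in the patent; one common way to obtain a static .pb model from a trained TensorFlow 1.x checkpoint is the variable-freezing utility sketched below (checkpoint paths and the output node name are assumptions).

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

with tf.Session() as sess:
    # Restore the trained (dynamic) model from its checkpoint files.
    saver = tf.train.import_meta_graph("train_dir/model.ckpt.meta")   # assumed checkpoint path
    saver.restore(sess, "train_dir/model.ckpt")
    # Convert variables to constants so the graph can be saved as a single .pb file.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names=["logits"])           # assumed output node
    with tf.gfile.GFile("defect_model.pb", "wb") as f:
        f.write(frozen.SerializeToString())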

Step 5: Modify the configuration file of the predictor program to specify the path of the input images and the storage path of the static model. After the settings are complete, run the predictor program and load the static model.

Step 6: After the predictor program successfully loads the static model, it enters prediction mode. The workpiece to be inspected is placed in the acquisition container, and an acquisition command is issued to the capture terminals from the processing computer. After receiving the acquisition command, the capture terminals automatically capture images of the workpiece. The preprocessed image blocks are stored in the database, while the processing computer performs persistence intermittently.

Step 7: Once the predictor reads the image block input, it uses a library function to convert the image data into a tensor as the input of the network model and then runs the network. After the network outputs the prediction vector, the processing computer passes this information to the message publishing process running in the background, which publishes the prediction vector to the display terminal via Channel@2. After receiving the prediction vector, the display terminal graphically feeds back the defect status of the workpiece to be inspected.

The above is only a preferred embodiment of the present invention. It should be pointed out that those of ordinary skill in the art can make several improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (5)

1. A workpiece surface defect detection method based on deep learning is characterized in that: the method comprises the following steps:
step 1, constructing an image acquisition system based on multi-view vision by adopting a plurality of groups of cameras and illumination sources, and capturing images of a workpiece at different angles;
step 2, constructing a distributed image processing system; a pipeline monitoring mechanism is adopted, a processing computer maintains a specific message pipeline, and a capture terminal monitors the pipeline; during each image acquisition, the capture terminal performs batch preprocessing on the collected images and then uploads the preprocessed images to a database; by utilizing a persistence mechanism, the processing computer can store the preprocessed pictures locally as the input of the classifier;
step 3, collecting training samples; putting the prepared and marked workpiece containing partial defects into an acquisition system for training data acquisition; setting a training set and a verification set in a ratio of 8:2, and setting labels;
step 4, constructing a deep learning model; constructing a network model comprising convolutional layers and full-connection layers based on a convolutional neural network, wherein pooling, regularization and normalization modules are compounded among the layers and used for optimizing feature extraction and improving nonlinearity;
step 5, constructing a predictor program framework; constructing a detection system framework by adopting a Tensorflow API; program logic is constructed based on a factory mode, so that system software is convenient to upgrade, expand and optimize;
step 6, placing the workpiece to be detected in an image acquisition system, and operating a predictor program; when the online operation is carried out, the predictor program interface receives the preprocessed workpiece capture image as input; then, calling the trained static model to output a prediction vector;
step 7, the predictor distributes the prediction vector information to the display terminal through a preset channel; and the display terminal automatically identifies the position of the workpiece defect according to the prediction vector.
2. The workpiece surface defect detection method based on deep learning of claim 1, wherein: in the step 2, the collected images are subjected to batch preprocessing, including converting the color space of the captured pictures; the capture terminal converts the color space of the input image from the RGB system to the YUV system.
3. The workpiece surface defect detection method based on deep learning of claim 1, wherein: the network model construction step in the step 4 is specifically as follows: storing the data set in a specified directory of a training script, operating the training script, setting the initial learning rate to be 0.001, and adopting an Adam optimizer; the specific parameters of the training model in sequence are respectively as follows:
(1) convolutional layer 1, with a total of 64 convolutional kernels, the size of the convolutional kernels is 11 × 11, the step size is set to 4, and the fill mode is set to SAME; the activation function is set to ReLU; compounding 2 × 2 pooling and performing local response normalization;
(2) convolutional layer 2, with a total of 256 convolutional kernels, with the convolutional kernel size 5 × 5, the fill mode set to SAME, the activation function set to ReLU, composite 2 × 2 pooling and local response normalization performed;
(3) convolutional layer 3, totaling 256 convolutional kernels, with the convolutional kernel size of 3 × 3, the fill mode set to SAME, the activation function set to ReLU, composite 2 × 2 pooling and local response normalization performed;
(4) fully connected layer 1, mapped as 4096 dimensions;
(5) a fully connected layer 2, mapped to N dimensions, where N represents the number of tags; the loss function is softmax;
finally obtaining an optimized dynamic model through parameter adjustment; and after the conversion script is operated, obtaining the static model in the .pb format.
4. The workpiece surface defect detection method based on deep learning of claim 1, wherein: the multiple groups of cameras in the step 1 are preferably digital cameras.
5. The workpiece surface defect detection method based on deep learning of claim 1, wherein: the illumination source in the step 1 is preferably an LED light source.
CN201910993517.8A 2019-11-12 2019-11-12 Workpiece surface defect detection method based on deep learning Active CN110689539B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910993517.8A CN110689539B (en) 2019-11-12 2019-11-12 Workpiece surface defect detection method based on deep learning


Publications (2)

Publication Number Publication Date
CN110689539A true CN110689539A (en) 2020-01-14
CN110689539B CN110689539B (en) 2023-04-07

Family

ID=69113507

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910993517.8A Active CN110689539B (en) 2019-11-12 2019-11-12 Workpiece surface defect detection method based on deep learning

Country Status (1)

Country Link
CN (1) CN110689539B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004191112A (en) * 2002-12-10 2004-07-08 Ricoh Co Ltd Defect examining method
CN107392896A (en) * 2017-07-14 2017-11-24 佛山市南海区广工大数控装备协同创新研究院 A kind of Wood Defects Testing method and system based on deep learning
CN109829907A (en) * 2019-01-31 2019-05-31 浙江工业大学 A kind of metal shaft surface defect recognition method based on deep learning

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111830048A (en) * 2020-07-17 2020-10-27 苏州凌创电子系统有限公司 Automobile fuel spray nozzle defect detection equipment based on deep learning and detection method thereof
CN111858361B (en) * 2020-07-23 2023-07-21 中国人民解放军国防科技大学 An Atomicity Violation Defect Detection Method Based on Prediction and Parallel Verification Strategy
CN111858361A (en) * 2020-07-23 2020-10-30 中国人民解放军国防科技大学 An Atomicity Violation Defect Detection Method Based on Prediction and Parallel Verification Strategy
CN111951234A (en) * 2020-07-27 2020-11-17 上海微亿智造科技有限公司 Model detection method
CN111951234B (en) * 2020-07-27 2021-07-30 上海微亿智造科技有限公司 Model detection method
CN112017172A (en) * 2020-08-31 2020-12-01 佛山科学技术学院 System and method for detecting defects of deep learning product based on raspberry group
CN113486457A (en) * 2021-06-04 2021-10-08 宁波海天金属成型设备有限公司 Die casting defect prediction and diagnosis system
CN115382685A (en) * 2022-08-16 2022-11-25 苏州智涂工业科技有限公司 Control technology of automatic robot spraying production line
CN115496763A (en) * 2022-11-21 2022-12-20 湖南视比特机器人有限公司 Workpiece wrong and neglected loading detection system and method based on multi-view vision
CN117011263A (en) * 2023-08-03 2023-11-07 东方空间技术(山东)有限公司 Defect detection method and device for rocket sublevel recovery section
CN117011263B (en) * 2023-08-03 2024-05-10 东方空间技术(山东)有限公司 A defect detection method and device for rocket sub-stage recovery section
CN117250200A (en) * 2023-11-07 2023-12-19 山东恒业金属制品有限公司 Square pipe production quality detection system based on machine vision
CN117250200B (en) * 2023-11-07 2024-02-02 山东恒业金属制品有限公司 Square pipe production quality detection system based on machine vision

Also Published As

Publication number Publication date
CN110689539B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN110689539B (en) Workpiece surface defect detection method based on deep learning
CN115829999A (en) Insulator defect detection model generation method, device, equipment and storage medium
CN111242844B (en) Image processing method, device, server and storage medium
CN110852182A (en) A deep video human behavior recognition method based on 3D spatial time series modeling
CN113076819A (en) Fruit identification method and device under homochromatic background and fruit picking robot
CN113076992A (en) Household garbage detection method and device
WO2022017197A1 (en) Intelligent product quality inspection method and apparatus
CN111767826B (en) Anomaly detection method for fixed-point scenes
CN114565942A (en) Live pig face detection method based on compressed YOLOv5
CN117372410A (en) Surface image recognition method and system for aluminum alloy castings
CN117011614A (en) Wild ginseng reed body detection and quality grade classification method and system based on deep learning
JP2022013579A (en) Method and apparatus for processing image, electronic device, and storage medium
CN114972246A (en) A deep learning-based surface defect detection method for die-cutting products
CN111507325A (en) Industrial visual OCR recognition system and method based on deep learning
CN114332659A (en) Power transmission line defect inspection method and device based on lightweight model issuing
CN118379296B (en) A circular bushing defect detection method and system based on visual neural network
CN118644469A (en) A method for concrete surface defect detection and management based on deep learning and augmented reality
CN110717960A (en) A method for generating remote sensing image samples of construction waste
CN116091472A (en) An Intelligent Detection Method for Photovoltaic Module Defects by Fusion of Visible Light and Infrared Images
CN113240589A (en) Image defogging method and system based on multi-scale feature fusion
CN114037646A (en) IoT-based intelligent image detection method, system, readable medium, and device
CN112712124B (en) Multi-module cooperative object recognition system and method based on deep learning
CN117711016B (en) Gesture recognition method and system based on terminal equipment
CN109060831A (en) A kind of automatic dirty detection method based on bottom plate fitting
CN118504645B (en) Multi-mode large model training method, robot motion prediction method and processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant