CN110136098A - A deep learning-based cable sequence detection method - Google Patents

A deep learning-based cable sequence detection method

Info

Publication number
CN110136098A
Authority
CN
China
Prior art keywords
cable
deep learning
detection method
order
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910299722.4A
Other languages
Chinese (zh)
Other versions
CN110136098B (en)
Inventor
汪钰人
刘国海
沈继锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University
Original Assignee
Jiangsu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University filed Critical Jiangsu University
Priority to CN201910299722.4A priority Critical patent/CN110136098B/en
Publication of CN110136098A publication Critical patent/CN110136098A/en
Application granted granted Critical
Publication of CN110136098B publication Critical patent/CN110136098B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep learning-based cable sequence detection method comprising the following steps. Step 1: first convert the cable image to grayscale as the basis for subsequent sequence detection. Step 2: perform a difference operation on the gray values of adjacent pixels in a given row of the image, select a threshold, compare it with the difference results, and output a localization result map. Step 3: in view of the characteristics of three-color cable images, improve Faster R-CNN with a simplified feature extraction network, optimized anchor boxes, and the ELU activation function. Step 4: use the improved Faster R-CNN object detection algorithm for cable sequence detection, and record the accuracy and detection time. The method extracts the features of three-color cables more fully, is convenient to operate and highly efficient, and saves considerable manual labor; the improved algorithm also reduces cable detection time, reduces missed and false detections, and greatly improves the accuracy of cable detection.

Description

A deep learning-based cable sequence detection method

Technical Field

The present invention relates to the field of image processing, and in particular to a deep learning-based cable sequence detection method.

Background Art

Existing cable sequence inspection relies heavily on manual checking. This consumes substantial human resources, reduces inspection efficiency, and is costly; moreover, high-intensity manual inspection is prone to missed and false detections and cannot meet the demands of modern automated industrial production. Existing cable detection algorithms lack generality, their detection results are unsatisfactory, and their practicality is limited, which affects product quality and production efficiency.

Summary of the Invention

In view of the deficiencies of the background art described above, the present invention provides a fast and accurate deep learning-based cable sequence detection method to solve the existing technical problems.

The technical solution of the present invention is as follows: a deep learning-based cable sequence detection method comprising the following steps.

Step 1: first convert the three-color cable image to grayscale as the basis for subsequent sequence detection; Step 2: perform a difference operation on the gray values of adjacent pixels in a given row of the image, select a threshold, compare it with the difference results, and output a localization result map; Step 3: in view of the characteristics of three-color cable images, improve Faster R-CNN with a simplified feature extraction network, optimized anchor boxes, and the ELU activation function; Step 4: apply the improved Faster R-CNN algorithm to cable sequence detection, and record the accuracy and detection time.

Further, in step 2, in the grayscale image of the three-color cable region, the gray values of one row of pixels are taken, and a difference operation is performed on the gray values of adjacent pixels in that row.

Further, in step 2, the threshold is set to 5, the coordinates of pixels whose difference values exceed the threshold of 5 are collected, and finally these coordinates are output together with a localization result map of the three-color cable region.
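
A minimal sketch of this localization step, assuming the image is loaded with OpenCV and processed with NumPy; the function name, file names, and the choice of row are illustrative and not taken from the patent.

```python
import cv2
import numpy as np

def locate_cable_region(image_bgr, row_index, threshold=5):
    """Locate candidate cable boundaries in one image row by differencing
    adjacent gray values and keeping columns whose absolute difference
    exceeds the threshold (the patent uses a threshold of 5)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)   # step 1: grayscale conversion
    row = gray[row_index].astype(np.int16)               # one row of gray values
    diffs = np.abs(np.diff(row))                         # step 2: difference of adjacent gray values
    return np.where(diffs > threshold)[0]                # columns above the threshold

# Illustrative usage: mark the detected columns on the original image.
img = cv2.imread("cable.jpg")                            # hypothetical file name
row = img.shape[0] // 2
for c in locate_cable_region(img, row_index=row):
    cv2.circle(img, (int(c), row), 2, (0, 0, 255), -1)
cv2.imwrite("cable_localization.jpg", img)
```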

Further, the simplified feature extraction network of step 3 uses a simplified shared convolutional network to extract features, in which one convolutional layer is removed from each of convolutional blocks 3 and 4 of the shared convolutional network.
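
A sketch of one way to build such a simplified shared convolutional network in PyTorch, assuming a VGG-16-style layout (two convolutions in blocks 1-2, three in blocks 3-5) with one convolution dropped from blocks 3 and 4 as described; the channel widths are assumptions, not values given in the patent.

```python
import torch.nn as nn

# Convolutions per block: the standard VGG-16-style layout is [2, 2, 3, 3, 3];
# removing one convolution from blocks 3 and 4 gives [2, 2, 2, 2, 3].
SIMPLIFIED_BLOCKS = [(64, 2), (128, 2), (256, 2), (512, 2), (512, 3)]

def make_shared_conv(blocks=SIMPLIFIED_BLOCKS, in_channels=3):
    layers = []
    for out_channels, num_convs in blocks:
        for _ in range(num_convs):
            layers += [nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
            in_channels = out_channels
        layers.append(nn.MaxPool2d(kernel_size=2, stride=2))  # pooling after each block
    return nn.Sequential(*layers)

shared_conv = make_shared_conv()
```

The feature map produced by this shared network would then feed both the RPN and the Fast R-CNN head, as in Figure 1.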

Further, in step 3 the anchor boxes are optimized so that they enclose the cable region as completely as possible; the optimized anchor boxes use aspect ratios of 1:1.5, 1:1, and 1:2 to extract features of the target region.
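
A small NumPy sketch of how anchor widths and heights could be derived from the three anchor areas used by Faster R-CNN and the optimized aspect ratios; interpreting 1:1.5, 1:1, and 1:2 as width:height is an assumption.

```python
import numpy as np

areas = [128 * 128, 256 * 256, 512 * 512]   # anchor areas used by Faster R-CNN
ratios = [1 / 1.5, 1 / 1.0, 1 / 2.0]        # width / height for 1:1.5, 1:1, 1:2

def anchor_shapes(areas, ratios):
    """Return (width, height) pairs with the requested area and aspect ratio."""
    shapes = []
    for area in areas:
        for r in ratios:
            w = np.sqrt(area * r)           # solve w/h = r and w*h = area
            h = area / w
            shapes.append((round(w), round(h)))
    return shapes

for w, h in anchor_shapes(areas, ratios):
    print(f"anchor {w} x {h}")
```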

Further, in step 3, the ELU function replaces the ReLU function in the Fast R-CNN network.
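
A sketch of the activation swap in PyTorch, using a generic fully connected detection-head fragment rather than the patent's exact network; the layer sizes are placeholders.

```python
import torch.nn as nn

# Fully connected head of the detection network with ELU in place of ReLU.
# ELU(x) = x for x > 0 and alpha * (exp(x) - 1) otherwise.
head = nn.Sequential(
    nn.Linear(7 * 7 * 512, 4096),   # placeholder size for the pooled RoI feature vector
    nn.ELU(alpha=1.0),              # replaces nn.ReLU()
    nn.Linear(4096, 4096),
    nn.ELU(alpha=1.0),
)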

Further, there are 4000 cable images in total, divided into two image sets: 2800 images for training and 1200 images for testing.
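
A sketch of the 2800/1200 split, assuming all images sit in one directory; the directory name, file extension, and fixed random seed are assumptions.

```python
import random
from pathlib import Path

random.seed(0)                                        # assumed seed for a reproducible split
images = sorted(Path("cable_images").glob("*.jpg"))   # hypothetical directory with 4000 images
random.shuffle(images)

train_set, test_set = images[:2800], images[2800:]    # 2800 training / 1200 test images
print(len(train_set), len(test_set))
```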

1. First, convert the three-color cable image to a grayscale image;

2. Take the gray values of one row of the image and perform a difference operation on the gray values of adjacent pixels;

3. Obtain a suitable threshold m through experiments, compare the results of step 2 with the threshold, output the coordinates that exceed the threshold, and output a cable localization result map;

4. Propose an improved Faster R-CNN object detection algorithm, with anchor boxes better suited to the characteristics of cable images and a simplified feature extraction network;

5. Use the Faster R-CNN object detection algorithm to detect the cable sequence; finally, the mAP metric is used to evaluate the model.

The present invention has the following beneficial effects: the method uses an improved Faster R-CNN object detection algorithm for cable sequence detection, first locating the cable region of interest and then improving the Faster R-CNN algorithm with a simplified convolutional feature extraction network, optimized anchor boxes, and the ELU activation function. The method extracts the features of three-color cables more fully, is convenient to operate and highly efficient, and saves considerable manual labor.

The invention removes the conv3_4 and conv4_4 layers from conv3 and conv4 of the shared convolutional network, and the optimized anchor boxes use aspect ratios of 1:1.5, 1:1, and 1:2 to extract features of the target region; with the threshold set to m = 5, the three-color cable region is located reliably. The advantage of these improvements is that they reduce cable detection time and greatly increase detection speed, while also effectively reducing missed and false detections and greatly improving the accuracy of cable detection.

As a further effect, the convolutional and pooling layers of the shared convolutional network are modified by removing the conv3_4 and conv4_4 layers from conv3 and conv4, which also improves the efficiency of the model. The ELU activation function replaces the original ReLU activation function; the left side of its curve is softly saturating, which helps improve robustness to noise, and the properties of the ELU function push the mean output close to zero, accelerating convergence.

As a further effect, whereas the original Faster R-CNN uses anchor boxes with aspect ratios of 2:1, 1:1, and 1:2, the optimized anchor boxes use aspect ratios of 1:1.5, 1:1, and 1:2 to extract features of the target region; anchor boxes designed in this way also extract image features more fully.

Brief Description of the Drawings

Figure 1 is the network structure diagram of the Faster R-CNN algorithm;

Figure 2 is a plot of the ReLU function;

Figure 3 shows the shared convolutional layers;

Figure 4 is an image of cables in the correct order;

Figure 5 is an image of cables in the wrong order;

Figure 6 is the grayscaled image;

Figure 7 is the image after the cable region has been located;

Figure 8 is a schematic diagram of the proposed optimized anchor boxes;

Figure 9 is a plot of the ELU function;

Figure 10 shows the detection results of the improved Faster R-CNN algorithm.

Detailed Description of the Embodiments

The network structure of the Faster R-CNN algorithm, as it relates to the present invention, is briefly introduced below.

As shown in Figure 1, the Faster R-CNN algorithm consists mainly of two modules. The original image passes through the shared convolutional layers and then enters two network modules. The first is the Fast R-CNN network module, in which a feature layer, an ROI layer, and fully connected layers are connected in sequence, and the processed result is finally passed to classification or regression. The second is the region proposal network (RPN) module, in which a feature map and a low-dimensional vector are connected in sequence, and the processed result is likewise passed to classification or regression. The main purpose of the Fast R-CNN module is to detect the candidate regions generated by the region proposal network and identify the target categories within them; the main purpose of the RPN module is to generate the candidate regions. The region proposal network is attached to the convolutional layer following the shared convolutional layers, and a sliding-window network is applied to the last convolutional layer of the RPN. The position of each sliding window corresponds to a location in the original image, and the corresponding regions in the original image are the target region proposals, which are called anchor boxes. The anchor boxes have three areas {128*128, 256*256, 512*512} and three aspect ratios {1:1, 1:2, 2:1}.
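
For reference, this structure can be sketched with torchvision's Faster R-CNN components. This is not the patent's implementation; the MobileNetV2 backbone and the num_classes value are stand-ins used only to make the sketch runnable, but it shows how the anchor areas {128, 256, 512} and ratios {1:1, 1:2, 2:1} plug into the RPN.

```python
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# Any feature extractor exposing `out_channels` can serve as the shared backbone.
backbone = torchvision.models.mobilenet_v2(weights=None).features
backbone.out_channels = 1280

# RPN anchors: three areas and three aspect ratios, as in the original Faster R-CNN.
anchor_generator = AnchorGenerator(sizes=((128, 256, 512),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))

roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=["0"],
                                                output_size=7,
                                                sampling_ratio=2)

model = FasterRCNN(backbone,
                   num_classes=4,              # placeholder: three cable classes + background
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)
```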

As shown in Figure 2, the ReLU activation function used in the Fast R-CNN network module is expressed as:

f(x) = max(0, x)

where x is the input of a neural network node and f(x) is the output of the node;

As shown in Figure 3, the shared convolutional network is a 16-layer model: it contains five convolutional blocks, blocks 1 through 5, each followed by a pooling layer. Blocks 1 and 2 each contain two convolutional layers, blocks 3 through 5 each contain three convolutional layers, and the network ends with fully connected layers 1, 2, and 3. Pooling is used between the convolutional blocks mainly to retain the principal extracted features while reducing the parameters and computation of the next layer and preventing overfitting. Finally, a classifier performs the classification.

The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings.

This embodiment provides a cable sequence detection method comprising the following steps.

The cable image set contains 4000 images in total, divided into two sets: 2800 images for training and 1200 images for testing.

1. As shown in Figures 4 and 5, Figure 4 shows a cable image in the correct order and Figure 5 shows a core-wire image in the wrong order. As shown in Figure 6, the color cable image is first converted to a grayscale image; the grayscale image carries less pixel information than the color image, which reduces the amount of computation;

2. In the grayscale image of the three-color cable region, take the gray values of one row of pixels and perform a difference operation on the gray values of adjacent pixels in that row, obtaining a set of difference values;

3. Experiments show that with a threshold of m = 5 the three-color cable region is located reliably, so the threshold is set to 5. This threshold is compared with the difference values obtained in step 2, the coordinates of pixels whose difference values exceed the threshold m are collected, and finally these coordinates are output together with a localization result map of the three-color cable region, as shown in Figure 7;

4. The feature extraction part of the Faster R-CNN object detection algorithm is improved. Because three-color cable images are not complex, a very deep network is not required for feature extraction. To improve the efficiency of the model, the convolutional and pooling layers of the shared convolutional network are modified: the conv3_3 layer in conv3 and the conv4_3 layer in conv4 of the shared convolutional network are removed, which greatly increases detection speed without reducing accuracy;

5. The anchor boxes are optimized according to the characteristics of three-color cable images so that the designed anchor boxes enclose the cable region as completely as possible and the detection task is performed better. Three-color cable images are taller than they are wide, and analysis shows that the original 2:1 anchor box does not fit them well. As shown in Figure 8, the 2:1 anchor box is replaced with a 1:1.5 anchor box while the 1:1 and 1:2 anchor boxes are retained; on the one hand this better matches the cable images considered here, and on the other hand experiments show that it saves a considerable amount of computer memory;

6. The ELU activation function replaces the original ReLU activation function. The ELU function combines characteristics of the Sigmoid and ReLU functions: the left side of its curve is softly saturating, which helps improve robustness to noise, and its properties push the mean output close to zero, accelerating convergence;

As shown in the ELU function plot of Figure 9, the ELU function is expressed as:

f(x) = x for x > 0, and f(x) = α(e^x − 1) for x ≤ 0,

where x is the input of a neural network node, f(x) is the output of the node, and α is an adjustable parameter;

7. The Faster R-CNN object detection algorithm is used for sequence detection, with the results shown in Figure 10. Finally, the mAP metric is used to evaluate the model. To demonstrate the detection performance of the proposed method, its accuracy and detection time are compared with those of the original object detection algorithm. Table 1 shows the accuracy and detection time of the two methods; as can be seen from Table 1, the proposed method improves detection accuracy, consumes less memory, and offers better detection performance.

Table 1: Cable detection results

Method                  mAP (%)   Detection time (s)
Original Faster R-CNN   94.32     0.284
Proposed method         96.55     0.236
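
A simplified sketch of how average precision could be computed for one class at an IoU threshold of 0.5 (mAP being the mean of the per-class APs); this is a generic all-point-interpolation AP, not the evaluation code actually used to produce Table 1, and the data-structure layout is an assumption.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def average_precision(detections, ground_truths, iou_thr=0.5):
    """detections: list of (image_id, score, box); ground_truths: dict image_id -> list of boxes."""
    detections = sorted(detections, key=lambda d: d[1], reverse=True)
    matched = {img: [False] * len(boxes) for img, boxes in ground_truths.items()}
    n_gt = sum(len(b) for b in ground_truths.values())
    tp, fp = np.zeros(len(detections)), np.zeros(len(detections))
    for i, (img, _, box) in enumerate(detections):
        gt_boxes = ground_truths.get(img, [])
        ious = [iou(box, g) for g in gt_boxes]
        best = int(np.argmax(ious)) if ious else -1
        if best >= 0 and ious[best] >= iou_thr and not matched[img][best]:
            tp[i] = 1
            matched[img][best] = True        # each ground truth can be matched only once
        else:
            fp[i] = 1
    recall = np.cumsum(tp) / max(n_gt, 1)
    precision = np.cumsum(tp) / (np.cumsum(tp) + np.cumsum(fp))
    envelope = np.maximum.accumulate(precision[::-1])[::-1]   # monotone precision envelope
    return float(np.trapz(envelope, recall))                  # area under the PR curve
```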

In the description of this specification, reference to the terms "one embodiment", "some embodiments", "exemplary embodiment", "example", "specific example", or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, such references do not necessarily refer to the same embodiment or example, and the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.

Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, substitutions, and variations can be made to these embodiments without departing from the principles and spirit of the invention; the scope of the invention is defined by the claims and their equivalents.

Claims (7)

1. A deep learning-based cable sequence detection method, characterized by comprising the following steps:
Step 1: first convert the three-color cable image to grayscale as the basis for subsequent sequence detection; Step 2: perform a difference operation on the gray values of adjacent pixels in a given row of the image, select a threshold, compare it with the difference results, and output a localization result map; Step 3: in view of the characteristics of three-color cable images, improve Faster R-CNN with a simplified feature extraction network, optimized anchor boxes, and the ELU activation function; Step 4: apply the improved Faster R-CNN algorithm to cable sequence detection, and record the accuracy and detection time.
2. The deep learning-based cable sequence detection method according to claim 1, characterized in that in step 2, in the grayscale image of the three-color cable region, the gray values of one row of pixels are taken and a difference operation is performed on the gray values of adjacent pixels in that row.
3. The deep learning-based cable sequence detection method according to claim 1, characterized in that in step 2 the threshold is set to 5, the coordinates of pixels whose difference values exceed the threshold of 5 are collected, and finally these coordinates are output together with a localization result map of the three-color cable region.
4. The deep learning-based cable sequence detection method according to claim 1, characterized in that the simplified feature extraction network of step 3 uses a simplified shared convolutional network to extract features, one convolutional layer being removed from each of convolutional blocks 3 and 4 of the shared convolutional network.
5. The deep learning-based cable sequence detection method according to claim 1, characterized in that in step 3 the anchor boxes are optimized so that they enclose the cable region as completely as possible, the optimized anchor boxes using aspect ratios of 1:1.5, 1:1, and 1:2 to extract features of the target region.
6. The deep learning-based cable sequence detection method according to claim 1, characterized in that in step 3 the ELU function replaces the ReLU function in the Fast R-CNN network.
7. The deep learning-based cable sequence detection method according to claim 1, characterized in that there are 4000 cable images in total, divided into two sets: 2800 images for training and 1200 images for testing.
CN201910299722.4A 2019-04-15 2019-04-15 A cable sequence detection method based on deep learning Active CN110136098B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910299722.4A CN110136098B (en) 2019-04-15 2019-04-15 A cable sequence detection method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910299722.4A CN110136098B (en) 2019-04-15 2019-04-15 A cable sequence detection method based on deep learning

Publications (2)

Publication Number Publication Date
CN110136098A true CN110136098A (en) 2019-08-16
CN110136098B CN110136098B (en) 2023-07-18

Family

ID=67569708

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910299722.4A Active CN110136098B (en) 2019-04-15 2019-04-15 A cable sequence detection method based on deep learning

Country Status (1)

Country Link
CN (1) CN110136098B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111738164A (en) * 2020-06-24 2020-10-02 广西计算中心有限责任公司 Pedestrian detection method based on deep learning
CN112270668A (en) * 2020-11-06 2021-01-26 南京斌之志网络科技有限公司 Suspended cable detection method and system and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106846298A (en) * 2016-12-22 2017-06-13 清华大学 The recognition methods of optical fiber winding displacement and device
CN109409272A (en) * 2018-10-17 2019-03-01 北京空间技术研制试验中心 Cable Acceptance Test System and method based on machine vision
CN109596634A (en) * 2018-12-30 2019-04-09 国网北京市电力公司 The detection method and device of electric cable stoppage, storage medium, processor

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106846298A (en) * 2016-12-22 2017-06-13 清华大学 The recognition methods of optical fiber winding displacement and device
CN109409272A (en) * 2018-10-17 2019-03-01 北京空间技术研制试验中心 Cable Acceptance Test System and method based on machine vision
CN109596634A (en) * 2018-12-30 2019-04-09 国网北京市电力公司 The detection method and device of electric cable stoppage, storage medium, processor

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111738164A (en) * 2020-06-24 2020-10-02 广西计算中心有限责任公司 Pedestrian detection method based on deep learning
CN112270668A (en) * 2020-11-06 2021-01-26 南京斌之志网络科技有限公司 Suspended cable detection method and system and electronic equipment
CN112270668B (en) * 2020-11-06 2021-09-21 威海世一电子有限公司 Suspended cable detection method and system and electronic equipment

Also Published As

Publication number Publication date
CN110136098B (en) 2023-07-18

Similar Documents

Publication Publication Date Title
CN110544251B (en) Dam crack detection method based on multi-migration learning model fusion
CN109359681B (en) A method for identification of field crop diseases and insect pests based on improved fully convolutional neural network
CN111274921B (en) Method for recognizing human body behaviors by using gesture mask
CN108319972A (en) A kind of end-to-end difference online learning methods for image, semantic segmentation
CN110276264B (en) Crowd density estimation method based on foreground segmentation graph
CN108388905B (en) A Light Source Estimation Method Based on Convolutional Neural Network and Neighborhood Context
CN101667245B (en) Face Detection Method Based on Support Vector Novelty Detection Classifier Cascade
CN111507334B (en) An instance segmentation method based on key points
CN107038416B (en) A Pedestrian Detection Method Based on Improved HOG Feature of Binary Image
CN110663971B (en) Red date quality classification method based on double-branch deep fusion convolutional neural network
CN113780132B (en) A lane line detection method based on convolutional neural network
CN106875373A (en) Mobile phone screen MURA defect inspection methods based on convolutional neural networks pruning algorithms
CN107729819A (en) A kind of face mask method based on sparse full convolutional neural networks
CN111476710B (en) Video face changing method and system based on mobile platform
CN104573731A (en) Rapid target detection method based on convolutional neural network
CN112418330A (en) Improved SSD (solid State drive) -based high-precision detection method for small target object
CN105809121A (en) Multi-characteristic synergic traffic sign detection and identification method
CN111768401A (en) A rapid freshness classification method of chilled pomfret based on deep learning
CN106658169A (en) Universal method for segmenting video news in multi-layered manner based on deep learning
CN111860587B (en) Detection method for small targets of pictures
CN113409267B (en) Pavement crack detection and segmentation method based on deep learning
CN114926407A (en) Steel surface defect detection system based on deep learning
CN108305260A (en) Detection method, device and the equipment of angle point in a kind of image
CN111582654A (en) Service quality evaluation method and device based on deep recurrent neural network
CN113361466A (en) Multi-modal cross-directed learning-based multi-spectral target detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant