WO2023040104A1 - Pipeline foreign matter detection device and detection method - Google Patents

Pipeline foreign matter detection device and detection method (一种管道异物检测装置及检测方法) Download PDF

Info

Publication number
WO2023040104A1
Authority
WO
WIPO (PCT)
Prior art keywords
pipeline
detection
foreign matter
foreign
vehicle
Prior art date
Application number
PCT/CN2021/139703
Other languages
English (en)
French (fr)
Inventor
郎立国
康涛
李旭
杨天兵
孙丹
Original Assignee
中航华东光电(上海)有限公司
Priority date
Filing date
Publication date
Application filed by 中航华东光电(上海)有限公司
Publication of WO2023040104A1

Links

Images

Classifications

    • F - MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F16 - ENGINEERING ELEMENTS AND UNITS; GENERAL MEASURES FOR PRODUCING AND MAINTAINING EFFECTIVE FUNCTIONING OF MACHINES OR INSTALLATIONS; THERMAL INSULATION IN GENERAL
    • F16L - PIPES; JOINTS OR FITTINGS FOR PIPES; SUPPORTS FOR PIPES, CABLES OR PROTECTIVE TUBING; MEANS FOR THERMAL INSULATION IN GENERAL
    • F16L55/00 Devices or appurtenances for use in, or in connection with, pipes or pipe systems
    • F16L55/26 Pigs or moles, i.e. devices movable in a pipe or conduit with or without self-contained propulsion means
    • F16L55/28 Constructional aspects
    • F16L55/40 Constructional aspects of the body
    • F - MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F16 - ENGINEERING ELEMENTS AND UNITS; GENERAL MEASURES FOR PRODUCING AND MAINTAINING EFFECTIVE FUNCTIONING OF MACHINES OR INSTALLATIONS; THERMAL INSULATION IN GENERAL
    • F16L - PIPES; JOINTS OR FITTINGS FOR PIPES; SUPPORTS FOR PIPES, CABLES OR PROTECTIVE TUBING; MEANS FOR THERMAL INSULATION IN GENERAL
    • F16L55/00 Devices or appurtenances for use in, or in connection with, pipes or pipe systems
    • F16L55/26 Pigs or moles, i.e. devices movable in a pipe or conduit with or without self-contained propulsion means
    • F16L55/28 Constructional aspects
    • F16L55/30 Constructional aspects of the propulsion means, e.g. towed by cables
    • F16L55/32 Constructional aspects of the propulsion means, e.g. towed by cables being self-contained
    • F - MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F16 - ENGINEERING ELEMENTS AND UNITS; GENERAL MEASURES FOR PRODUCING AND MAINTAINING EFFECTIVE FUNCTIONING OF MACHINES OR INSTALLATIONS; THERMAL INSULATION IN GENERAL
    • F16L - PIPES; JOINTS OR FITTINGS FOR PIPES; SUPPORTS FOR PIPES, CABLES OR PROTECTIVE TUBING; MEANS FOR THERMAL INSULATION IN GENERAL
    • F16L2101/00 Uses or applications of pigs or moles
    • F16L2101/30 Inspecting, measuring or testing

Definitions

  • the invention relates to the technical field of pipeline detection, in particular to a pipeline foreign matter detection device and detection method.
  • Pipeline foreign matter refers to foreign substances, debris or objects left inside a pipeline that affect the pipeline, such as metal tools, scattered screws, nuts, gaskets, fuses and the like, most of which are metallic. How to detect pipeline foreign matter effectively and quickly has always been a major problem.
  • at present, pipeline inspection at home and abroad mainly relies on manual inspection, which suffers from a poor inspection environment and from the unavoidable negligence, danger and non-traceability of personnel; there is therefore an urgent need for a pipeline foreign matter detection device and detection method to overcome the deficiencies in current practical applications.
  • the purpose of the present invention is to provide a pipeline foreign matter detection device and detection method, so as to solve the problems raised in the background art above.
  • the present invention provides the following technical solutions:
  • a pipeline foreign matter detection device, comprising:
  • a detection vehicle;
  • a control part, the control part being located in the detection vehicle;
  • a mechanical arm, one end of the mechanical arm being connected to the detection vehicle;
  • and an image acquisition part connected with the other end of the mechanical arm.
  • a pipeline foreign matter detection method, applied to the pipeline foreign matter detection device described above, the method comprising the following steps:
  • 1) the detection vehicle advances a set distance D; 2) the mechanical arm moves to a specified waypoint; 3) the camera acquires an image; 4) foreign object recognition is performed on the acquired image; 5) if a foreign object is detected, a foreign object prompt is issued;
  • 6) return to step 2) until foreign object identification of the current cross-section is completed, then continue with step 1);
  • 7) after the entire pipeline has been inspected, the detection vehicle exits the pipeline and the inspection is complete; to prevent foreign objects carried in by the detection vehicle from being left in the pipeline, foreign object detection can be performed again as the vehicle retreats.
  • the detection vehicle provided can move inside the pipeline.
  • the control part is the main controller of the whole system.
  • the control part controls the detection vehicle, which carries the mechanical arm, to travel back and forth inside the pipeline; the mechanical arm moves cyclically through different pre-computed pose points, and the image acquisition part then captures images. The control part performs foreign object recognition on the captured images by image processing and, after recognition, indicates the detection of a foreign object by sound, image and other forms.
  • the foreign object detection algorithm uses a YoloV3-based deep-learning method to detect foreign matter in the pipeline; it is little affected by lighting and has a good recognition effect.
  • FIG. 1 is a schematic diagram of the overall structure of an embodiment of the present invention.
  • Fig. 2 is a schematic flow chart of foreign matter detection in an embodiment of the present invention.
  • Fig. 3 is a schematic diagram of the identification and positioning principle of YoloV3 in the embodiment of the present invention.
  • Fig. 4 is a schematic diagram of the structure of the YoloV3 foreign object detection model in the embodiment of the present invention.
  • Fig. 5 is a schematic diagram of the structure of the Convolutional layer in the embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a residual layer structure in an embodiment of the present invention.
  • Fig. 7 is a schematic diagram of an object bounding box in an embodiment of the present invention.
  • FIG. 8 is a schematic flowchart of foreign object identification and positioning in an embodiment of the present invention.
  • Fig. 9 is a schematic diagram of foreign object identification and positioning in an embodiment of the present invention.
  • Fig. 10 is a schematic diagram of the actual detection results of foreign matter in the embodiment of the present invention.
  • a pipeline foreign matter detection device provided by an embodiment of the present invention, the pipeline foreign matter detection device includes:
  • a control part 4, the control part 4 is located in the detection vehicle 1;
  • a mechanical arm 2, one end of the mechanical arm 2 is connected to the detection vehicle 1;
  • the image acquisition part 3 is connected with the other end of the mechanical arm 2.
  • the detection vehicle 1 provided can move inside the pipeline 5; the control part 4 is the main controller of the whole system, and it controls the detection vehicle 1, which carries the mechanical arm 2, to travel back and forth inside the pipeline 5.
  • the mechanical arm 2 moves cyclically through different pre-computed pose points, the image acquisition part 3 then captures images, and the control part 4 performs foreign object recognition on the captured images by image processing; after recognition, the detection of a foreign object is indicated by sound, image and other forms.
  • the foreign object detection algorithm uses a YoloV3-based deep-learning method to detect foreign matter in the pipeline, is little affected by lighting and has a good recognition effect. It can effectively and quickly detect metallic foreign objects in the pipeline 5, reducing the influence of the poor environment during inspection of the pipeline 5 and the unavoidable negligence, danger and non-traceability of manual detection, which provides convenience for the staff and is worth promoting.
  • the running speed of the detection vehicle 1 is less than 50 mm/s, its self-weight is less than 60 kg and its payload is less than 50 kg; a driving wheel 6 is fixedly mounted on the detection vehicle 1,
  • the driving wheel 6 is provided with a low-voltage hub motor inside, and a control module 7 is also installed on the detection vehicle 1.
  • the control module 7 includes a main controller, a lidar and an IMU (Inertial Measurement Unit); the robot platform control system uses a high-performance industrial computer as the main controller and performs motion control through four CAN interfaces.
  • the lidar collects environmental information.
  • the IMU measures the attitude of the robot mobile platform in real time.
  • by fusing the IMU information with the lidar and the laser ranging module, the state of the robot and its environment can be judged in real time to ensure the stability and safety of the platform.
  • the mechanical arm 2 is a six-axis collaborative mechanical arm with an operating arm span of 615 mm, a self-weight of 15 kg, a payload of 3 kg, and a 10-level force protection function.
  • the arm span of the robot arm 2 is required to cover the working range, and the arm span is required to be as short as possible, because the longer the arm span is, the easier it is to touch the inner wall of the pipeline.
  • the image acquisition part 3 is an industrial camera
  • the industrial camera is an area-scanning camera
  • a lens is installed on the area-scanning camera
  • a polarizer is installed on the lens.
  • the quality of the collected images has a direct impact on the design of the algorithm and on whether the final algorithm is effective, and can even determine the success or failure of the project.
  • the quality of image acquisition is therefore improved as far as possible through camera selection and light source design.
  • considering that a color camera is required for the deep-learning image processing algorithm, the camera finally selected has the following main parameters: resolution 2448 (H) × 2048 (V), sensor size 2/3", pixel size 3.45 μm x 3.45 μm, frame rate 20 fps, C-mount interface.
  • the main parameters of the camera lens are: focal length 6 mm, aperture range F1.4-F16, C-mount interface, compatible sensor size 1".
  • because the inner wall of the pipeline is metal and some of its surfaces are curved, in order to prevent light reflected from the inner wall from entering the camera directly, the present invention finally uses a square light source for side projection, with the light projected onto the pipe wall at a certain angle; a polarizer is mounted on the lens to further suppress reflections.
  • the control part 4 is a computer; the CPU of the computer is an i7 6-core processor, the memory is 24 GB, the hard disk is 2 TB, and the graphics card is an NVIDIA RTX 2080.
  • since the computer must run deep-learning image processing and store the images from the pipeline foreign matter detection process, a high-performance computer is selected with a CPU of i7 6-core or above,
  • a memory of 24 GB or above,
  • a hard disk of 2 TB or above,
  • and a graphics card of NVIDIA RTX 2080 or above.
  • a detection method for the pipeline foreign object detection device, which includes the following steps:
  • 1) the detection vehicle advances a set distance D; 2) the mechanical arm moves to a specified waypoint; 3) the camera acquires an image; 4) foreign object recognition is performed on the acquired image; 5) if a foreign object is detected, a foreign object prompt is issued; 6) return to step 2) until foreign object identification of the current cross-section is completed, then continue with step 1);
  • 7) after the entire pipeline has been inspected, the detection vehicle exits the pipeline and the inspection is complete; to prevent foreign objects carried in by the detection vehicle from being left in the pipeline, foreign object detection can be performed again as the vehicle retreats.
  • in step 4, foreign matter detection can be completed under different lighting conditions; the foreign matter consists mainly of metal objects and is detected by YoloV3 foreign matter detection. The YoloV3 algorithm uses a single CNN model to realize end-to-end target detection, directly predicting the category and position of targets from the input image.
  • YoloV3 uses only one CNN network to directly predict the categories and positions of different targets.
  • the structure is simple and fast; the processing speed can reach 29 frames per second, which enables real-time processing.
  • YoloV3 supports detection at 3 different scales and also performs relatively well on small targets.
  • the CNN network of YoloV3 divides the input picture into an S×S grid (in fact the S×S grid is obtained by convolutional downsampling; it is described here as dividing the input picture into an S×S grid for ease of explanation), and each cell is responsible for detecting the targets whose center points fall within that cell. As shown in Figure 6, the center of the hexagonal target falls in the orange cell in the figure, so that cell is responsible for predicting this hexagon.
  • each cell predicts B preset bounding boxes (YoloV3 obtains 3 groups of prior boxes through a clustering algorithm, i.e. 3 groups of default preset bounding boxes obtained by pre-training, called anchors, so the value of B is 3), each with predicted values (t_x, t_y, t_w, t_h, p_0, p_1, p_2, ..., p_c), where (t_x, t_y, t_w, t_h) are the size and position of the predicted bounding box (in fact the center offsets and the width/height scaling factors), p_0 is the probability that the predicted bounding box contains a target, and (p_1, p_2, ..., p_c) are the probabilities that the predicted bounding box corresponds to each of the c target classes.
  • each cell therefore predicts (B×(5+C)) values. If the input picture is divided into an S×S grid, the final prediction is a tensor of size S×S×(B×(5+C)).
  • YoloV3 predicts at three scale levels, by downsampling the size of the input image by 32, 16 and 8 respectively.
  • that is, the picture is divided into three kinds of grids. If the input image size is 416×416, it is divided into three kinds of feature maps, i.e. three S×S grids: a 13×13 grid, a 26×26 grid and a 52×52 grid. The 13×13 grid can identify targets with a relatively large imaged size, the 26×26 grid can identify targets with a medium imaged size, and the 52×52 grid can identify targets with a relatively small imaged size.
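  • (editorial illustration, not part of the original disclosure) the cell-responsibility rule and the three strides described above can be sketched as follows; the coordinates in the usage example are assumptions chosen only to show the mapping.

```python
# Illustrative sketch: which grid cell is responsible for a target whose center
# is at (cx, cy), at each YoloV3 scale, assuming a 416x416 input and the
# strides 32, 16 and 8 described above. All values are illustrative.
def responsible_cells(cx, cy, img_size=416, strides=(32, 16, 8)):
    cells = {}
    for stride in strides:
        grid = img_size // stride                   # 13, 26 or 52
        col = min(int(cx // stride), grid - 1)      # cell column containing cx
        row = min(int(cy // stride), grid - 1)      # cell row containing cy
        cells[f"{grid}x{grid}"] = (row, col)
    return cells

print(responsible_cells(200.0, 120.0))
# {'13x13': (3, 6), '26x26': (7, 12), '52x52': (15, 25)}
```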
  • in the detection work, the targets to be measured are scattered screws, nuts, gaskets and fuses, 4 categories in total; adding the pipe-wall background class, the target categories are (4+1), 5 in total, i.e. c is 5. The 13×13 grid prediction is a tensor of size 13×13×(3×(5+5)), the 26×26 grid prediction is a tensor of size 26×26×(3×(5+5)), and the 52×52 grid prediction is a tensor of size 52×52×(3×(5+5)).
  • YoloV3 analyses all the above predicted probabilities to obtain the position information and size information of the final target.
  • YoloV3 adopts the network structure of Darknet-53.
  • the model structure is shown in Figure 4.
  • the network is mainly composed of a series of 1x1 and 3x3 Convolutional layers and Residual layers.
  • the Convolutional layer is the basic unit of the Darknet-53 network, which consists of a conv convolutional layer, a BN layer, and a LeakyReLU layer. Its structure is shown in Figure 5.
  • the Residual layer is the darknet-53 residual module, and its structure is shown in Figure 6.
  • the benefits of using the residual structure here are: (1) a key point for a deep model is whether it can converge normally, and using the residual structure ensures that the network can still converge even when it is very deep; (2) the deeper the network, the better the features it expresses, which improves the effect of target recognition and positioning.
  • at the 86th layer, the outputs of the 61st and 85th layers are concatenated (tensor splicing),
  • and at the 98th layer, the outputs of the 36th and 97th layers are concatenated,
  • where the 61st and 36th layers carry shallow features and the 85th and 97th layers carry deep features.
  • both deep and shallow features are therefore used, which further improves the performance of the network.
  • the network finally predicts three kinds of feature maps: a 13×13 grid feature map, a 26×26 grid feature map and a 52×52 grid feature map.
  • the predicted tensor size of the 13×13 grid feature map is 13×13×(3×(5+5)),
  • that of the 26×26 grid feature map is 26×26×(3×(5+5)),
  • and that of the 52×52 grid feature map is 52×52×(3×(5+5)). The final predicted value is then obtained according to the maximum probability.
  • after YoloV3 obtains the final predicted values (t_x, t_y, t_w, t_h, p_0, p_1, ..., p_c), since the obtained (t_x, t_y, t_w, t_h) are in fact the center offsets and the width/height scaling factors of the bounding box predicted by the network, the target bounding box still needs to be calculated.
  • the calculation principle is shown in Figure 7.
  • the dashed rectangle in the figure is the preset bounding box, i.e. the anchor, and the solid rectangle is the predicted target bounding box computed from the offsets predicted by the network; (p_w, p_h) are the width and height of the preset bounding box on the feature map, and (b_x, b_y, b_w, b_h) is the final predicted target bounding box.
  • YoloV3 foreign object recognition and positioning is divided into two parts: training model and target recognition and positioning. The process is shown in Figure 8.
  • Training the model: target images to be detected are collected in advance (to improve the recognition effect, target images should be collected under different lighting conditions, different backgrounds, different angles and different distances, and as many images as possible should be collected, preferably no fewer than 10,000); training is then carried out to obtain the trained model features.
  • Target recognition and positioning: the corrected image is first read in and its resolution is converted to 416x416; the model features are then loaded, target recognition and positioning are performed, and finally the target type and position are obtained.
  • the positioning effect diagram is shown in Figure 9.
  • target recognition and positioning are performed on the input image INPUT_IMG, and the target TAG_IMG is identified and located, together with its position (x_lta, y_lta), width w_lta and height h_lta in the original image INPUT_IMG.
  • the actual foreign object recognition results are shown in Figure 10.
  • in step 5, after the foreign object is detected and recognized, the position of the foreign object in the pipeline is determined in combination with the position of the detection vehicle, the foreign object information is indicated to the operating user, and the original image and recognition result are saved to facilitate subsequent review, which provides convenience to the user.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Mechanical Engineering (AREA)
  • Image Processing (AREA)

Abstract

A pipeline foreign matter detection device and detection method. The pipeline foreign matter detection device comprises: a detection vehicle (1); a control part (4), the control part being located in the detection vehicle; a mechanical arm (2), one end of the mechanical arm being connected to the detection vehicle; and an image acquisition part (3), the image acquisition part being connected to the other end of the mechanical arm. The detection device and detection method can effectively and quickly detect metallic foreign objects in a pipeline, reducing the influence of the poor environment during pipeline inspection and the unavoidable negligence, danger and non-traceability of manual inspection, and providing convenience for the staff.

Description

Pipeline foreign matter detection device and detection method
Technical Field
The present invention relates to the technical field of pipeline inspection, and in particular to a pipeline foreign matter detection device and detection method.
Background Art
Pipeline foreign matter refers to foreign substances, debris or objects left inside a pipeline that affect the pipeline, such as metal tools, scattered screws, nuts, gaskets, fuses and the like, most of which are metallic. How to detect pipeline foreign matter effectively and quickly has always been a major problem.
At present, pipeline inspection at home and abroad mainly relies on manual inspection, which suffers from problems such as a poor inspection environment and the unavoidable negligence, danger and non-traceability of personnel. In view of this situation, there is an urgent need to develop a pipeline foreign matter detection device and detection method to overcome the deficiencies in current practical applications.
Summary of the Invention
The purpose of the present invention is to provide a pipeline foreign matter detection device and detection method, so as to solve the problems raised in the background art above.
To achieve the above purpose, the present invention provides the following technical solutions:
A pipeline foreign matter detection device, comprising:
a detection vehicle;
a control part, the control part being located in the detection vehicle;
a mechanical arm, one end of the mechanical arm being connected to the detection vehicle;
and an image acquisition part, the image acquisition part being connected to the other end of the mechanical arm.
A pipeline foreign matter detection method, applied to the pipeline foreign matter detection device described above, the method comprising the following steps:
1) the detection vehicle advances a set distance D;
2) the mechanical arm moves to a specified waypoint;
3) the camera acquires an image;
4) after the image is acquired, foreign object recognition is performed;
5) if a foreign object is detected, a foreign object prompt is issued;
6) return to step 2); after foreign object identification of the current cross-section is completed, continue with step 1);
7) after the entire pipeline has been inspected, the detection vehicle exits the pipeline and the inspection is complete; to prevent foreign objects carried in by the detection vehicle from being left in the pipeline, foreign object detection can be performed again as the vehicle retreats.
Compared with the prior art, the beneficial effects of the present invention are:
When foreign object detection is required, the detection vehicle provided can move inside the pipeline. The control part is the main controller of the whole system; it controls the detection vehicle, which carries the mechanical arm, to travel back and forth inside the pipeline. The mechanical arm moves cyclically through different pre-computed pose points, the image acquisition part then captures images, and the control part performs foreign object recognition on the captured images by image processing; after recognition, the detection of a foreign object is indicated by sound, image and other forms. The foreign object detection algorithm uses a YoloV3-based deep-learning method to detect foreign matter in the pipeline; it is little affected by lighting and has a good recognition effect. By moving the detection vehicle back and forth inside the pipeline, metallic foreign objects in the pipeline can be detected effectively and quickly, reducing the influence of the poor environment during pipeline inspection and the unavoidable negligence, danger and non-traceability of manual inspection, which provides convenience for the staff and is worth promoting.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the overall structure in an embodiment of the present invention.
Fig. 2 is a schematic flow chart of foreign object detection in an embodiment of the present invention.
Fig. 3 is a schematic diagram of the recognition and positioning principle of YoloV3 in an embodiment of the present invention.
Fig. 4 is a schematic diagram of the structure of the YoloV3 foreign object detection model in an embodiment of the present invention.
Fig. 5 is a schematic diagram of the Convolutional layer structure in an embodiment of the present invention.
Fig. 6 is a schematic diagram of the Residual layer structure in an embodiment of the present invention.
Fig. 7 is a schematic diagram of a target bounding box in an embodiment of the present invention.
Fig. 8 is a schematic flow chart of foreign object recognition and positioning in an embodiment of the present invention.
Fig. 9 is a schematic diagram of foreign object recognition and positioning in an embodiment of the present invention.
Fig. 10 is a schematic diagram of actual foreign object detection results in an embodiment of the present invention.
In the figures: 1 - detection vehicle, 2 - mechanical arm, 3 - image acquisition part, 4 - control part, 5 - pipeline, 6 - driving wheel, 7 - control module.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
The specific implementation of the present invention is described in detail below with reference to specific embodiments.
Referring to Fig. 1, an embodiment of the present invention provides a pipeline foreign matter detection device, the pipeline foreign matter detection device comprising:
a detection vehicle 1;
a control part 4, the control part 4 being located in the detection vehicle 1;
a mechanical arm 2, one end of the mechanical arm 2 being connected to the detection vehicle 1;
and an image acquisition part 3, the image acquisition part 3 being connected to the other end of the mechanical arm 2.
When foreign object detection is required, the detection vehicle 1 provided can move inside the pipeline 5. The control part 4 is the main controller of the whole system; it controls the detection vehicle 1, which carries the mechanical arm 2, to travel back and forth inside the pipeline 5. The mechanical arm 2 moves cyclically through different pre-computed pose points, the image acquisition part 3 then captures images, and the control part 4 performs foreign object recognition on the captured images by image processing; after recognition, the detection of a foreign object is indicated by sound, image and other forms. The foreign object detection algorithm uses a YoloV3-based deep-learning method to detect foreign matter in the pipeline; it is little affected by lighting and has a good recognition effect. By moving the detection vehicle 1 back and forth inside the pipeline 5, metallic foreign objects in the pipeline 5 can be detected effectively and quickly, reducing the influence of the poor environment during inspection of the pipeline 5 and the unavoidable negligence, danger and non-traceability of manual inspection, which provides convenience for the staff and is worth promoting.
In an embodiment of the present invention, referring to Fig. 1, the running speed of the detection vehicle 1 is less than 50 mm/s, its self-weight is less than 60 kg and its payload is less than 50 kg; a driving wheel 6 is fixedly mounted on the detection vehicle 1, a low-voltage hub motor is provided inside the driving wheel 6, and a control module 7 is also installed on the detection vehicle 1.
During inspection, to ensure complete detection, the running speed of the detection vehicle must not be too high, there must be no slip during travel, steering control must be simple, and the overall vehicle should be as small as possible. The detection vehicle 1 is driven in a four-wheel differential drive mode; each driving wheel 6 is independently driven by a low-voltage hub motor, which provides high torque at low speed and saves mechanical installation space, and the motors use four incremental optical grating encoders for position and speed feedback. The control module 7 includes a main controller, a lidar and an IMU (Inertial Measurement Unit). The robot platform control system uses a high-performance industrial computer as the main controller and performs motion control through four CAN interfaces; the lidar collects environmental information, the IMU measures the attitude of the robot mobile platform in real time, and by fusing the IMU information with the lidar and laser ranging module, the state of the robot and its environment can be judged in real time to ensure the stability and safety of the platform.
In an embodiment of the present invention, referring to Fig. 1, the mechanical arm 2 is a six-axis collaborative mechanical arm with an operating arm span of 615 mm, a self-weight of 15 kg, a payload of 3 kg, and a 10-level force protection function.
During inspection, the arm span of the mechanical arm 2 must cover the working range while being as short as possible, because the longer the arm span, the more easily it touches the inner wall of the pipeline. After calculating the payload of the mechanical arm 2 and simulating its working range, a six-axis collaborative mechanical arm with an arm span of 615 mm, a self-weight of 15 kg, a payload of 3 kg and a 10-level force protection function was adopted; it can stop in time when it collides with the pipe wall, preventing damage to the wall.
In an embodiment of the present invention, referring to Fig. 1, the image acquisition part 3 is an industrial camera, the industrial camera is an area-scan camera, a lens is mounted on the area-scan camera, and a polarizer is mounted on the lens.
During inspection, the quality of the captured images has a direct impact on the design of the algorithm and on whether the final algorithm is effective, and can even determine the success or failure of the project, so the quality of image acquisition is improved as far as possible through camera selection and light source design. Considering that a color camera is required for the deep-learning image processing algorithm, the camera parameters finally selected are as follows:
a) resolution: 2448 (H) × 2048 (V)
b) sensor size: 2/3"
c) pixel size: 3.45 μm x 3.45 μm
frame rate: 20 fps
lens interface: C-mount
The main parameters of the camera lens are as follows:
focal length: 6 mm
aperture range: F1.4-F16
interface: C-mount
compatible sensor size: 1"
Because the inner wall of the pipeline is metal and some of its surfaces are curved, in order to prevent light reflected from the inner wall from entering the camera directly, the present invention finally uses a square light source for side projection, with the light projected onto the pipe wall at a certain angle; in addition, a polarizer is mounted on the lens to further suppress reflections.
In an embodiment of the present invention, referring to Fig. 1, the control part 4 is a computer; the CPU of the computer is an i7 6-core processor, the memory is 24 GB, the hard disk is 2 TB, and the graphics card is an NVIDIA RTX 2080.
During detection, foreign object detection and recognition must be performed on the captured images by image processing, so the computer needs to run deep-learning image processing and store the images from the pipeline foreign matter detection process; therefore, a high-performance computer is selected with a CPU of i7 6-core or above, a memory of 24 GB or above, a hard disk of 2 TB or above, and a graphics card of NVIDIA RTX 2080 or above.
Based on the structure given in the above embodiments, a detection method of the pipeline foreign matter detection device is given here; the method includes the following steps (an illustrative sketch of this loop is given after the list):
1) the detection vehicle advances a set distance D;
2) the mechanical arm moves to a specified waypoint;
3) the camera acquires an image;
4) after the image is acquired, foreign object recognition is performed;
5) if a foreign object is detected, a foreign object prompt is issued;
6) return to step 2); after foreign object identification of the current cross-section is completed, continue with step 1);
7) after the entire pipeline has been inspected, the detection vehicle exits the pipeline and the inspection is complete; to prevent foreign objects carried in by the detection vehicle from being left in the pipeline, foreign object detection can be performed again as the vehicle retreats.
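As an editorial illustration only (not part of the original disclosure), the scanning procedure of steps 1) to 7) can be sketched as the following control loop; the object and function names (advance, move_arm_to, grab_image, detect_foreign_objects) and the waypoint list are hypothetical placeholders rather than the patent's actual interfaces.

```python
# Illustrative sketch of the inspection loop in steps 1)-7); all names are
# hypothetical placeholders, not APIs taken from the patent.
def notify(objects, position_along_pipe):
    print(f"Foreign objects {objects} detected near {position_along_pipe:.1f} mm")

def inspect_pipeline(vehicle, arm, camera, detector, step_distance_d, waypoints, pipe_length):
    travelled = 0.0
    while travelled < pipe_length:
        vehicle.advance(step_distance_d)                      # step 1): move forward by D
        travelled += step_distance_d
        for pose in waypoints:                                # step 2): cycle through pose points
            arm.move_arm_to(pose)
            image = camera.grab_image()                       # step 3): acquire an image
            objects = detector.detect_foreign_objects(image)  # step 4): foreign object recognition
            if objects:                                       # step 5): prompt if anything is found
                notify(objects, travelled)
        # step 6): current cross-section finished -> loop back to step 1)
    vehicle.exit_pipeline()                                   # step 7): leave the pipeline;
    # detection may optionally be repeated while the vehicle retreats, as described above.
```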
In an embodiment of the present invention, referring to Figs. 2-10, in step 4 the foreign object detection can be completed under different lighting conditions; the foreign objects are mainly metal objects, and detection is completed by YoloV3 foreign object detection. The YoloV3 algorithm uses a single CNN model to realize end-to-end target detection, directly predicting the category and position of targets from the input image.
YoloV3 uses only one CNN network to directly predict the categories and positions of different targets; its structure is simple and fast, the processing speed can reach 29 frames per second, which enables real-time processing, and YoloV3 supports detection at 3 different scales, so it also performs relatively well on small targets.
The CNN network of YoloV3 divides the input image into an S×S grid (in fact the S×S grid is obtained by convolutional downsampling; it is described here as dividing the input image into an S×S grid for ease of explanation), and each cell is responsible for detecting the targets whose center points fall within that cell. As shown in Fig. 6, the center of the hexagonal target falls in the orange cell in the figure, so that cell is responsible for predicting this hexagon. Each cell predicts B preset bounding boxes (YoloV3 obtains 3 groups of prior boxes through a clustering algorithm, i.e. 3 groups of default preset bounding boxes obtained by pre-training, called anchors, so the value of B is 3), each with predicted values (t_x, t_y, t_w, t_h, p_0, p_1, p_2, ..., p_c), where (t_x, t_y, t_w, t_h) are the size and position of the predicted bounding box (in fact the center offsets and the width/height scaling factors), p_0 is the probability that the predicted bounding box contains a target, and (p_1, p_2, ..., p_c) are the probabilities that the predicted bounding box corresponds to each of the c target classes.
Each cell therefore predicts (B×(5+C)) values. If the input image is divided into an S×S grid, the final prediction is a tensor of size S×S×(B×(5+C)).
Because target objects vary in size, in order to better recognize and locate both large and small imaged targets, YoloV3 predicts at three scale levels, realized by downsampling the input image size by 32, 16 and 8 respectively, i.e. the image is divided into three kinds of grids. If the input image size is 416×416, it is divided into three kinds of feature maps, i.e. three S×S grids: a 13×13 grid, a 26×26 grid and a 52×52 grid. The 13×13 grid can identify targets with a relatively large imaged size, the 26×26 grid can identify targets with a medium imaged size, and the 52×52 grid can identify targets with a relatively small imaged size.
In the detection work, assuming the targets to be measured are scattered screws, nuts, gaskets and fuses, 4 categories in total, plus the pipe-wall background class, the target categories are (4+1), 5 in total, i.e. c is 5. The 13×13 grid prediction is a tensor of size 13×13×(3×(5+5)), the 26×26 grid prediction is a tensor of size 26×26×(3×(5+5)), and the 52×52 grid prediction is a tensor of size 52×52×(3×(5+5)). Finally, YoloV3 analyses all the above predicted probabilities to obtain the position information and size information of the final targets.
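As a quick check of the tensor sizes quoted above, the following minimal sketch (an editorial illustration, not part of the original text) computes the three output shapes under the stated assumptions of a 416×416 input, B = 3 anchors per cell and C = 5 classes including the background:

```python
# Sketch: YoloV3 output tensor shapes for the configuration described above
# (416x416 input, B = 3 anchors per cell, C = 5 classes including background).
B, C, img_size = 3, 5, 416
for stride in (32, 16, 8):
    s = img_size // stride        # 13, 26, 52
    depth = B * (5 + C)           # 3 * (5 + 5) = 30
    print(f"{s}x{s} grid -> prediction tensor {s} x {s} x {depth}")
# 13x13 grid -> prediction tensor 13 x 13 x 30
# 26x26 grid -> prediction tensor 26 x 26 x 30
# 52x52 grid -> prediction tensor 52 x 52 x 30
```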
YoloV3 adopts the Darknet-53 network structure; the model structure is shown in Fig. 4. The network is mainly composed of a series of 1x1 and 3x3 Convolutional layers and Residual layers. The Convolutional layer is the basic unit of the Darknet-53 network and consists of a conv convolution layer, a BN layer and a LeakyReLU layer; its structure is shown in Fig. 5. The Residual layer is the Darknet-53 residual module; its structure is shown in Fig. 6. The benefits of using the residual structure here are: (1) a key point for a deep model is whether it can converge normally, and using the residual structure ensures that the network can still converge even when it is very deep; (2) the deeper the network, the better the features it expresses, which improves the effect of target recognition and positioning.
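The two building blocks just described can be sketched as follows; this is an editorial illustration based on the publicly known Darknet-53 design rather than code from the patent, and the channel sizes in the usage example are assumptions.

```python
# Illustrative PyTorch sketch of the Darknet-53 building blocks described above:
# a "Convolutional" unit (conv + BN + LeakyReLU) and a "Residual" module.
import torch
import torch.nn as nn

class ConvUnit(nn.Module):
    """conv + batch norm + LeakyReLU, the basic Darknet-53 unit."""
    def __init__(self, in_ch, out_ch, kernel_size, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride,
                              padding=kernel_size // 2, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class ResidualBlock(nn.Module):
    """1x1 bottleneck followed by a 3x3 conv, with a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = ConvUnit(channels, channels // 2, kernel_size=1)
        self.conv2 = ConvUnit(channels // 2, channels, kernel_size=3)

    def forward(self, x):
        return x + self.conv2(self.conv1(x))

x = torch.randn(1, 64, 104, 104)             # assumed example input
print(ResidualBlock(64)(x).shape)            # torch.Size([1, 64, 104, 104])
```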
As can be seen from Fig. 7, at the 86th layer the outputs of the 61st and 85th layers are concatenated (tensor splicing), and at the 98th layer the outputs of the 36th and 97th layers are concatenated, where the 61st and 36th layers carry shallow features and the 85th and 97th layers carry deep features; using both deep and shallow features further improves the performance of the network. The network finally predicts three kinds of feature maps: a 13×13 grid feature map, a 26×26 grid feature map and a 52×52 grid feature map. Since only one kind of target needs to be recognized here, the predicted tensor size of the 13×13 grid feature map is 13×13×(3×(5+5)), that of the 26×26 grid feature map is 26×26×(3×(5+5)), and that of the 52×52 grid feature map is 52×52×(3×(5+5)). The final predicted value is then obtained according to the maximum probability.
After YoloV3 obtains the final predicted values (t_x, t_y, t_w, t_h, p_0, p_1, ..., p_c), since the obtained (t_x, t_y, t_w, t_h) are in fact the center offsets and the width/height scaling factors of the bounding box predicted by the network, the target bounding box still needs to be calculated. The calculation principle is shown in Fig. 7: the dashed rectangle in the figure is the preset bounding box, i.e. the anchor, and the solid rectangle is the predicted target bounding box computed from the offsets predicted by the network, where (p_w, p_h) are the width and height of the preset bounding box on the feature map, (t_x, t_y, t_w, t_h) are the center offsets and width/height scaling factors predicted by the network, and (b_x, b_y, b_w, b_h) is the final predicted target bounding box.
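For reference, the decoding step just described is usually written with the standard published YoloV3 equations; they are reproduced below as an editorial note (the patent itself refers only to Fig. 7 and does not state the formulas), using the quantities named above.

```latex
% Standard YoloV3 bounding-box decoding (public formulation, consistent with
% the quantities named above; not quoted from the patent text):
\begin{aligned}
b_x &= \sigma(t_x) + c_x \\
b_y &= \sigma(t_y) + c_y \\
b_w &= p_w \, e^{t_w} \\
b_h &= p_h \, e^{t_h}
\end{aligned}
% where (c_x, c_y) is the top-left corner of the responsible grid cell,
% (p_w, p_h) are the width and height of the preset box (anchor) on the
% feature map, and \sigma is the logistic sigmoid.
```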
The YoloV3 foreign object recognition and positioning process is divided into two parts, model training and target recognition and positioning; the process is shown in Fig. 8.
Model training: target images to be detected are collected in advance (to improve the recognition effect, target images should be collected under different lighting conditions, different backgrounds, different angles and different distances, and as many images as possible should be collected, preferably no fewer than 10,000); training is then carried out to obtain the trained model features.
Target recognition and positioning: the corrected image is first read in and its resolution is converted to 416x416; the model features are then loaded, target recognition and positioning are performed, and finally the target type and position are obtained.
The YoloV3 recognition and positioning algorithm is used to recognize and locate the target; by inputting the original image, the position, width and height of the target in the original image can be obtained through the algorithm. The positioning effect is shown in Fig. 9: target recognition and positioning are performed on the input image INPUT_IMG, and the target TAG_IMG is identified and located, with position (x_lta, y_lta), width w_lta and height h_lta in the original image INPUT_IMG. The actual foreign object recognition results are shown in Fig. 10.
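As an editorial illustration of this "read image, resize to 416x416, run the trained model, recover class and box" flow, a minimal inference sketch using OpenCV's DNN module is given below; the patent does not specify a software stack, and the file names, image name and confidence threshold are placeholders.

```python
# Minimal YoloV3 inference sketch using OpenCV's DNN module, illustrating the
# recognition-and-positioning flow described above. Framework choice, file
# names and the 0.5 threshold are assumptions, not details from the patent.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")   # placeholder model files
img = cv2.imread("pipeline_frame.png")                             # corrected input image (placeholder)
h, w = img.shape[:2]

blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())           # the three scale outputs

for out in outputs:
    for det in out:                         # each row: cx, cy, w, h, objectness, class scores
        scores = det[5:]
        class_id = int(np.argmax(scores))
        confidence = float(det[4] * scores[class_id])
        if confidence > 0.5:                # placeholder threshold
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            x, y = int(cx - bw / 2), int(cy - bh / 2)
            print(f"class {class_id}, conf {confidence:.2f}, box ({x}, {y}, {int(bw)}, {int(bh)})")
```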
In an embodiment of the present invention, in step 5, after a foreign object has been detected and recognized, the position of the foreign object in the pipeline is determined in combination with the position of the detection vehicle, the foreign object information is indicated to the operating user, and the original image and the recognition result are saved to facilitate subsequent review, which provides convenience to the user.
It should be noted that in the present invention, unless otherwise explicitly specified and limited, terms such as "sliding", "rotating", "fixed" and "provided with" should be understood in a broad sense; for example, a connection may be a welded connection, a bolted connection or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection or an indirect connection through an intermediate medium, or it may be an internal communication between two elements or an interaction between two elements, unless otherwise explicitly limited. For those of ordinary skill in the art, the specific meaning of the above terms in the present invention can be understood according to the specific circumstances.
In addition, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only one independent technical solution; this manner of description is adopted only for clarity, and those skilled in the art should take the specification as a whole; the technical solutions in the embodiments may also be appropriately combined to form other embodiments that can be understood by those skilled in the art.

Claims (8)

  1. A pipeline foreign matter detection device, characterized in that the pipeline foreign matter detection device comprises:
    a detection vehicle;
    a control part, the control part being located in the detection vehicle;
    a mechanical arm, one end of the mechanical arm being connected to the detection vehicle;
    and an image acquisition part, the image acquisition part being connected to the other end of the mechanical arm.
  2. The pipeline foreign matter detection device according to claim 1, characterized in that the running speed of the detection vehicle is less than 50 mm/s, its self-weight is less than 60 kg and its payload is less than 50 kg; a driving wheel is fixedly mounted on the detection vehicle, a low-voltage hub motor is provided inside the driving wheel, and a control module is also installed on the detection vehicle.
  3. The pipeline foreign matter detection device according to claim 2, characterized in that the mechanical arm is a six-axis collaborative mechanical arm with an operating arm span of 615 mm, a self-weight of 15 kg, a payload of 3 kg, and a 10-level force protection function.
  4. The pipeline foreign matter detection device according to any one of claims 1 to 3, characterized in that the image acquisition part is an industrial camera, the industrial camera is an area-scan camera, a lens is mounted on the area-scan camera, and a polarizer is mounted on the lens.
  5. The pipeline foreign matter detection device according to claim 4, characterized in that the control part is a computer, the CPU of the computer is an i7 6-core processor, the memory is 24 GB, the hard disk is 2 TB, and the graphics card is an NVIDIA RTX 2080.
  6. A pipeline foreign matter detection method, characterized in that it is applied to the pipeline foreign matter detection device according to any one of claims 1 to 5, the method comprising the following steps:
    1) the detection vehicle advances a set distance D;
    2) the mechanical arm moves to a specified waypoint;
    3) the camera acquires an image;
    4) after the image is acquired, foreign object recognition is performed;
    5) if a foreign object is detected, a foreign object prompt is issued;
    6) return to step 2); after foreign object identification of the current cross-section is completed, continue with step 1);
    7) after the entire pipeline has been inspected, the detection vehicle exits the pipeline and the inspection is complete; to prevent foreign objects carried in by the detection vehicle from being left in the pipeline, foreign object detection can be performed again as the vehicle retreats.
  7. The pipeline foreign matter detection method according to claim 6, characterized in that in step 4, the foreign object detection can be completed under different lighting conditions, the foreign objects are metal objects, detection is completed by YoloV3 foreign object detection, and the YoloV3 algorithm uses a single CNN model to realize end-to-end target detection, directly predicting the category and position of targets from the input image.
  8. The pipeline foreign matter detection method according to claim 7, characterized in that in step 5, after a foreign object has been detected and recognized, the position of the foreign object in the pipeline is determined in combination with the position of the detection vehicle, the foreign object information is indicated to the operating user, and the original image and the recognition result are saved.
PCT/CN2021/139703 2021-09-15 2021-12-20 一种管道异物检测装置及检测方法 WO2023040104A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111078516.4A CN113700978A (zh) 2021-09-15 2021-09-15 一种管道异物检测装置及检测方法
CN202111078516.4 2021-09-15

Publications (1)

Publication Number Publication Date
WO2023040104A1 true WO2023040104A1 (zh) 2023-03-23

Family

ID=78660459

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/139703 WO2023040104A1 (zh) 2021-09-15 2021-12-20 一种管道异物检测装置及检测方法

Country Status (2)

Country Link
CN (1) CN113700978A (zh)
WO (1) WO2023040104A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116468729A (zh) * 2023-06-20 2023-07-21 南昌江铃华翔汽车零部件有限公司 一种汽车底盘异物检测方法、系统及计算机
CN117825176A (zh) * 2024-03-06 2024-04-05 江苏华冶科技股份有限公司 一种辐射管压力测试装备

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113700978A (zh) * 2021-09-15 2021-11-26 中航华东光电(上海)有限公司 一种管道异物检测装置及检测方法
CN114877165A (zh) * 2022-06-21 2022-08-09 中国十七冶集团有限公司 一种市政工程地下管道四足机器人管道检测装置及方法
CN117386931B (zh) * 2023-12-11 2024-02-13 北京热力智能控制技术有限责任公司 一种地下管网泄漏检测装置及分析系统

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103615663A (zh) * 2013-11-11 2014-03-05 江苏师范大学 管道内况检测机器人
CN103968187A (zh) * 2014-04-04 2014-08-06 中国石油大学(华东) 一种微型管道机器人
CN109268618A (zh) * 2018-09-19 2019-01-25 陈东 管道内况检测机器人
CN110145692A (zh) * 2019-05-23 2019-08-20 昆山华新建设工程有限公司 污水管道cctv检测系统及方法
US20190285555A1 (en) * 2018-03-15 2019-09-19 Redzone Robotics, Inc. Image Processing Techniques for Multi-Sensor Inspection of Pipe Interiors
CN112303375A (zh) * 2020-10-29 2021-02-02 张梅 一种管道检测机器人
CN112394730A (zh) * 2020-11-14 2021-02-23 上海源正科技有限责任公司 管道检测装置
CN113700978A (zh) * 2021-09-15 2021-11-26 中航华东光电(上海)有限公司 一种管道异物检测装置及检测方法

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102019104284A1 (de) * 2019-02-20 2020-08-20 Axel Spering Vorrichtung und Verfahren zum Kartieren eines Einlaufs
CN110455808A (zh) * 2019-07-23 2019-11-15 上海航天精密机械研究所 适用于管件内部质量检测的智能质检系统及方法
CN110751206A (zh) * 2019-10-17 2020-02-04 北京中盾安全技术开发公司 一种多目标智能成像与识别装置及方法
CN111507391A (zh) * 2020-04-13 2020-08-07 武汉理工大学 一种基于机器视觉的有色金属破碎料智能识别方法
CN111753702A (zh) * 2020-06-18 2020-10-09 上海高德威智能交通系统有限公司 目标检测方法、装置及设备
CN212643908U (zh) * 2020-07-29 2021-03-02 浙江大学城市学院 一种可涉水的管道检测车
CN113280209B (zh) * 2021-06-09 2023-02-14 中航华东光电(上海)有限公司 检测管道多余物的系统以及系统的使用方法、检测方法

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103615663A (zh) * 2013-11-11 2014-03-05 江苏师范大学 管道内况检测机器人
CN103968187A (zh) * 2014-04-04 2014-08-06 中国石油大学(华东) 一种微型管道机器人
US20190285555A1 (en) * 2018-03-15 2019-09-19 Redzone Robotics, Inc. Image Processing Techniques for Multi-Sensor Inspection of Pipe Interiors
CN109268618A (zh) * 2018-09-19 2019-01-25 陈东 管道内况检测机器人
CN110145692A (zh) * 2019-05-23 2019-08-20 昆山华新建设工程有限公司 污水管道cctv检测系统及方法
CN112303375A (zh) * 2020-10-29 2021-02-02 张梅 一种管道检测机器人
CN112394730A (zh) * 2020-11-14 2021-02-23 上海源正科技有限责任公司 管道检测装置
CN113700978A (zh) * 2021-09-15 2021-11-26 中航华东光电(上海)有限公司 一种管道异物检测装置及检测方法

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116468729A (zh) * 2023-06-20 2023-07-21 南昌江铃华翔汽车零部件有限公司 一种汽车底盘异物检测方法、系统及计算机
CN116468729B (zh) * 2023-06-20 2023-09-12 南昌江铃华翔汽车零部件有限公司 一种汽车底盘异物检测方法、系统及计算机
CN117825176A (zh) * 2024-03-06 2024-04-05 江苏华冶科技股份有限公司 一种辐射管压力测试装备
CN117825176B (zh) * 2024-03-06 2024-05-28 江苏华冶科技股份有限公司 一种辐射管压力测试装备

Also Published As

Publication number Publication date
CN113700978A (zh) 2021-11-26

Similar Documents

Publication Publication Date Title
WO2023040104A1 (zh) 一种管道异物检测装置及检测方法
CN106503653B (zh) 区域标注方法、装置和电子设备
CN102467821B (zh) 基于视频图像的路面距离检测方法及装置
CN103455144A (zh) 车载人机交互系统及方法
CN105547635A (zh) 一种用于风洞试验的非接触式结构动力响应测量方法
CN102476619A (zh) 用于检测机动车周围环境的方法
CN201837374U (zh) 一种三维信息自动快速检测仪
WO2023092870A1 (zh) 一种适用于自动驾驶车辆的挡土墙检测方法及系统
CN112388606A (zh) 一种风力发电机中螺栓状态的检测方法及检测装置
EP2926317B1 (en) System and method for detecting pedestrians using a single normal camera
CN115909092A (zh) 轻量化输电通道隐患测距方法及隐患预警装置
CN113192057A (zh) 目标检测方法、系统、设备及存储介质
Jeong et al. Point cloud segmentation of crane parts using dynamic graph CNN for crane collision avoidance
Zhang et al. Detection algorithm of takeover behavior of automatic vehicles’ drivers based on deep learning
CN112363501A (zh) 无人驾驶扫地车的避障方法、装置、系统及存储介质
Katsamenis et al. Real time road defect monitoring from UAV visual data sources
CN215706365U (zh) 一种基于3d图像识别的轨道感应板异物检测装置
Unger et al. Multi-camera bird’s eye view perception for autonomous driving
CN106875427B (zh) 一种机车蛇行运动监测方法
Unnikrishnan et al. A conical laser light-sectioning method for navigation of autonomous underwater vehicles for internal inspection of pipelines
CN115327571A (zh) 一种基于平面激光雷达的三维环境障碍物检测系统及方法
CN112113605A (zh) 基于激光slam与视觉的电缆损坏检测方法及装置
Timofejevs et al. Algorithms for computer vision based vehicle speed estimation sensor
Song et al. Design of cable detection robot and image detection method
Yusuf et al. GPU Implementation for Automatic Lane Tracking in Self-Driving Cars

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21957368

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE