CN117969554A - Pipeline defect detection robot and detection method


Info

Publication number
CN117969554A
Authority
CN
China
Prior art keywords
pipeline
image
robot
defect detection
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410109184.9A
Other languages
Chinese (zh)
Inventor
李明辉 (Li Minghui)
李嘉伟 (Li Jiawei)
晏润冰 (Yan Runbing)
张奇 (Zhang Qi)
李渊 (Li Yuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi University of Science and Technology
Original Assignee
Shaanxi University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaanxi University of Science and Technology filed Critical Shaanxi University of Science and Technology
Priority to CN202410109184.9A
Publication of CN117969554A
Legal status: Pending


Abstract

A pipeline defect detection robot and a detection method comprise a mechanical system, a control system, an image recognition system, and a pipeline defect detection system. The mechanical system provides power, protection, structural support, and image capture for the robot's movement. The control system transmits pictures in real time so that an operator can plan a path; a tether cable carries both the picture stream and the upper computer's remote control of the mechanical system, image recognition system, and pipeline defect detection system. The image recognition system imports the images acquired by the pipeline robot into a computer and runs a detection algorithm to judge whether the pipeline is defective. The pipeline defect detection system imports the in-pipeline video images identified by the image recognition system and automatically generates a corresponding pipeline defect detection report. The invention uses vision technology to identify and automatically detect pipeline defects, performing image acquisition, recognition, and classification of defects inside the pipeline, and thereby avoids the problems that the harsh in-pipe environment causes for manual operation.

Description

Pipeline defect detection robot and detection method
Technical Field
The invention belongs to the technical field of pipeline detection, and particularly relates to a pipeline defect detection robot and a detection method.
Background
Sewage pipelines urgently need to be inspected for damage, without excavation, so that hidden dangers can be eliminated. According to a preliminary survey, about 60% of underground pipeline maintenance in China currently relies on manual work, 30% uses imported equipment, and domestic equipment accounts for only 10%. The environment inside a pipeline is harsh, and long-serving pipelines can also present hazards such as toxic gas, poisoning, and electric leakage, so manual operation is both inefficient and risky. Urban underground pipeline inspection robots were therefore developed. A pipeline inspection robot has a small, simple structure, adapts to different pipe diameters, runs stably, controls well, and detects efficiently, and can enter the actual pipeline to work in place of a human.
At present, the mainstream mode of pipeline inspection is to send inspectors into the pipeline. This is not only inefficient but also a safety hazard, covers only a small range of pipeline defects, and is difficult to scale to large pipeline networks. Periscope inspection is also used, but equipment limitations mean the pipeline periscope method mainly detects pipe-mouth defects; it performs poorly on fine structural defects and functional defects and supports only preliminary detection and evaluation. Pipeline defect images are currently judged by professional inspectors who manually review large volumes of pipeline video, which is fatiguing, subjective, and inefficient.
Disclosure of Invention
In order to overcome the problems in the prior art, the invention aims to provide a pipeline defect detection robot and detection method that identify pipeline defects with vision technology and detect them automatically, acquiring images of defects inside the pipeline and recognizing and classifying them, thereby resolving the many inconveniences that the harsh in-pipe environment causes for manual operation.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
a pipeline defect detection robot comprises a mechanical system, a control system, an image recognition system and a pipeline defect detection system;
The mechanical system is used for providing power, protection, structural support and shooting for the movement of the robot;
The control system transmits pictures in real time so that an operator can plan a path; the picture transmission and the upper computer's remote control of the mechanical system, image recognition system, and pipeline defect detection system are carried over cables;
The image recognition system acquires the images shot by the mechanical system and imports them into a computer, where an algorithm detects whether the pipeline is defective;
the pipeline defect detection system automatically generates a corresponding pipeline defect detection report by importing the video image in the pipeline identified by the image identification system.
The mechanical system comprises a supporting device, a driving device, and a shooting device. The supporting device protects and structurally supports the robot; it is square and comprises a top plate and a bottom plate joined by through-pins and bolts to form a hollow frame. The electric control system is placed in the middle of the hollow frame, and the bottom of the bottom plate is connected to the control device with set screws to keep the robot reliable while moving;
The driving devices are arranged on two sides of the supporting device and comprise driving wheels, and the driving wheels adopt Mecanum wheels as driving wheels of the pipeline detection robot, so that omnidirectional flexible movement is realized;
the driving device selects motor driving as a driving mode of the pipeline detection robot, and the driving device provides advancing power for the pipeline robot during actual working.
The shooting device is arranged in front of the bottom plate of the supporting device and is a visual sensor and used for detecting defect information of the pipeline.
The control system is used for controlling the working condition of the pipeline robot in real time, and various control elements are placed on the bottom plate; the control system is divided into a motor driver module, an image acquisition module and a light source module;
the motor driving module drives a driving device of the robot through a motor;
the image acquisition module is used for acquiring images of the internal information of the pipeline and sending the acquired image information to the image recognition system for recognition and classification;
for the light source module, the upper computer analyzes the image information and sends signals to the light source module, which generates the light source accordingly.
To meet the requirement of transmitting pictures in real time so that an operator can plan a path, a pipeline robot control interface is developed on the upper computer under Ubuntu 20.04 to monitor the pipeline robot's working condition in real time; an STM32F407 single-chip microcontroller serves as the lower computer, processing the instructions sent by the upper computer and controlling the driving module so the robot runs, while cables carry both the picture stream and the upper computer's remote control of the robot.
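For illustration, the following is a minimal sketch of how the upper computer might push a velocity setpoint to the STM32F407 lower computer over the tether; the frame format, port name, and scaling are assumptions for this sketch, since the disclosure does not specify the wire protocol.

```python
# Minimal sketch of the host-side control link, assuming a hypothetical
# frame layout; the actual protocol of the STM32F407 firmware is not
# specified in this document.
import serial  # pyserial

def send_motion_command(port: str, vx: float, vy: float, wz: float) -> None:
    """Pack a chassis velocity setpoint and send it over the tether cable."""
    with serial.Serial(port, baudrate=115200, timeout=0.1) as link:
        # Hypothetical frame: header byte, three scaled int16 velocities, checksum.
        payload = b"\xAA" + b"".join(
            int(v * 1000).to_bytes(2, "big", signed=True) for v in (vx, vy, wz)
        )
        checksum = sum(payload) & 0xFF
        link.write(payload + bytes([checksum]))

# Example: creep forward at 0.2 m/s while inspecting.
# send_motion_command("/dev/ttyUSB0", vx=0.2, vy=0.0, wz=0.0)
```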
The image recognition system comprises a pipeline defect detection algorithm module of YOLO-TD;
abnormal viewing angles arise from severe camera shake caused by changes in pipeline topography, large operator-controlled deflections, and the like; such views usually appear transiently within a video segment and have a high probability of being misidentified as defective abnormal frames.
The image recognition system therefore filters out these pseudo-abnormal frames, which are not caused by pipeline defects, before the YOLO-TD pipeline defect detection algorithm module performs detection;
the pseudo-defect image filtering uses an improved Lucas-Kanade algorithm to monitor the pose change of the camera in the mechanical system's shooting device and removes video frames whose deviation amplitude exceeds a threshold from the prediction sequence, which effectively reduces the over-detection rate. Meanwhile, because of the video's temporal self-consistency, this strategy does not cause defects to be missed.
The method comprises the steps of detecting characteristic points of a current video frame, tracking the detected characteristic points by utilizing an optical flow principle, and monitoring pose changes of a camera by utilizing the moving distance and direction of the characteristic points;
the feature point detection is to select a plurality of points on an image as angular points for the image in the pipeline transmitted back by the image acquisition module in real time, select the angular points as feature points to be tracked, place a small window around the assumed angular points, and observe the average change of the intensity value in a certain direction in the window.
Further, the corner points are pixel points with higher average intensity variation values in multiple directions;
assuming a displacement vector (μ, γ), the average intensity change is:
E(μ, γ) = Σ_(x,y) W(x, y)·[I(x + μ, y + γ) − I(x, y)]²  (1)
where I(x, y) is the pixel grayscale of the in-pipeline image;
W(x, y) is the window weight at the corresponding pixel coordinates determined by the in-pipeline image;
taylor expansion of formula (1) gives formula (2), and conversion of formula (2) into matrix form gives formula (3):
The middle matrix is recorded as M, and two characteristic values of the matrix respectively represent the maximum average intensity value change and the average intensity value change in the vertical direction;
If both eigenvalues are large, the position is a corner; the corner response function S is defined as follows:
S = λ1·λ2 − k(λ1 + λ2)² = det(M) − k·trace²(M)
where λ1, λ2 are the two eigenvalues of the matrix M;
det(·) is the matrix determinant;
trace(·) is the trace of the matrix.
A window center whose response function exceeds a threshold δ is regarded as a corner, since the overall grayscale of a pipeline scene tends to be smooth. Non-maximum suppression is also added to the algorithm, filtering corners adjacent to an extremum so that the detected corners are distributed more evenly across the view.
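For illustration, this corner detection step can be realized with OpenCV's goodFeaturesToTrack, which evaluates the same eigenvalue-based response and applies non-maximum suppression; qualityLevel stands in for the threshold δ and blockSize for the observation window, while minDistance and the Harris k are assumed values not given in the text.

```python
# Minimal sketch of the corner (feature point) detection step.
import cv2
import numpy as np

def detect_corners(frame_bgr: np.ndarray, max_corners: int = 200) -> np.ndarray:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(
        gray,
        maxCorners=max_corners,
        qualityLevel=0.1,    # plays the role of the threshold δ
        minDistance=10,      # non-maximum suppression radius (assumed value)
        blockSize=15,        # observation window size WinSize
        useHarrisDetector=True,
        k=0.04,              # Harris k (typical value; not given in the text)
    )
    return corners if corners is not None else np.empty((0, 1, 2), np.float32)
```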
The feature point tracking is inter-frame feature point tracking: exploiting the continuity of video frames, the detected corners are tracked with the Lucas-Kanade algorithm, the new positions of the current frame's corners are located in subsequent frames, and the camera's deflection is judged from the magnitude of the position change. Assuming the intensity of the same feature point is unchanged between adjacent frames, the process searches for the displacement (μ, γ) such that:
I_t(x, y) = I_(t+1)(x + μ, y + γ)
where I_t and I_(t+1) denote the current frame and the next frame of the in-pipeline image acquired by the mechanical device;
the constant-intensity assumption holds for small displacements between adjacent images. Taylor-expanding the expression and removing the two terms representing the intensity values yields:
I_x·μ + I_y·γ + ΔI = 0, with ΔI = I_(t+1)(x, y) − I_t(x, y)
A single equation cannot solve for two unknowns. The L-K algorithm assumes the optical flow vector is constant within a pixel neighborhood and then solves the optical flow equations of all pixels in the neighborhood by least squares. However, improving tracking accuracy calls for a smaller neighborhood window, while handling fast, large-displacement motion calls for a larger one. The pyramid L-K algorithm is introduced to resolve this contradiction.
The pyramid L-K algorithm is an improved Lucas-Kanade algorithm, proposed to resolve the conflict in feature point tracking between accuracy and handling fast, large-displacement motion. The optical flow is first computed at the topmost layer of an image pyramid; the motion estimated at each layer is used as the initial value for the layer below, where the optical flow vector is computed on that basis; this estimation is repeated down to the bottom layer of the pyramid, whose optical flow vector is taken as the final result.
The optical flow computation adopts a bidirectional optical flow method:
in the current frame I_t, the L-K algorithm finds, for each feature point S_t (the corners above), the corresponding feature point S_(t+1) in the next frame I_(t+1); the computation is then reversed, taking I_(t+1) as the current frame and computing the feature point S_tr in I_t corresponding to S_(t+1). If the deviation between S_t and S_tr is below a threshold, the track is judged successful. Practical verification shows the bidirectional optical flow method effectively improves feature point tracking in this scene. Once the feature point positions of adjacent frames are obtained, a frame is judged by:
(1/n)·Σ_(i=1..n) |S_(t+1),i − S_t,i| > δ
where n is the number of feature points in the current frame, |S_(t+1),i − S_t,i| is the sum of the x and y coordinate differences of feature point i between the two frames, and δ is a set threshold; a frame satisfying this inequality is deemed a pseudo-abnormal frame and is removed from the defect detection sequence.
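A minimal sketch of the bidirectional (forward-backward) pyramid L-K check and the pseudo-abnormal-frame test described above, using OpenCV; the thresholds fb_thresh and delta and the pyramid depth are assumed values, and pts_t is the float32 N×1×2 corner array from the detection step.

```python
# Forward-backward pyramid Lucas-Kanade check plus the mean-displacement test.
import cv2
import numpy as np

LK_PARAMS = dict(winSize=(15, 15), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def is_pseudo_abnormal(prev_gray, next_gray, pts_t, fb_thresh=1.0, delta=20.0):
    # Forward flow: S_t -> S_{t+1}
    pts_t1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts_t, None, **LK_PARAMS)
    # Backward flow: S_{t+1} -> S_tr
    pts_tr, st_b, _ = cv2.calcOpticalFlowPyrLK(next_gray, prev_gray, pts_t1, None, **LK_PARAMS)
    # Keep tracks whose forward-backward deviation |S_t - S_tr| is small.
    fb_err = np.abs(pts_t - pts_tr).reshape(-1, 2).sum(axis=1)
    good = (st.ravel() == 1) & (st_b.ravel() == 1) & (fb_err < fb_thresh)
    if not good.any():
        return True  # no reliable tracks: treat the view as abnormal
    # Mean per-point displacement between frames, summed over x and y.
    disp = np.abs(pts_t1 - pts_t).reshape(-1, 2).sum(axis=1)[good]
    return disp.mean() > delta
```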
The YOLO-TD pipeline defect detection algorithm evolves from the YOLO v5s model by introducing a Swin Transformer structure to extract semantic features, a DSPP module, and a redesigned loss function.
Inputting the image to be detected after the pseudo-defect image filtration into a backbone network of a YOLO v5s model for feature extraction, then strengthening the robustness of the features through a neck network, and finally detecting the pipeline defects through a detection head consisting of convolution layers.
The YOLO v5s model includes: an input, a reference network, neck networks, and a Head output;
The input stage receives the pictures captured by the mechanical device; the network's input image size is 608×608, and an image preprocessing stage scales the input image to the network input size, normalizes it, and so on. During training, YOLO v5 uses the Mosaic data augmentation operation to improve the model's training speed and the network's accuracy, and provides adaptive anchor box computation and adaptive image scaling.
The backbone is typically a high-performing classifier network used to extract generic feature representations from the input picture. YOLO v5 uses both the CSPDarknet structure and the Focus structure in its backbone.
The CSPDarknet structure comes in two variants: CSP1_X is applied in the backbone, while CSP2_X is applied in the Neck network.
The Focus structure slices the image so that the length and width information of the original image is concentrated into the channel dimension, yielding a twice-downsampled feature map with no loss of image information, which improves detection accuracy and efficiency.
The Neck network sits between the backbone and the head network; it further improves the diversity and robustness of the features extracted from the input, and is implemented with the SPP module.
The Head is used for completing output of a target detection result. The number of branches at the output end is different for different detection algorithms, and usually comprises a classification branch and a regression branch.
The YOLO-TD provided by the invention does not use CSPDarknet as its backbone network, but instead selects the better-performing Swin Transformer, so that the backbone can extract higher-level semantic features.
The Swin Transformer architecture uses moving (shifted) windows to limit self-attention computation to non-overlapping local windows while still allowing cross-window connections; specifically, Swin Transformer replaces the multi-head self-attention (MSA) module in the Transformer with a moving-window-based module. A LayerNorm layer is applied before each MSA module and each MLP, and a residual connection is applied after each module.
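As a toy sketch of the window-attention idea (not the authors' implementation), the block below partitions a feature map into non-overlapping windows, runs multi-head self-attention inside each window with LayerNorm before each sub-module and residual connections after, and then reverses the partition; the shifted-window step and relative position bias of the full Swin design are omitted for brevity.

```python
# Simplified window-based self-attention block in the Swin style.
import torch
import torch.nn as nn

class WindowAttentionBlock(nn.Module):
    def __init__(self, dim=96, heads=3, window=7):
        super().__init__()
        self.window = window
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):  # x: (B, H, W, C); H and W divisible by the window size
        B, H, W, C = x.shape
        w = self.window
        # Partition into non-overlapping w x w windows -> (num_windows*B, w*w, C)
        win = (x.view(B, H // w, w, W // w, w, C)
                .permute(0, 1, 3, 2, 4, 5)
                .reshape(-1, w * w, C))
        h = self.norm1(win)
        win = win + self.attn(h, h, h, need_weights=False)[0]  # MSA + residual
        win = win + self.mlp(self.norm2(win))                  # MLP + residual
        # Reverse the window partition back to (B, H, W, C)
        return (win.view(B, H // w, W // w, w, w, C)
                   .permute(0, 1, 3, 2, 4, 5)
                   .reshape(B, H, W, C))
```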
However, the SPP module only performs pooling during detection, so detail information in the image features is irreversibly lost, causing a high miss rate in pipeline defect detection. The invention therefore proposes a DSPP module to replace the original SPP module for the pooling operation.
After the image features produced by the Swin Transformer structure are input into the DSPP module, it performs max-pooling at different scales through four branches, as SPP does; but the max-pooled information of each branch is concatenated with the feature information of the other branches as the input of the next pooling operation, and the amount of complementary pooling information is determined by the amount of information otherwise lost. This effectively resolves the information loss in SPP and further improves the accuracy of pipeline defect detection.
When the input feature information is X, the output information Y of the DSPP module is:
Y = X + X2 + X3 + X4
where X2, X3, X4 are given by:
X2 = P5(X)
X3 = P9(X + X2)
X4 = P13(X + X2 + X3)
where P_a denotes an a×a max-pooling operation and '+' denotes the concat operation.
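A minimal PyTorch sketch of the DSPP formulas above, reading '+' as channel concatenation; kernel sizes follow the 5/9/13 branches, and stride-1 pooling with matching padding (an assumption) keeps the spatial size unchanged so the branches can be concatenated.

```python
# Dense spatial pyramid pooling (DSPP) sketch following the formulas above.
import torch
import torch.nn as nn

class DSPP(nn.Module):
    def __init__(self):
        super().__init__()
        self.p5 = nn.MaxPool2d(5, stride=1, padding=2)
        self.p9 = nn.MaxPool2d(9, stride=1, padding=4)
        self.p13 = nn.MaxPool2d(13, stride=1, padding=6)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x2 = self.p5(x)                               # X2 = P5(X)
        x3 = self.p9(torch.cat([x, x2], dim=1))       # X3 = P9(X + X2)
        x4 = self.p13(torch.cat([x, x2, x3], dim=1))  # X4 = P13(X + X2 + X3)
        return torch.cat([x, x2, x3, x4], dim=1)      # Y = X + X2 + X3 + X4

# e.g. DSPP()(torch.randn(1, 64, 32, 32)).shape -> torch.Size([1, 512, 32, 32])
```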
The loss function consists of three losses: the object loss Loss_obj, the classification loss Loss_cls, and the box loss Loss_box. Loss_obj and Loss_cls are both computed with the binary cross-entropy loss Loss_BCE:
Loss_BCE = −(1/N)·Σ_(i=1..N) [t_i·ln σ(Label_i) + (1 − t_i)·ln(1 − σ(Label_i))]
where N is the total number of images, σ(·) is the Sigmoid function, and t_i and Label_i are the true and predicted values, respectively. The detection box loss Loss_box is:
Loss_box=1-GIoU
where GIoU is:
GIoU = IoU − |C − (A ∪ B)| / |C|
where IoU is the intersection-over-union, A is the detection box, B is the prediction box, and C is the smallest enclosing box that contains both. The overall loss of the detection model is:
Loss = α·Loss_obj + β·Loss_cls + γ·(1 − GIoU)
where α = 0.45, β = 0.5, γ = 0.05.
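A sketch of the combined loss under the stated weights; the GIoU term follows the definition above, and the tensor layout (corner-format boxes, logits for the BCE terms) is an assumption for illustration.

```python
# Combined YOLO-TD-style loss: weighted object, class, and GIoU box terms.
import torch
import torch.nn.functional as F

def giou(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """a, b: (..., 4) boxes as (x1, y1, x2, y2)."""
    inter_wh = (torch.min(a[..., 2:], b[..., 2:]) - torch.max(a[..., :2], b[..., :2])).clamp(min=0)
    inter = inter_wh[..., 0] * inter_wh[..., 1]
    area_a = (a[..., 2] - a[..., 0]) * (a[..., 3] - a[..., 1])
    area_b = (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    union = area_a + area_b - inter
    iou = inter / union.clamp(min=1e-7)
    # Smallest enclosing box C
    c_wh = (torch.max(a[..., 2:], b[..., 2:]) - torch.min(a[..., :2], b[..., :2])).clamp(min=0)
    c_area = (c_wh[..., 0] * c_wh[..., 1]).clamp(min=1e-7)
    return iou - (c_area - union) / c_area

def yolo_td_loss(obj_logits, obj_t, cls_logits, cls_t, boxes_pred, boxes_t,
                 alpha=0.45, beta=0.5, gamma=0.05):
    loss_obj = F.binary_cross_entropy_with_logits(obj_logits, obj_t)
    loss_cls = F.binary_cross_entropy_with_logits(cls_logits, cls_t)
    loss_box = (1.0 - giou(boxes_pred, boxes_t)).mean()  # 1 - GIoU
    return alpha * loss_obj + beta * loss_cls + gamma * loss_box
```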
The pipeline defect detection system uses PyQt5 to design a graphical user interface for end-to-end detection.
Through this interface, the video images shot inside the pipeline by the pipeline robot are imported and a corresponding pipeline defect detection report is then generated automatically. A test was run on 100 pipeline images, comprising 80 normal samples and 20 defective samples; the defective samples covered cracks, disjoints, obstacles, and concealed branch-pipe connections, and the pipeline defect types were all judged correctly in the test.
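A minimal PyQt5 sketch of such an end-to-end interface: import an inspection video, run a detector callable over it, and display the generated report. The detector shown is a placeholder, not the YOLO-TD pipeline itself.

```python
# Minimal end-to-end inspection GUI sketch with PyQt5.
import sys
from PyQt5.QtWidgets import (QApplication, QFileDialog, QMainWindow,
                             QPushButton, QTextEdit, QVBoxLayout, QWidget)

class InspectionWindow(QMainWindow):
    def __init__(self, detector):
        super().__init__()
        self.detector = detector  # callable: video path -> list of findings
        self.setWindowTitle("Pipeline Defect Detection")
        load_btn = QPushButton("Import pipeline video")
        load_btn.clicked.connect(self.run_detection)
        self.report = QTextEdit(readOnly=True)
        layout = QVBoxLayout()
        layout.addWidget(load_btn)
        layout.addWidget(self.report)
        container = QWidget()
        container.setLayout(layout)
        self.setCentralWidget(container)

    def run_detection(self):
        path, _ = QFileDialog.getOpenFileName(self, "Select video", "", "Video (*.mp4 *.avi)")
        if path:
            findings = self.detector(path)
            self.report.setPlainText("\n".join(findings) or "No defects found.")

# app = QApplication(sys.argv)
# win = InspectionWindow(detector=lambda p: ["crack @ 00:12", "disjoint @ 01:03"])
# win.show(); sys.exit(app.exec_())
```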
A pipeline detection method of a pipeline defect detection robot comprises the following steps;
Step 1: a camera in the mechanical system's shooting device collects information, and the electronic control section transmits the real-time picture to the PC; after confirming everything works, the vehicle is driven forward to perform defect detection;
Step 2: pseudo-defect image filtering: after the camera completes image acquisition and the image is passed to the image acquisition module, a small window is placed around each hypothesized corner and the average intensity change in a given direction within the window is observed; assuming a displacement vector (μ, γ), the average intensity change is:
E(μ, γ) = Σ_(x,y) W(x, y)·[I(x + μ, y + γ) − I(x, y)]²  (1)
where I(x, y) is the pixel grayscale of the image;
W(x, y) is the neighborhood window of the interest point;
Taylor expansion of formula (1) gives formula (2), and rewriting formula (2) in matrix form gives formula (3):
E(μ, γ) ≈ Σ_(x,y) W(x, y)·[I_x·μ + I_y·γ]²  (2)
E(μ, γ) ≈ [μ γ]·M·[μ γ]^T  (3)
The middle matrix is denoted M; its two eigenvalues represent the maximum average intensity change and the average intensity change in the perpendicular direction; if both are large, the position is a corner. The corner response function S is defined as equation (4):
S = λ1·λ2 − k(λ1 + λ2)² = det(M) − k·trace²(M)  (4)
where λ1, λ2 are the two eigenvalues of the matrix M;
det(·) is the matrix determinant;
trace(·) is the trace of the matrix.
A window center whose response function exceeds a threshold δ is regarded as a corner; since the overall grayscale of a pipeline scene tends to be smooth, δ is set to 0.1 and the window size WinSize to 15; non-maximum suppression is also added to the algorithm, filtering corners adjacent to an extremum so that the detected corners are distributed more evenly across the view;
Step 3: exploiting the continuity of video frames, the detected corners are tracked with the Lucas-Kanade algorithm, the new positions of the current frame's corners are located in subsequent frames, and the camera's deflection is judged from the magnitude of the position change; assuming the intensity of the same feature point is unchanged between adjacent frames, the process searches for the displacement (μ, γ) such that:
I_t(x, y) = I_(t+1)(x + μ, y + γ)  (1)
where I_t and I_(t+1) denote the current frame and the next frame, respectively; the constant-intensity assumption holds for small displacements between adjacent images;
Taylor expansion of formula (1) gives formula (2), and removing the two terms representing the intensity values gives formula (3):
I_t(x, y) ≈ I_(t+1)(x, y) + I_x·μ + I_y·γ  (2)
I_x·μ + I_y·γ + ΔI = 0, with ΔI = I_(t+1)(x, y) − I_t(x, y)  (3)
A single equation cannot solve for two unknowns. The L-K algorithm assumes the optical flow vector is constant within a pixel neighborhood and then solves the optical flow equations of all pixels in the neighborhood by least squares. However, improving tracking accuracy calls for a smaller neighborhood window, while handling fast, large-displacement motion calls for a larger one. To resolve this contradiction, the pyramid L-K algorithm is introduced: the optical flow is computed at the topmost layer of the image pyramid, the motion estimated at each layer is used as the initial value for the layer below, where the optical flow vector is computed on that basis, and this estimation is repeated down to the bottom layer of the pyramid, whose optical flow vector is taken as the final result. This strategy lets the L-K algorithm cope with high-speed, large-displacement motion, improving the algorithm's accuracy and robustness in real detection scenes.
In the current frame I_t, the L-K algorithm detects the feature point S_(t+1) in the next frame I_(t+1) corresponding to the current frame's feature point S_t; the computation is then reversed, taking I_(t+1) as the current frame and I_t as the next frame, and computing the feature point S_tr in I_t corresponding to S_(t+1); tracking is judged successful if the deviation between S_t and S_tr is below a threshold.
The pseudo-abnormal-frame criterion is:
(1/n)·Σ_(i=1..n) |S_(t+1),i − S_t,i| > δ  (5)
where n is the number of feature points in the current frame, |S_(t+1),i − S_t,i| is the sum of the x and y coordinate differences of feature point i between the two frames, and δ is a set threshold; a frame satisfying this inequality is deemed a pseudo-abnormal frame and is removed from the defect detection sequence;
Step 4: after pseudo-defect image filtering, the photos extracted by the mechanical section are detected with the YOLO-TD-based pipeline defect detection algorithm. The image to be detected is first input into the Swin Transformer for feature extraction. The Swin Transformer limits self-attention computation to non-overlapping local windows through a moving window while allowing cross-window connections, improving efficiency; in addition, its hierarchical structure offers the flexibility of modeling at different scales and strengthens the model's ability to capture multi-scale information. Specifically, Swin Transformer replaces the multi-head self-attention (MSA) module in the Transformer with a moving-window-based module, addressing the long-standing problem that Transformers are slow when applied to computer vision. A LayerNorm layer is applied before each MSA module and each MLP, and a residual connection is applied after each module.
Step 5: after extraction, the image features are input into the DSPP, where, as in SPP, max-pooling at different scales is performed through four branches of 1×1, 5×5, 9×9, and 13×13; the max-pooled information of each branch is concatenated with the feature information of the other branches as the input of the next pooling operation, and the amount of complementary pooling information is determined by the amount of information otherwise lost, which effectively resolves the information loss in SPP and further improves pipeline defect detection accuracy. When the input feature information is X, the output information Y of the DSPP module is given by formula (6):
Y=X+X2+X3+X4 (6)
where X2, X3, X4 are given by formulas (7) to (9):
X2=P5(X) (7)
X3=P9(X+X2) (8)
X4=P13(X+X2+X3) (9)
where P_a denotes an a×a max-pooling operation and '+' denotes the concat operation.
Step 6: the neck network strengthens the robustness of the features, and finally pipeline defect detection is performed by detection heads composed of convolution layers.
The invention has the beneficial effects that:
1. The invention utilizes the computer vision technology to automatically detect the defect types of the acquired pipeline images.
2. The invention uses the Swin Transformer structure, which readily captures global information, to extract stronger semantic features, enhancing the network's ability to extract defect features.
3. The invention designs a dense space pyramid pooling module and introduces the module into the YOLO-TD to acquire more abundant detail information in the target.
4. For false defect images caused by pipeline topography changes, operator-controlled deflection, and the like, an improved bidirectional optical flow method monitors the camera's pose change and filters out the false abnormal images produced by excessive pose changes.
Drawings
FIG. 1 is a schematic diagram of the overall scheme of the present invention.
FIG. 2 is a block diagram of the overall detection system of the present invention.
FIG. 3 is the basic wheelset layout of the present invention.
FIG. 4 is a diagram of a control center module of the present invention.
FIG. 5 is a block diagram of a servo control system of the present invention.
FIG. 6 is a block diagram of a fuzzy adaptive PID controller according to the invention.
FIG. 7 is a schematic diagram of the pyramid Lucas-Kanade optical flow of the present invention.
FIG. 8 is a diagram of the YOLO-TD network of the present invention.
FIG. 9 is a network diagram of the Swin Transformer feature extraction of the present invention.
FIG. 10 is a structural view of the DSPP of the present invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings.
As shown in FIGS. 1 and 2, the present invention provides a pipeline inspection robot capable of image collection and identification/classification of defects inside a pipeline. It comprises a mechanical part, an electric control part, an algorithm identification part, and a defect detection report.
The robot is sent to the work site, starts its camera at the initial position, and transmits the real-time picture to the PC through the communication cable; after confirming everything works, the vehicle is driven forward for defect detection. Because the environment inside the pipeline is harsh and complex, operators must operate in real time to ensure the robot can safely and effectively capture pictures for defect classification.
The mechanical part consists of a supporting device, a driving device and a shooting device.
The supporting part is composed of a top plate and a bottom plate joined by through-pins and bolts into a hollow frame; the space reserved in the middle holds the electric control system, and the control device is fixed to the bottom plate with set screws to keep the robot reliable while moving. To enhance the pipeline robot's terrain adaptability, a single-rotating-shaft connection is designed into the body chassis: the chassis is divided into two parts connected coaxially through multiple bearings, giving relative freedom of movement between the front and rear chassis and between the front and rear wheels. This ensures better terrain adaptability, keeps all four wheels gripping the ground simultaneously, and reduces motion error.
The driving device is responsible for the robot's movement and mainly comprises a DC motor, a coupling, and driving wheels; the motor drives the wheels through the coupling to move the pipeline robot. To give the robot flexible maneuverability in the face of obstacles such as defects inside the pipeline, Mecanum wheels are selected as the driving wheels of the pipeline inspection robot, enabling omnidirectional flexible movement. Each roller can rotate about its own axis, about the hub's central axis, and about its contact point with the ground, providing three degrees of freedom; the wheel as a whole can rotate about the hub's central axis and move vertically about the roller axis, providing two degrees of freedom. Multiple Mecanum wheels can therefore be combined into wheelsets in specific arrangements, and omnidirectional movement of the whole system is achieved by controlling each wheel's rotational speed.
Because the rollers are mounted on the hub in an offset manner and the offset directions are different, the intelligent pipeline robot adopting four-wheel drive has various wheel group layout manners.
The invention adopts the wheelset layout shown in FIG. 3; the roller offset angles of the four Mecanum wheels are all ±α, with the angle to the x-axis positive in the counterclockwise direction.
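For illustration, the standard inverse kinematics for this kind of ±45° Mecanum wheelset maps chassis velocities to the four wheel speeds; the geometry values below are assumptions for the sketch, not dimensions from the disclosure.

```python
# Mecanum inverse kinematics sketch: chassis (vx, vy, wz) -> wheel speeds.
import numpy as np

def mecanum_wheel_speeds(vx, vy, wz, L=0.10, W=0.08, wheel_radius=0.03):
    """Return [front-left, front-right, rear-left, rear-right] in rad/s.
    L, W: half wheelbase length and half track width (assumed values)."""
    k = L + W
    wheel_linear = np.array([
        vx - vy - k * wz,   # front-left
        vx + vy + k * wz,   # front-right
        vx + vy - k * wz,   # rear-left
        vx - vy + k * wz,   # rear-right
    ])
    return wheel_linear / wheel_radius

# Pure sideways crawl: only the roller geometry makes this possible.
# print(mecanum_wheel_speeds(vx=0.0, vy=0.15, wz=0.0))
```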
Images are acquired with an OV5640 camera. The control system monitors the pipeline robot's working condition in real time; the various control elements are placed on the bottom plate, and the control system connects to the motor driver, the image acquisition module, and the light source module. The structure is shown in FIG. 4.
The motor driver is further connected with the first motor, the second motor, the third motor and the fourth motor.
Further, the driving wheel can rotate under the driving of the first motor, the second motor, the third motor and the fourth motor.
The image acquisition module is used for carrying out image acquisition on the internal information of the pipeline, and sending the acquired image information to the control system for identification and classification.
For the light source module, the upper computer analyzes the image information and sends signals to the light source module, which generates the light source accordingly.
The control module uses a single-chip microcontroller as the control core, a brushed DC motor as the actuator, the pipeline robot's wheel speed as the controlled variable, and a Hall-effect speed sensor (using the period-measurement method) as the feedback device, forming a fully closed-loop speed servo control system whose structure is shown in FIG. 5.
As for the control algorithm, the actual pipeline is affected by uncertain factors such as its own condition and external corrosion, so the pipeline robot's control process is complex and variable, an accurate mathematical model is hard to build, and conventional PID control performs poorly. Fuzzy adaptive PID control is therefore adopted, as shown in FIG. 6: PID control parameters are adjusted on-line through fuzzy reasoning according to the control system's real-time feedback.
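A compact sketch of the fuzzy-adaptive idea: a conventional PID whose gains are nudged on-line by simple rules on the error and its rate of change. The rule table here is illustrative only; the patent's actual membership functions and rule base are not given in the text.

```python
# Illustrative fuzzy-adaptive PID: gains adjusted on-line from feedback.
class FuzzyAdaptivePID:
    def __init__(self, kp=1.2, ki=0.4, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def _adapt(self, e, de):
        # Rule of thumb: large error -> raise Kp and damp Ki; error shrinking
        # while still large -> raise Kd to suppress overshoot.
        if abs(e) > 1.0:
            self.kp *= 1.02; self.ki *= 0.98
        elif abs(e) < 0.1:
            self.kp *= 0.99; self.ki *= 1.01
        if e * de < 0:  # error is being corrected
            self.kd *= 1.01

    def update(self, setpoint, measured, dt):
        e = setpoint - measured
        de = (e - self.prev_error) / dt
        self._adapt(e, de)
        self.integral += e * dt
        self.prev_error = e
        return self.kp * e + self.ki * self.integral + self.kd * de
```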
The image acquisition module collects images of the pipeline's internal condition and transmits them to the upper computer through a cable, where the algorithm identification part performs identification and classification.
Abnormal viewing angles appear in the image due to severe camera shake caused by changes in pipeline topography, large operator-controlled deflections, and the like; such views usually appear transiently within a video segment and have a high probability of being misidentified as defective abnormal frames.
Therefore, in the pseudo-defect image filtering part, the pose change of the camera is monitored with an improved Lucas-Kanade algorithm, and video frames whose deviation amplitude exceeds a threshold are removed from the prediction sequence, effectively reducing the over-detection rate. The principle of the L-K algorithm is to detect the corners of the current video frame, track them using the optical flow principle, and monitor the camera's pose change from the distance and direction the corners move. A corner is defined as a pixel with relatively high average intensity change in multiple directions. A small window is placed around the hypothesized interest point and the average intensity change in a given direction within the window is observed. Assuming a displacement vector (μ, γ), the average intensity change is:
E(μ, γ) = Σ_(x,y) W(x, y)·[I(x + μ, y + γ) − I(x, y)]²  (1)
where I(x, y) is the pixel grayscale of the image;
W(x, y) is the neighborhood window of the interest point.
Taylor expansion of equation (1) gives equation (2), and rewriting equation (2) in matrix form gives equation (3):
E(μ, γ) ≈ Σ_(x,y) W(x, y)·[I_x·μ + I_y·γ]²  (2)
E(μ, γ) ≈ [μ γ]·M·[μ γ]^T  (3)
The middle matrix is denoted M; its two eigenvalues represent the maximum average intensity change and the average intensity change in the perpendicular direction, respectively. If both are large, the position is a corner. The corner response function S is set as:
S = λ1·λ2 − k(λ1 + λ2)² = det(M) − k·trace²(M)
where λ1, λ2 are the two eigenvalues of the matrix M;
det(·) is the matrix determinant;
trace(·) is the trace of the matrix.
A window center whose response function exceeds a threshold δ is regarded as a corner. Since the overall grayscale of a pipeline scene tends to be smooth, δ is set to 0.1 and the window size WinSize to 15. Non-maximum suppression is also added to the algorithm, filtering corners adjacent to an extremum so that the detected corners are distributed more evenly across the view.
After the corners are identified, they are tracked with the pyramid L-K algorithm, exploiting the continuity of video frames; the new positions of the current frame's corners are located in subsequent frames, and the camera's deflection is judged from the magnitude of the position change.
The pyramid L-K algorithm is to calculate the optical flow at the top layer of the image pyramid, take the motion result estimated by the previous layer as the initial value of the next layer pyramid, calculate the optical flow vector on the basis of the next layer, repeat the estimation until reaching the bottom layer of the pyramid, and take the optical flow vector at the bottom layer as the final result, as shown in fig. 7.
Because both the background and foreground of the scene may be moving, erroneous tracking easily occurs. To solve this, a bidirectional optical flow method is adopted: in the current frame I_t, the L-K algorithm detects the feature point S_(t+1) in the next frame I_(t+1) corresponding to the current frame's feature point S_t; the computation is then reversed, taking I_(t+1) as the current frame and I_t as the next frame, and computing the feature point S_tr in I_t corresponding to S_(t+1); tracking is considered successful if the deviation between S_t and S_tr is below a threshold. Practical verification shows the bidirectional optical flow method effectively improves feature point tracking in this scene. After the feature point positions of adjacent frames are acquired, a frame is judged by:
(1/n)·Σ_(i=1..n) |S_(t+1),i − S_t,i| > δ
where n is the number of feature points in the current frame, |S_(t+1),i − S_t,i| is the sum of the x and y coordinate differences of feature point i between the two frames, and δ is a set threshold; a frame satisfying this inequality is deemed a pseudo-abnormal frame and is removed from the defect detection sequence.
The visual detection algorithm uses the YOLO-TD-based pipeline defect detection algorithm: the image to be detected is input into the backbone network for feature extraction, the neck network then strengthens the robustness of the features, and finally pipeline defect detection is performed by detection heads composed of convolution layers, as shown in FIG. 8. Unlike YOLO v5s, the YOLO-TD proposed here no longer uses CSPDarknet as the backbone network, but instead selects the better-performing Swin Transformer, letting the backbone extract higher-level semantic features. Meanwhile, in YOLO-TD, the dense spatial pyramid pooling (DSPP) module is designed to replace the SPP module of YOLO v5s, capturing multi-scale detail features.
The Swin Transformer uses moving windows to limit self-attention computation to non-overlapping local windows while allowing cross-window connections, improving efficiency. In addition, its hierarchical structure offers the flexibility of modeling at different scales and strengthens the model's ability to capture multi-scale information. Specifically, Swin Transformer replaces the multi-head self-attention (MSA) module in the Transformer with a moving-window-based module, addressing the long-standing problem that Transformers are slow when applied to computer vision. A LayerNorm layer is applied before each MSA module and each MLP, and a residual connection is applied after each module, as shown in FIG. 9.
After the image features are input into the DSPP, max-pooling at different scales is performed through 1×1, 5×5, 9×9, and 13×13 branches, as in SPP; the max-pooled information of each branch is concatenated with the feature information of the other branches as the input of the next pooling operation, and the amount of complementary pooling information is likewise determined by the amount of information otherwise lost. When the input feature information is X, the output information Y of the DSPP module is:
Y=X+X2+X3+X4
where X2, X3, X4 are given by:
X2 = P5(X)
X3 = P9(X + X2)
X4 = P13(X + X2 + X3)
where P_a denotes an a×a max-pooling operation and '+' denotes the concat operation; the DSPP structure is shown schematically in FIG. 10.
The loss function of the detection model YOLO-TD consists of three losses: the object loss Loss_obj, the classification loss Loss_cls, and the box loss Loss_box. Loss_obj and Loss_cls are both computed with the binary cross-entropy loss Loss_BCE:
Loss_BCE = −(1/N)·Σ_(i=1..N) [t_i·ln σ(Label_i) + (1 − t_i)·ln(1 − σ(Label_i))]
where N is the total number of images, σ(·) is the Sigmoid function, and t_i and Label_i are the true and predicted values, respectively. The detection box loss Loss_box is:
Loss_box=1-GIoU
where GIoU is:
GIoU = IoU − |C − (A ∪ B)| / |C|
where IoU is the intersection-over-union, A is the detection box, B is the prediction box, and C is the smallest enclosing box that contains both. The overall loss of the detection model is:
Loss = α·Loss_obj + β·Loss_cls + γ·(1 − GIoU)
where α = 0.45, β = 0.5, γ = 0.05.
Simulation test: the pipeline defect detection system uses PyQt5 to design a graphical user interface for end-to-end detection.
Through this interface, the video images shot inside the pipeline by the pipeline robot are imported and a corresponding pipeline defect detection report is then generated automatically. A test was run on 100 pipeline images, comprising 80 normal samples and 20 defective samples; the defective samples covered cracks, disjoints, obstacles, and concealed branch-pipe connections, and the pipeline defect types were all judged correctly in the test.

Claims (9)

1. The pipeline defect detection robot is characterized by comprising a mechanical system, a control system, an image recognition system and a pipeline defect detection system;
The mechanical system is used for providing power, protection, structural support and shooting for the movement of the robot;
The control system transmits pictures in real time so that an operator can plan a path; the picture transmission and the upper computer's remote control of the mechanical system, image recognition system, and pipeline defect detection system are carried over cables;
The image recognition system acquires the images shot by the mechanical system and imports them into a computer, where an algorithm detects whether the pipeline is defective;
the pipeline defect detection system automatically generates a corresponding pipeline defect detection report by importing the video image in the pipeline identified by the image identification system.
2. The pipe defect inspection robot of claim 1, wherein the mechanical system comprises a support device, a drive device, a camera device;
the supporting device protects and structurally supports the robot; it is square and comprises a top plate and a bottom plate joined by through-pins and bolts to form a hollow frame; the electric control system is placed in the middle of the hollow frame, and the bottom of the bottom plate is connected to the control system with set screws;
The driving devices are arranged on two sides of the supporting device and comprise driving wheels, and the driving wheels adopt Mecanum wheels as driving wheels of the pipeline detection robot;
The shooting device is arranged in front of the bottom plate of the supporting device and is a visual sensor and used for detecting defect information of the pipeline.
3. The pipeline defect detection robot of claim 1, wherein the control system is used for controlling the working condition of the pipeline robot in real time, and various control elements are placed on the bottom plate; the control system is divided into a motor driver module, an image acquisition module and a light source module;
the motor driving module drives a driving device of the robot through a motor;
the image acquisition module is used for acquiring images of the internal information of the pipeline by receiving the pictures shot by the shooting device, and sending the acquired image information to the image recognition system for recognition and classification;
for the light source module, the upper computer analyzes the image information and sends signals to the light source module, which generates the light source accordingly.
4. The pipeline defect detection robot of claim 1, wherein the image recognition system comprises a pipeline defect detection algorithm module of YOLO-TD;
the image recognition system filters out pseudo-abnormal frames, which are caused by factors other than pipeline defects, before the YOLO-TD pipeline defect detection algorithm module performs detection;
The pseudo-defect image filtering is to monitor the pose change of a camera of a shooting device of a mechanical system by utilizing an improved Lucas-Kanade algorithm, and remove video frames with the deviation amplitude being larger than a certain threshold value from a prediction sequence;
the feature points of the current video frame are detected, the detected feature points are tracked using the optical flow principle, and the camera's pose change is monitored from the feature points' moving distance and direction.
5. The robot for detecting pipeline defects according to claim 4, wherein the feature point detection is to select a plurality of points on an image as angular points for the image in the pipeline transmitted back in real time by the image acquisition module, select the angular points as feature points to be tracked, place a small window around the assumed angular points, and observe the average change of intensity values in a certain direction in the window.
6. The robot for detecting a pipe defect according to claim 5, wherein the corner points are pixel points having relatively high average intensity variation values in a plurality of directions;
assuming a displacement vector (μ, γ), the average intensity change is:
E(μ, γ) = Σ_(x,y) W(x, y)·[I(x + μ, y + γ) − I(x, y)]²  (1)
where I(x, y) is the pixel grayscale of the in-pipeline image;
W(x, y) is the window weight at the corresponding pixel coordinates determined by the in-pipeline image;
Taylor expansion of formula (1) gives formula (2), and rewriting formula (2) in matrix form gives formula (3):
E(μ, γ) ≈ Σ_(x,y) W(x, y)·[I_x·μ + I_y·γ]²  (2)
E(μ, γ) ≈ [μ γ]·M·[μ γ]^T  (3)
the middle matrix is denoted M; its two eigenvalues represent the maximum average intensity change and the average intensity change in the perpendicular direction;
the corner response function S is defined as follows:
S = λ1·λ2 − k(λ1 + λ2)² = det(M) − k·trace²(M)
where λ1, λ2 are the two eigenvalues of the matrix M;
det(·) is the matrix determinant;
trace(·) is the trace of the matrix;
a window center whose response function exceeds a threshold δ is regarded as a corner; non-maximum suppression is added to the algorithm, filtering corners adjacent to an extremum so that the detected corners are distributed more evenly across the view.
7. The pipeline defect detection robot according to claim 5, wherein the feature point tracking is inter-frame feature point tracking: exploiting the continuity of video frames, the detected corners are tracked with the Lucas-Kanade algorithm, the new positions of the current frame's corners are located in subsequent frames, and the camera's deflection is judged from the magnitude of the position change; assuming the intensity of the same feature point is unchanged between adjacent frames, the following displacement (μ, γ) is sought:
I_t(x, y) = I_(t+1)(x + μ, y + γ)
where I_t and I_(t+1) denote the current frame and the next frame of the in-pipeline image acquired by the mechanical device, respectively;
the constant-intensity assumption holds for small displacements between adjacent images; Taylor-expanding the expression and removing the two terms representing the intensity values gives:
I_x·μ + I_y·γ + ΔI = 0, with ΔI = I_(t+1)(x, y) − I_t(x, y)
8. The robot for detecting pipeline defects according to claim 7, wherein the pyramid L-K algorithm is a modified Lucas-Kanade algorithm, wherein the optical flow is calculated at the topmost layer of the image pyramid, the estimated motion result of the previous layer is used as the initial value of the pyramid of the next layer, the optical flow vector is calculated at the next layer on the basis of the previous layer, such estimation is repeated until the bottommost layer of the pyramid, and the bottommost layer optical flow vector is used as the final result;
the optical flow computation adopts a bidirectional optical flow method;
the bidirectional optical flow method detects, with the L-K algorithm, the feature point S_(t+1) in the next frame I_(t+1) corresponding to the current frame's feature point S_t (the corner above); the computation is then reversed, taking I_(t+1) as the current frame and computing the feature point S_tr in I_t corresponding to S_(t+1); tracking is judged successful if the deviation between S_t and S_tr is below a threshold; practical verification shows the bidirectional optical flow method effectively improves feature point tracking in this scene; after the feature point positions of adjacent frames are obtained, a frame is judged by:
(1/n)·Σ_(i=1..n) |S_(t+1),i − S_t,i| > δ
where n is the number of feature points in the current frame, |S_(t+1),i − S_t,i| is the sum of the x and y coordinate differences of feature point i between the two frames, and δ is a set threshold; a frame satisfying this inequality is deemed a pseudo-abnormal frame and is removed from the defect detection sequence.
9. A pipe inspection method of a pipe defect inspection robot according to any one of claims 1 to 8, comprising the steps of;
Step 1: a camera in the mechanical system's shooting device collects information, and the electronic control section transmits the real-time picture to the PC; after confirming everything works, the vehicle is driven forward to perform defect detection;
Step 2: pseudo-defect image filtering: after the camera completes image acquisition and the image is passed to the image acquisition module, a small window is placed around each hypothesized corner and the average intensity change in a given direction within the window is observed; assuming a displacement vector (μ, γ), the average intensity change is:
E(μ, γ) = Σ_(x,y) W(x, y)·[I(x + μ, y + γ) − I(x, y)]²  (1)
where I(x, y) is the pixel grayscale of the image;
W(x, y) is the neighborhood window of the interest point;
Taylor expansion of formula (1) gives formula (2), and rewriting formula (2) in matrix form gives formula (3):
E(μ, γ) ≈ Σ_(x,y) W(x, y)·[I_x·μ + I_y·γ]²  (2)
E(μ, γ) ≈ [μ γ]·M·[μ γ]^T  (3)
the middle matrix is denoted M; its two eigenvalues represent the maximum average intensity change and the average intensity change in the perpendicular direction; if both are large, the position is a corner; the corner response function S is defined as equation (4):
S = λ1·λ2 − k(λ1 + λ2)² = det(M) − k·trace²(M)  (4)
where λ1, λ2 are the two eigenvalues of the matrix M;
det(·) is the matrix determinant;
trace(·) is the trace of the matrix;
a window center whose response function exceeds a threshold δ is regarded as a corner; since the overall grayscale of a pipeline scene tends to be smooth, δ is set to 0.1 and the window size WinSize to 15; non-maximum suppression is also added to the algorithm, filtering corners adjacent to an extremum so that the detected corners are distributed more evenly across the view;
Step 3: exploiting the continuity of video frames, the detected corners are tracked with the Lucas-Kanade algorithm, the new positions of the current frame's corners are located in subsequent frames, and the camera's deflection is judged from the magnitude of the position change; assuming the intensity of the same feature point is unchanged between adjacent frames, the process searches for the displacement (μ, γ) such that:
I_t(x, y) = I_(t+1)(x + μ, y + γ)  (1)
where I_t and I_(t+1) denote the current frame and the next frame, respectively; the constant-intensity assumption holds for small displacements between adjacent images;
Taylor expansion of formula (1) gives formula (2), and removing the two terms representing the intensity values gives formula (3):
I_t(x, y) ≈ I_(t+1)(x, y) + I_x·μ + I_y·γ  (2)
I_x·μ + I_y·γ + ΔI = 0, with ΔI = I_(t+1)(x, y) − I_t(x, y)  (3)
the optical flow is first computed at the topmost layer of the image pyramid; the motion estimated at each layer is used as the initial value for the layer below, where the optical flow vector is computed on that basis; this estimation is repeated down to the bottom layer of the pyramid, whose optical flow vector is taken as the final result;
in the current frame I_t, the L-K algorithm detects the feature point S_(t+1) in the next frame I_(t+1) corresponding to the current frame's feature point S_t; the computation is then reversed, taking I_(t+1) as the current frame and I_t as the next frame, and computing the feature point S_tr in I_t corresponding to S_(t+1); tracking is judged successful if the deviation between S_t and S_tr is below a threshold;
The pseudo-abnormal-frame criterion is:
(1/n)·Σ_(i=1..n) |S_(t+1),i − S_t,i| > δ  (5)
where n is the number of feature points in the current frame, |S_(t+1),i − S_t,i| is the sum of the x and y coordinate differences of feature point i between the two frames, and δ is a set threshold; a frame satisfying this inequality is deemed a pseudo-abnormal frame and is removed from the defect detection sequence;
Step 4: after pseudo-defect image filtering, the photos extracted by the mechanical section are detected with the YOLO-TD-based pipeline defect detection algorithm; the image to be detected is first input into the Swin Transformer for feature extraction; the Swin Transformer restricts self-attention computation to non-overlapping local windows using a moving window while allowing cross-window connections;
Step 5: after extraction, the image features are input into the DSPP, where, as in SPP, max-pooling at different scales is performed through four branches of 1×1, 5×5, 9×9, and 13×13; the max-pooled information of each branch is concatenated with the feature information of the other branches as the input of the next pooling operation;
When the input characteristic information is X, the output information Y of the DSPP module is shown in formula (6):
Y=X+X2+X3+X4 (6)
where X2, X3, X4 are given by formulas (7) to (9):
X2=P5(X) (7)
X3=P9(X+X2) (8)
X4=P13(X+X2+X3) (9)
where P_a denotes an a×a max-pooling operation and '+' denotes the concat operation;
Step 6: the neck network strengthens the robustness of the features, and finally pipeline defect detection is performed by detection heads composed of convolution layers.
CN202410109184.9A 2024-01-25 2024-01-25 Pipeline defect detection robot and detection method Pending CN117969554A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410109184.9A CN117969554A (en) 2024-01-25 2024-01-25 Pipeline defect detection robot and detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410109184.9A CN117969554A (en) 2024-01-25 2024-01-25 Pipeline defect detection robot and detection method

Publications (1)

Publication Number Publication Date
CN117969554A 2024-05-03

Family

ID=90856689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410109184.9A Pending CN117969554A (en) 2024-01-25 2024-01-25 Pipeline defect detection robot and detection method

Country Status (1)

Country Link
CN (1) CN117969554A (en)


Legal Events

Date Code Title Description
PB01 Publication