WO2022116433A1 - Pipeline damage detection method, apparatus, device and storage medium - Google Patents

Pipeline damage detection method, apparatus, device and storage medium

Info

Publication number
WO2022116433A1
WO2022116433A1 · PCT/CN2021/083573 · CN2021083573W
Authority
WO
WIPO (PCT)
Prior art keywords
pipeline
detection
frame
preset
damage
Prior art date
Application number
PCT/CN2021/083573
Other languages
English (en)
French (fr)
Inventor
刘杰
王健宗
瞿晓阳
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2022116433A1 publication Critical patent/WO2022116433A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Definitions

  • the present application relates to the field of artificial intelligence, and in particular, to a pipeline damage detection method, device, equipment and storage medium.
  • the target detection method in computer vision technology is used to identify the target object, which can effectively extract the key target in the picture or video, so as to achieve the effect of identification.
  • the inventor realizes that traditional pipeline inspection mainly relies on manual experience, and damage identification using manual experience is very error-prone and the inspection efficiency is very low, which cannot meet the effective and timely maintenance requirements of a large number of sewer pipes in cities.
  • the main purpose of this application is to solve the technical problem of low efficiency of pipeline damage detection at present.
  • a first aspect of the present application provides a pipeline damage detection method, comprising: acquiring a pipeline inspection video to be detected; inputting the pipeline inspection video into a preset pipeline damage detection model for frame-by-frame detection, and outputting a detection result; if the detection result is that pipeline damage exists in the current video frame, calling a preset OpenCV interface to visualize the five-dimensional vector in the detection result as a detection frame; and combining the detection frame with the corresponding video frame in the pipeline inspection video to obtain a pipeline inspection annotation video marked with the pipeline damage location and damage type.
  • a second aspect of the present application provides a pipeline damage detection device, comprising a memory, a processor, and computer-readable instructions stored on the memory and executable on the processor, wherein when the processor executes the computer-readable instructions, the following steps are implemented: obtaining the pipeline inspection video to be detected; inputting the pipeline inspection video into the preset pipeline damage detection model for frame-by-frame detection, and outputting the detection result; if the detection result is that pipeline damage exists in the current video frame, calling the preset OpenCV interface to visualize the five-dimensional vector in the detection result as a detection frame; and combining the detection frame with the corresponding video frame in the pipeline inspection video to obtain a pipeline inspection annotation video marked with the pipeline damage location and damage type.
  • a third aspect of the present application provides a computer-readable storage medium, where computer instructions are stored in the computer-readable storage medium, and when the computer instructions are executed on a computer, the computer performs the following steps: obtaining a pipeline inspection video to be detected; inputting the pipeline inspection video into the preset pipeline damage detection model for frame-by-frame detection, and outputting the detection result; if the detection result is that pipeline damage exists in the current video frame, calling the preset OpenCV interface to visualize the five-dimensional vector in the detection result as a detection frame; and combining the detection frame with the corresponding video frame in the pipeline inspection video to obtain a pipeline inspection annotation video marked with the location and type of damage to the pipeline.
  • a fourth aspect of the present application provides a pipeline damage detection apparatus, comprising: an acquisition module for acquiring a pipeline inspection video to be detected; a detection module for inputting the pipeline inspection video into a preset pipeline damage detection model for frame-by-frame detection and outputting the detection result; a visualization module for calling the preset OpenCV interface, if the detection result is that pipeline damage exists in the current video frame, and visualizing the five-dimensional vector in the detection result as a detection frame; and an output module configured to combine the detection frame with the corresponding video frame in the pipeline inspection video to obtain the pipeline inspection annotation video marked with the pipeline damage location and damage type.
  • a machine learning method is introduced to generate a model that can be used for automatic detection of pipeline images.
  • the pipeline video to be detected is input into the model for frame-by-frame detection.
  • the model can quickly detect damage information in each image and directly calibrate the damage location and type; the detection results are then visualized through the OpenCV interface, completing the detection. Users can quickly learn whether damage exists, the type of damage, and its specific location simply by watching the calibrated video.
  • the present application builds a pipeline damage detection model for pipeline damage detection, the model has better applicability to pipeline damage detection tasks, and can greatly improve the efficiency of pipeline damage detection.
  • FIG. 1 is a schematic diagram of a first embodiment of a pipeline damage detection method in an embodiment of the present application
  • FIG. 2 is a schematic diagram of a second embodiment of the pipeline damage detection method in the embodiment of the present application.
  • FIG. 3 is a schematic diagram of a third embodiment of the pipeline damage detection method in the embodiment of the present application.
  • FIG. 4 is a schematic diagram of the fourth embodiment of the pipeline damage detection method in the embodiment of the present application.
  • FIG. 5 is a schematic diagram of an embodiment of a pipeline damage detection device in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an embodiment of a pipeline damage detection device, from the hardware perspective, in an embodiment of the present application.
  • Embodiments of the invention provide a pipeline damage detection method, apparatus, device, and storage medium.
  • the terms “first”, “second”, “third”, “fourth”, etc. (if any) in the description and claims of this application and the above-mentioned drawings are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It is to be understood that data so used may be interchanged under appropriate circumstances so that the embodiments described herein can be practiced in sequences other than those illustrated or described herein.
  • the first embodiment of the pipeline damage detection method in the embodiment of the present application includes:
  • the execution body of the present application may be a pipeline damage detection device, and may also be a terminal or a server, which is not specifically limited here.
  • the embodiments of the present application take the server as an execution subject as an example for description.
  • the pipeline inspection video captured by a camera or other equipment is used as the pipeline inspection video to be detected.
  • the pipeline damage detection model is composed of N (N>1) target detection network structures, for example the YoloV5 network structure. YoloV5 consists of a CSP network, a Neck network, and a damage-information analysis layer, and the model is built with the mainstream deep-learning framework PyTorch.
  • the detection result includes whether there is damage in the current frame, and when there is damage, a five-dimensional vector is generated from the position information and damage type information corresponding to the damage point and output as the detection result.
  • the pipeline inspection video is input into a preset pipeline damage detection model for frame-by-frame detection, and a detection result is obtained.
  • the pipeline inspection video is input into a preset pipeline damage detection model to perform frame-by-frame detection, and the output detection result includes:
  • the CSP network divides the original input into two branches and performs a convolution operation on each to halve the number of channels; branch 1 then goes through the Bottleneck x N operation, after which branch 1 and branch 2 are concatenated by tensor splicing, so that the input and output of the CSP network have the same size. The CSP network allows the model to extract more features.
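The split-and-merge channel arithmetic described above can be sketched without a deep-learning framework. In the following numpy sketch, `half_channel_conv` and `bottleneck` are illustrative stand-ins (not the patent's actual layers); the point is only to show why the CSP block's output has the same size as its input:

```python
import numpy as np

def half_channel_conv(x):
    # Stand-in for a convolution that halves the channel count;
    # here we simply keep the first half of the channels.
    c = x.shape[0]
    return x[: c // 2]

def bottleneck(x):
    # Stand-in for the Bottleneck x N residual blocks applied to branch 1;
    # a real implementation stacks conv layers with skip connections.
    return x + 0.0

def csp_block(x):
    # Split the input into two branches, process branch 1, then
    # concatenate along the channel axis: output size == input size.
    b1 = bottleneck(half_channel_conv(x))
    b2 = half_channel_conv(x)
    return np.concatenate([b1, b2], axis=0)

x = np.zeros((64, 32, 32))   # (channels, height, width)
y = csp_block(x)
print(y.shape)               # same shape as the input: (64, 32, 32)
```

Because each branch carries half the channels, splicing them back together restores the original channel count, which is what lets CSP blocks be stacked freely inside the backbone.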
  • the main function of the Neck network is to perform feature fusion on the feature information extracted by the CSP network: ordinary convolution operations transfer the high-level feature information, which is fused by upsampling to obtain the feature map used for prediction, strengthening the network's feature-fusion ability.
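The upsample-and-fuse step can be illustrated with a minimal numpy sketch (the array shapes and the additive fusion are hypothetical simplifications; the real Neck fuses multi-channel feature maps produced by convolutions):

```python
import numpy as np

def upsample2x(f):
    # Nearest-neighbour 2x upsampling, as used when passing coarse
    # high-level features down the Neck for fusion.
    return f.repeat(2, axis=0).repeat(2, axis=1)

high_level = np.ones((4, 4))       # coarse, semantically strong features
low_level = np.full((8, 8), 0.5)   # finer features from an earlier layer

fused = upsample2x(high_level) + low_level   # simple additive fusion
print(fused.shape)                           # (8, 8)
```

After upsampling, the coarse map matches the finer map's resolution, so the two can be combined element-wise into a feature map that carries both semantic and spatial detail.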
  • the category information and position information are analyzed on the feature map, and the detection result is output.
  • the pipeline inspection video is input into the CSP network in the preset pipeline damage detection model to perform feature extraction frame by frame to obtain feature information
  • the feature information is input into the Neck network in the preset pipeline damage detection model for feature fusion to obtain a feature map; the category information and position information are analyzed on the feature map, and the detection result is output.
  • if the detection result indicates damage, the coordinates of the damage point and the damage type information are combined into a five-dimensional vector and output.
  • the five-dimensional vector in the detection result is (c, x, y, w, h), where c is the detection frame type, x is the abscissa, y is the ordinate, w is the width, and h is the height; the damage location and damage type are marked in the picture according to the five-dimensional vector (c, x, y, w, h).
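Assuming the common YOLO convention that (x, y) is the box centre and the four geometric values are normalized to [0, 1] (the patent does not fix the convention), the vector can be converted to pixel corners ready for OpenCV drawing:

```python
def box_to_corners(vec, img_w, img_h):
    """Convert a (c, x, y, w, h) detection vector to pixel corners.
    Assumes x, y are the box centre and all four geometric values are
    normalised to [0, 1] (an assumption; the patent does not spell out
    the coordinate convention)."""
    c, x, y, w, h = vec
    x1 = int(round((x - w / 2) * img_w))
    y1 = int(round((y - h / 2) * img_h))
    x2 = int(round((x + w / 2) * img_w))
    y2 = int(round((y + h / 2) * img_h))
    return c, (x1, y1), (x2, y2)

cls, p1, p2 = box_to_corners(("crack", 0.5, 0.5, 0.2, 0.1), 640, 480)
print(cls, p1, p2)   # crack (256, 216) (384, 264)

# With OpenCV the video frame would then be annotated roughly as:
#   cv2.rectangle(frame, p1, p2, (0, 0, 255), 2)
#   cv2.putText(frame, cls, (p1[0], p1[1] - 5),
#               cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 1)
```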
  • OpenCV is a cross-platform computer vision and machine learning software library released under the BSD license (open source) and can run on Linux, Windows, Android and Mac OS operating systems. OpenCV is lightweight and efficient. It provides interfaces in languages such as Python, Ruby, and MATLAB, and implements multiple general algorithms in image processing and computer vision.
  • the detection frame is combined with the corresponding video frame in the pipeline inspection video to obtain a pipeline inspection annotated video marked with the location and type of damage to the pipeline.
  • Combining the detection results with the original video can effectively enable users to classify and store a large amount of video data, and archive damaged pipeline information for easy search and comparison.
  • the pipeline damage picture and the pipeline damage information are stored in association with the current video playback time point, and a CSV format file containing the pipeline damage information is output.
  • a screenshot is taken of each video frame with damage information in the annotated video, the damage information in the screenshot is extracted, the screenshot is saved, and the damage information corresponding to the screenshot is saved by calling the Pandas tool to obtain a CSV file of the damage information.
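A minimal sketch of such a CSV export, using Python's standard `csv` module in place of the Pandas tool the text mentions (the record fields, times, and file names here are hypothetical):

```python
import csv

# Hypothetical records: (playback time, screenshot path, damage type, box).
damage_records = [
    ("00:01:23", "frames/frame_2075.png", "crack", (256, 216, 384, 264)),
    ("00:04:10", "frames/frame_6250.png", "deformation", (100, 80, 220, 190)),
]

# Write one row per damaged frame, keyed by playback time so each
# screenshot stays associated with its moment in the inspection video.
with open("pipeline_damage.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time", "screenshot", "damage_type", "x1", "y1", "x2", "y2"])
    for t, path, kind, (x1, y1, x2, y2) in damage_records:
        writer.writerow([t, path, kind, x1, y1, x2, y2])
```

With Pandas the same table would be a `DataFrame` written via `to_csv`; the stdlib version is shown only to keep the sketch dependency-free.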
  • a machine learning method is introduced to generate a model that can be used for automatic detection of pipeline images.
  • the video is input into the model for frame-by-frame detection; the model can quickly detect damage information in each image, directly calibrate the damage location and type, visualize the detection results through the OpenCV interface, and save the detected video. Users can quickly learn whether damage exists, the type of damage, and its exact location simply by watching the calibrated video.
  • the present application builds a pipeline damage detection model for the specific task of pipeline damage detection. The model has better applicability to the pipeline damage detection task and can greatly improve the efficiency of pipeline damage detection.
  • the second embodiment of the pipeline damage detection method in the embodiment of the present application includes:
  • the preset labelme is called to check the pipeline inspection video frame by frame.
  • the coordinates of the damage point in the image are first extracted.
  • extracting the damage point in the image means converting the original image into a binary map, then finding the coordinates of the connected domain at the damage, and saving those coordinates as a mat file.
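The connected-domain step can be illustrated with a small pure-Python BFS labelling routine (a sketch only; a real pipeline would more likely apply OpenCV's connected-components functions to the binary map):

```python
from collections import deque

def connected_components(binary):
    """Label 4-connected foreground regions of a binary image
    (a list of rows of 0/1) and return each region's pixel coordinates."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for r in range(h):
        for c in range(w):
            if binary[r][c] and not seen[r][c]:
                queue, pixels = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                regions.append(pixels)
    return regions

img = [
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
]
regions = connected_components(img)
print(len(regions))   # two separate "damage" regions
```

Each returned region is the set of coordinates that would be saved out (here, to a mat file) as one damage area.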
  • the second step is to call the preset img2json.py encoder to encode the current damaged video frame and save it as a json file.
  • the mat file and the json file are fused to generate an image marked with damage information.
  • inputting the positive sample image and the negative sample image into a preset target detection network for feature extraction, and obtaining a sample feature map includes:
  • the sample feature information is input into a preset Neck network for feature fusion to obtain a sample feature map.
  • CutMix and Mosaic techniques are also used for data augmentation to obtain enhanced sample images.
  • the target detection network needs to adjust the size of the original image for feature recognition; images in the model are scaled to 512×512.
  • the CSP network solves the problem of repeated gradient information that hampers network optimization in the Backbone of other large-scale convolutional neural network frameworks. It integrates the gradient changes into the feature map from beginning to end and separates the feature map of the base layer, effectively alleviating the vanishing-gradient problem, supporting feature propagation, and encouraging the network to reuse features, thereby reducing the number of network parameters.
  • the Neck network is used to generate feature pyramids.
  • the feature pyramid enhances the model's detection of objects at different scales, enabling it to identify the same object at different sizes and scales; the features extracted by the CSP network are thus fused to obtain feature maps.
  • the detection speed and detection accuracy of the target detection network are thus well balanced, and the obtained sample feature map is more accurate.
  • according to the sample feature map, a preset AutoFusion algorithm is called to perform an evaluation-index search over the connection part of the feature extraction layers of the target detection network, and the target detection network corresponding to the combination with the highest evaluation index is taken as the optimal target detection network;
  • the AutoFusion algorithm performs a spatial search on the connection part of the feature extraction layer of the target detection network structure in three steps.
  • the first is to perform the Unary ops operation to obtain op1 and op2, where op1 and op2 are unary operation values
  • the positive sample image and the negative sample image are input into a preset target detection network for feature extraction to obtain a sample feature map; according to the sample feature map, a preset AutoFusion algorithm is invoked to perform an evaluation-index search over the connection part of the feature extraction layers of the target detection network, and the target detection network corresponding to the combination with the highest evaluation index is used as the optimal target detection network; a preset Stacking integration algorithm is then called to integrate the optimal target detection network, yielding the pipeline damage detection model.
  • the traditional machine learning method includes three parts: feature extraction, model design, and parameter tuning, whereas the automatic machine learning AutoFusion algorithm allows the entire machine learning process to be completed automatically; the output can be obtained simply by inputting data.
  • the neural network structure search technology refers to performing a spatial search on the connections of the feature extraction layers of the target detection network through the AutoFusion algorithm, searching for the optimal combination of evaluation indicators, and taking the target detection network with the highest evaluation index as the optimal target detection network.
  • the AutoFusion algorithm searches for local maxima, suppresses non-maximum elements, and finds bounding boxes with relatively high confidence from the score matrix and the coordinate information of the region.
  • the confidences of all detection frames are sorted in descending order. The detection frame with the highest confidence is selected and checked; if it is confirmed to be a correct detection frame, the IOU value between it and every other detection frame is calculated, and any detection frame whose IOU value exceeds the threshold is removed. After the heavily overlapping detection frames are removed, the remaining detection frames are again sorted by confidence, and the process repeats until the redundant detection frames are eliminated and the best detection frame, i.e. the damage detection location, is found.
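The procedure described above is classic non-maximum suppression (NMS); a minimal pure-Python sketch (the boxes, scores, and threshold are illustrative values):

```python
def iou(a, b):
    """Intersection-over-union of boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Keep the highest-confidence box, drop boxes that overlap it
    above the IoU threshold, and repeat on the remainder."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))   # [0, 2]: the near-duplicate of box 0 is suppressed
```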
  • the Stacking integration algorithm described in this embodiment trains a multi-layer learner structure.
  • the first layer uses N YoloV5 models to obtain the first-layer prediction results; these are combined into a new feature input image and fed into the learned YoloV5 model, and the output of this second prediction process gives the final prediction result of the pipeline damage model.
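The two-layer idea can be sketched with toy scalar "models" standing in for the YoloV5 networks (everything here is illustrative: the real first layer outputs detection feature maps, not scalars, and the real meta learner is another YoloV5 model rather than an average):

```python
# Toy first-layer "models": each maps an input to a damage score.
def model_a(x): return 0.8 * x
def model_b(x): return 0.6 * x + 0.1
def model_c(x): return 0.9 * x - 0.05

first_layer = [model_a, model_b, model_c]

def meta_model(features):
    # Stand-in for the second-stage learner: here it simply averages
    # the first-layer predictions.
    return sum(features) / len(features)

def stacked_predict(x):
    # First layer: run every base model; their outputs become the new
    # feature vector fed to the meta learner (the "second prediction").
    features = [m(x) for m in first_layer]
    return meta_model(features)

print(round(stacked_predict(1.0), 4))   # 0.7833
```

The value of the scheme is that the meta learner can weigh the strengths and weaknesses of the N base models instead of trusting any single one.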
  • the AutoFusion algorithm is used to optimize the target network structure to obtain the optimal target network structure
  • the stacking integration algorithm is used to integrate the optimal target network structure to obtain the final pipeline damage detection model.
  • the network structure is optimized by AutoFusion optimization algorithm, so that the obtained pipeline damage detection model is more suitable for the specific task of pipeline damage detection.
  • the third embodiment of the pipeline damage detection method in the embodiment of the present application includes:
  • there are three steps when using the AutoFusion algorithm to optimize the target detection network structure: the first is to perform the Unary ops operation to obtain op1 and op2, where op1 and op2 are unary operation values.
  • by selecting the combination with the highest evaluation index from the search space, a neural architecture with good performance is generated, and the target detection network corresponding to the highest evaluation index among the evaluation index combinations is used as the optimal target detection network.
  • the detection speed of the network is increased, and the detection of damage is more accurate.
  • the AutoFusion algorithm is used to optimize the target detection network structure; it can optimize by itself without external assistance, and a near-optimal network structure and model for pipeline damage detection can still be obtained.
  • the fourth embodiment of the pipeline damage detection method in the embodiment of the present application includes:
  • according to the sample feature map, a preset AutoFusion algorithm is called to perform an evaluation-index search over the connection part of the feature extraction layers of the target detection network, and the target detection network corresponding to the combination with the highest evaluation index is taken as the optimal target detection network;
  • the Stacking method is used to train a meta-model, which generates the final output according to the output results returned by the weak learners of the lower layer.
  • the first layer uses N YoloV5 models to obtain the first-layer prediction results, the first-layer prediction results are combined into a new feature input image fed to the learned YoloV5 model, and the output of the second prediction process is used as the final prediction result.
  • the N YoloV5 models are integrated into a pipeline detection model, which can integrate the advantages of multiple network structures, so that the detection speed of the integrated pipeline damage model is faster and the accuracy rate is higher.
  • cross-validation includes two processes: one is training the model based on the feature map; the other is predicting on the feature map using the model generated by that training. After cross-validation is completed, the predicted value of the current feature map is obtained; the above two steps are performed twice, and finally the detection result is generated.
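The train-then-predict pattern behind this cross-validation step can be sketched as out-of-fold prediction (the fold assignment and the toy mean-predictor are illustrative assumptions, not the patent's actual procedure):

```python
def kfold_oof(xs, ys, k, fit, predict):
    """Out-of-fold predictions: for each fold, train on the other folds
    and predict the held-out fold, so every sample gets a prediction
    from a model that never saw it during training."""
    n = len(xs)
    preds = [None] * n
    for fold in range(k):
        test_idx = [i for i in range(n) if i % k == fold]
        train_idx = [i for i in range(n) if i % k != fold]
        model = fit([xs[i] for i in train_idx], [ys[i] for i in train_idx])
        for i in test_idx:
            preds[i] = predict(model, xs[i])
    return preds

# Toy "model": predicts the mean label of its training set.
fit = lambda xs, ys: sum(ys) / len(ys)
predict = lambda model, x: model

preds = kfold_oof([1, 2, 3, 4], [10, 20, 30, 40], 2, fit, predict)
print(preds)   # [30.0, 20.0, 30.0, 20.0]
```

Out-of-fold predictions are what a stacking meta learner can safely train on, since none of them leak the training labels of the model that produced them.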
  • the parameters of the optimal target detection network are adjusted by binary cross entropy until the optimal target detection network converges, and a pipeline damage detection model is obtained.
  • the Stacking integration method combines them by training a meta-model and outputs a final prediction result according to the prediction results of the different weak models, so that the framework can integrate the advantages of multiple frameworks, improving performance on the pipeline damage detection task.
  • the pipeline damage detection model obtained by Stacking integration can identify the location information and category information of damage more accurately.
  • an embodiment of the pipeline damage detection device in the embodiment of the application includes:
  • An acquisition module 501 configured to acquire a pipeline inspection video to be detected
  • the detection module 502 is configured to input the pipeline inspection video into a preset pipeline damage detection model to perform frame-by-frame detection, and output the detection result;
  • the visualization module 503 is configured to call the preset OpenCV interface if the detection result is that there is pipeline damage in the current video frame, and visualize the five-dimensional vector in the detection result as a detection frame;
  • the output module 504 is configured to combine the detection frame with the corresponding video frame in the pipeline inspection video to obtain a pipeline inspection annotated video marked with a pipeline damage location and a damage type.
  • the pipeline damage detection device further includes:
  • a labeling unit configured to obtain a plurality of pipeline inspection video samples, and perform damage information labeling on the pipeline inspection video samples frame by frame, so as to obtain a damaged positive sample image and a non-damaged negative sample image;
  • a feature extraction unit configured to input the positive sample image and the negative sample image into a preset target detection network for feature extraction to obtain a sample feature map
  • the network optimization unit is used to call the preset AutoFusion algorithm, according to the sample feature map, to perform an evaluation-index search over the connection part of the feature extraction layers of the target detection network, and to take the target detection network corresponding to the combination with the highest evaluation index as the optimal target detection network;
  • the model integration unit is used for invoking a preset Stacking integration algorithm to integrate the optimal target detection network to obtain a pipeline damage detection model.
  • the feature extraction unit is specifically configured to:
  • the sample feature information is input into a preset Neck network for feature fusion to obtain a sample feature map.
  • the network optimization unit is specifically configured to:
  • the target detection network corresponding to the combination with the highest evaluation index is taken as the optimal target detection network.
  • the model integration unit is specifically configured to:
  • the parameters of the optimal target detection network are adjusted until the optimal target detection network converges, and a pipeline damage detection model is obtained.
  • the detection module 502 is specifically configured to:
  • the category information and position information are analyzed on the feature map, and the detection result is output.
  • the pipeline damage detection device further includes:
  • the storage module is used to play the pipeline inspection annotation video and determine whether pipeline damage exists in the current video frame; if so, take a screenshot of the current video frame to obtain a pipeline damage picture and extract the pipeline damage information from the picture; then save the pipeline damage picture and the pipeline damage information in association with the current video playback time point, and output a CSV format file containing the pipeline damage information.
  • a machine learning method is introduced to generate a model that can be used for automatic detection of pipeline images.
  • the video is input into the model for frame-by-frame detection; the model can quickly detect damage information in each image, directly calibrate the damage location and type, visualize the detection results through the OpenCV interface, and save the detected video. Users can quickly learn whether damage exists, the type of damage, and its exact location simply by watching the calibrated video.
  • the present application builds a pipeline damage detection model for the specific task of pipeline damage detection. The model has better applicability to the pipeline damage detection task and can greatly improve the efficiency of pipeline damage detection.
  • FIG. 5 above describes the pipeline damage detection device in the embodiment of the present application in detail from the perspective of modular functional entities, and the following describes the pipeline damage detection device in the embodiment of the present application in detail from the perspective of hardware processing.
  • the pipeline damage detection device 600 may vary greatly due to different configurations or performance, and may include one or more central processing units (CPU) 610 (e.g., one or more processors), memory 620, and one or more storage media 630 (e.g., one or more mass storage devices) storing application programs 633 or data 632. The memory 620 and the storage medium 630 may be short-term storage or persistent storage.
  • the program stored in the storage medium 630 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the pipeline damage detection apparatus 600 .
  • the processor 610 may be configured to communicate with the storage medium 630 to execute a series of instruction operations in the storage medium 630 on the pipeline damage detection device 600 .
  • the pipeline damage detection device 600 may also include one or more power supplies 640, one or more wired or wireless network interfaces 650, one or more input/output interfaces 660, and/or one or more operating systems 631, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and so on.
  • the present application further provides a pipeline damage detection device, the pipeline damage detection device comprising a memory and a processor, wherein the memory stores computer-readable instructions which, when executed by the processor, cause the processor to execute the steps of the pipeline damage detection method in the above embodiments.
  • the present application also provides a pipeline damage detection device, comprising a memory and at least one processor, wherein instructions are stored in the memory, and the memory and the at least one processor are interconnected through a line; the at least one processor calls the instructions in the memory to cause the pipeline damage detection device to perform the steps of the above-mentioned pipeline damage detection method.
  • the present application also provides a computer-readable storage medium, and the computer-readable storage medium may be a non-volatile computer-readable storage medium or a volatile computer-readable storage medium.
  • the computer-readable storage medium stores computer instructions, and when the computer instructions are executed on the computer, the computer performs the following steps:
  • the preset OpenCV interface is called, and the five-dimensional vector in the detection result is visualized as a detection frame;
  • the detection frame is combined with the corresponding video frames in the pipeline inspection video to obtain a pipeline inspection annotation video marked with the location and type of damage to the pipeline.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or the whole or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present application.
  • The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

A pipeline damage detection method, apparatus, device, and storage medium. The method comprises: acquiring a pipeline inspection video to be detected (101); inputting the pipeline inspection video into a preset pipeline damage detection model for frame-by-frame detection, and outputting a detection result (102); if the detection result indicates pipeline damage in the current video frame, calling a preset OpenCV interface to visualize the five-dimensional vector in the detection result as a detection box (103); and combining the detection box with the corresponding video frame of the pipeline inspection video to obtain an annotated pipeline inspection video marked with the location and type of the pipeline damage (104). The method is specifically optimized for the task of pipeline damage detection, is better suited to that task, and greatly improves the efficiency of pipeline damage detection.

Description

管道损伤检测方法、装置、设备及存储介质
本申请要求于2020年12月02日提交中国专利局、申请号为202011385962.5、发明名称为“管道损伤检测方法、装置、设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在申请中。
技术领域
本申请涉及人工智能领域,尤其涉及一种管道损伤检测方法、装置、设备及存储介质。
背景技术
随着计算机技术的高速发展,计算机视觉已经成为了人工智能的重要领域,在人们生活的方方面面中起到越来越重要的作用。计算机视觉技术的应用十分广泛,采用计算机视觉技术中的目标检测方法对目标物体进行识别,可以将图片或者视频中重点的目标有效的提取出来,从而达到鉴别的效果。
发明人意识到,传统的管道检测主要依赖于人工经验,利用人工经验进行损伤鉴定十分容易出错而且检查的效率十分低,无法满足城市中大量下水管道的有效、及时的维护要求。
发明内容
本申请的主要目的在于解决目前管道损伤检测效率低的技术问题。
为实现上述目的,本申请第一方面提供了一种管道损伤检测方法,包括:获取待检测的管道巡检视频;将所述管道巡检视频输入预置管道损伤检测模型进行逐帧检测,输出检测结果;若所述检测结果为当前视频帧中存在管道损伤,则调用预置OpenCV接口,将所述检测结果中的五维向量可视化为检测框;将所述检测框与所述管道巡检视频中的对应视频帧相结合,得到标注有管道损伤位置和损伤种类的管道巡检标注视频。
本申请第二方面提供了一种管道损伤检测设备,包括存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机可读指令,所述处理器执行所述计算机可读指令时实现如下步骤:获取待检测的管道巡检视频;将所述管道巡检视频输入预置管道损伤检测模型进行逐帧检测,输出检测结果;若所述检测结果为当前视频帧中存在管道损伤,则调用预置OpenCV接口,将所述检测结果中的五维向量可视化为检测框;将所述检测框与所述管道巡检视频中的对应视频帧相结合,得到标注有管道损伤位置和损伤种类的管道巡检标注视频。
本申请第三方面提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机指令,当所述计算机指令在计算机上运行时,使得计算机执行如下步骤:获取待检测的管道巡检视频;将所述管道巡检视频输入预置管道损伤检测模型进行逐帧检测,输出检测结果;若所述检测结果为当前视频帧中存在管道损伤,则调用预置OpenCV接口,将所述检测结果中的五维向量可视化为检测框;将所述检测框与所述管道巡检视频中的对应视频帧相结合,得到标注有管道损伤位置和损伤种类的管道巡检标注视频。
本申请第四方面提供了一种管道损伤检测装置,包括:获取模块,用于获取待检测的管道巡检视频;检测模块,用于将所述管道巡检视频输入预置管道损伤检测模型进行逐帧检测,输出检测结果;可视化模块,用于若所述检测结果为当前视频帧中存在管道损伤,则调用预置OpenCV接口,将所述检测结果中的五维向量可视化为检测框;输出模块,用于将所述检测框与所述管道巡检视频中的对应视频帧相结合,得到标注有管道损伤位置和损伤种类的管道巡检标注视频。
本申请提供的技术方案中，鉴于现有依靠人工肉眼对管道进行检测不仅工作量大且容易误判或漏检，因此引入了机器学习方式生成了可用于对管道图像进行自动检测的模型，将待检测的管道视频输入该模型逐帧进行检测，所述模型可以实现快速对图像上的损伤信息进行检测，并且直接标定出损伤位置和类型，然后通过OpenCV接口将检测结果可视化，并将检测完毕的视频保存，用户只需观看标定好的视频即可快速获知是否有损伤、损伤类型以及损伤具体位置。本申请针对管道损伤检测构建了管道损伤检测模型，该模型具有对管道损伤检测任务更好的适用性，可以大大提高管道损伤检测的效率。
附图说明
图1为本申请实施例中管道损伤检测方法的第一个实施例示意图;
图2为本申请实施例中管道损伤检测方法的第二个实施例示意图;
图3为本申请实施例中管道损伤检测方法的第三个实施例示意图;
图4为本申请实施例中管道损伤检测方法的第四个实施例示意图;
图5为本申请实施例中管道损伤检测装置的一个实施例示意图;
图6为本申请实施例中管道损伤检测设备的一个实施例示意图。
具体实施方式
本申请实施例提供了一种管道损伤检测方法、装置、设备及存储介质。本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”、“第三”、“第四”等（如果存在）是用于区别类似的对象，而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换，以便这里描述的实施例能够以除了在这里图示或描述的内容以外的顺序实施。此外，术语“包括”或“具有”及其任何变形，意图在于覆盖不排他的包含，例如，包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元，而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
为便于理解,下面对本申请实施例的具体流程进行描述,请参阅图1,本申请实施例中管道损伤检测方法的第一个实施例包括:
101、获取待检测的管道巡检视频;
可以理解的是,本申请的执行主体可以为管道损伤检测装置,还可以是终端或者服务器,具体此处不做限定。本申请实施例以服务器为执行主体为例进行说明。
本实施例中,通过摄像机或其他设备拍摄的管道巡检视频,将拍摄得到的视频作为待检测的管道巡检视频。
102、将所述管道巡检视频输入预置管道损伤检测模型进行逐帧检测,输出检测结果;
本实施例中，所述管道损伤检测模型由N个（N>1）目标检测网络结构集成得到，例如YoloV5网络结构；YoloV5由CSP网络、Neck网络和损伤信息分析层构成，模型由主流深度学习框架PyTorch构建。
本实施例中，将所述管道巡检视频输入预置管道损伤检测模型逐帧进行检测，得到检测结果。所述检测结果包括当前帧中有无损伤；有损伤时，将损伤点对应的位置信息和损伤种类信息生成一个五维向量并作为检测结果输出。
可选的,在一实施例中,所述将所述管道巡检视频输入预置管道损伤检测模型进行逐帧检测,输出检测结果包括:
将所述管道巡检视频输入预置管道损伤检测模型中的CSP网络逐帧进行特征提取,得到特征信息;
本实施例中，所述CSP网络将原输入分成两个分支，分别进行卷积操作使得通道数减半；分支一再进行Bottleneck×N操作，随后将分支一和分支二进行张量拼接，从而使所述CSP网络的输入与输出大小一致，让模型提取到更多的特征。
将所述特征信息输入预置管道损伤检测模型中的Neck网络进行特征融合，得到特征图；
本实施例中,所述Neck网络的主要作用是对CSP网络提取得到的特征信息进行特征融合,采用普通卷积操作,将高层的特征信息通过上采样的方式进行传递融合,得到进行预测的特征图,加强了网络特征融合的能力。
对所述特征图进行类别信息和位置信息分析,输出检测结果。
本实施例中,将所述管道巡检视频输入预置管道损伤检测模型中的CSP网络逐帧进行特征提取,得到特征信息,将所述特征信息输入预置管道损伤检测模型中的Neck网络进行特征融合,得到特征图,对所述特征图进行类别信息和位置信息分析,输出检测结果。本实施例中,若检测的视频帧中有损伤信息,则检测结果为有损伤,并将损伤点的坐标和损伤种类信息生成一个五维向量并输出。
103、若所述检测结果为当前视频帧中存在管道损伤,则调用预置OpenCV接口,将所述检测结果中的五维向量可视化为检测框;
本实施例中,检测结果中的五维向量为(c,x,y,w,h),其中,c为检测框类别,x为横坐标,y为纵坐标,w为宽,h为高,根据所述五维向量(c,x,y,w,h)标注出损伤在图片中位置以及损伤种类。
本实施例中,若所述检测结果为当前视频帧中存在管道损伤,则调用预置OpenCV接口,将所述检测结果中的五维向量可视化为检测框。OpenCV是一个基于BSD许可(开源)发行的跨平台计算机视觉和机器学习软件库,可以运行在Linux、Windows、Android和Mac OS操作系统上。OpenCV为轻量级而且高效,提供了Python、Ruby、MATLAB等语言的接口,实现了图像处理和计算机视觉方面的多个通用算法。
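The mapping from the five-dimensional vector to an on-screen rectangle can be sketched in plain Python. This is an illustrative sketch, not code from the application: it assumes (x, y) is the box centre (the text does not say whether (x, y) is the centre or a corner), and the commented OpenCV calls show how the resulting corners would typically be drawn.

```python
# Hedged sketch: turn the model's five-dimensional vector (c, x, y, w, h)
# into corner coordinates that a drawing call such as cv2.rectangle expects.
# Assumption (not stated in the text): (x, y) is the box centre.

def vector_to_box(vec):
    """Map (c, x, y, w, h) to (class_id, (x1, y1, x2, y2))."""
    c, x, y, w, h = vec
    x1, y1 = int(x - w / 2), int(y - h / 2)
    x2, y2 = int(x + w / 2), int(y + h / 2)
    return c, (x1, y1, x2, y2)

# With OpenCV installed, the box could then be drawn on the frame, e.g.:
#   import cv2
#   cls, (x1, y1, x2, y2) = vector_to_box((1, 100, 80, 40, 20))
#   cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
#   cv2.putText(frame, str(cls), (x1, y1 - 4),
#               cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)

print(vector_to_box((1, 100, 80, 40, 20)))  # -> (1, (80, 70, 120, 90))
```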
104、将所述检测框与所述管道巡检视频中的对应视频帧相结合,得到标注有管道损伤位置和损伤种类的管道巡检标注视频。
本实施例中，将所述检测框与所述管道巡检视频中的对应视频帧相结合，通过OpenCV接口生成一个新视频，得到包含标注信息、标注有管道损伤位置和损伤种类的管道巡检标注视频。将检测结果和原视频结合，可以有效地帮助用户在大量视频数据中进行归类和储存，对有损伤的管道信息进行归档，方便查找和比对。
可选的,在一实施例中,在所述将所述检测框与所述管道巡检视频中的对应视频帧相结合,得到标注有管道损伤位置和损伤种类的管道巡检标注视频之后,还包括:
播放所述管道巡检标注视频,并判断当前视频帧中是否存在管道损伤;
若是,则对当前视频帧进行截图,得到管道损伤图片,并提取所述管道损伤图片中的管道损伤信息;
将所述管道损伤图片、所述管道损伤信息与当前视频播放时间点关联保存,并输出包含所述管道损伤信息的CSV格式文件。
本实施例中，将标注视频中带有损伤信息的视频帧进行截图，并提取所述截图中的损伤信息；将截图保存，并调用pandas工具保存截图对应的损伤信息，得到所述损伤信息对应的CSV文件。
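A minimal sketch of the export step described above, using only the standard library `csv` module; the field names and file name are assumptions for illustration (the embodiment itself calls the pandas tool, where `pd.DataFrame(records).to_csv(...)` would play the same role).

```python
# Hedged sketch: associate each damage screenshot with its damage info and
# the playback timestamp, then export the records as a CSV file.
# Field names are illustrative; the text only requires a CSV of damage info.
import csv

records = [
    {"timestamp_s": 12.4, "screenshot": "frame_00310.png",
     "damage_type": "crack", "x": 100, "y": 80, "w": 40, "h": 20},
]

with open("pipeline_damage.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(records[0]))
    writer.writeheader()
    writer.writerows(records)
```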
本实施例鉴于现有依靠人工肉眼对管道进行检测不仅工作量大且容易误判或漏检，因此引入了机器学习方式生成了可用于对管道图像进行自动检测的模型，将待检测的管道视频输入该模型逐帧进行检测，所述模型可以实现快速对图像上的损伤信息进行检测，并且直接标定出损伤位置和类型，然后通过OpenCV接口将检测结果可视化，并将检测完毕的视频保存，用户只需观看标定好的视频即可快速获知是否有损伤、损伤类型以及损伤具体位置。本申请针对管道损伤检测这一特定任务构建了管道损伤检测模型，该模型具有对管道损伤检测任务更好的适用性，可以大大提高管道损伤检测的效率。
请参阅图2,本申请实施例中管道损伤检测方法的第二个实施例包括:
201、获取多个管道巡检视频样本,并对所述管道巡检视频样本逐帧进行损伤信息标注,得到有损伤的正样本图像和无损伤的负样本图像;
本实施例中，调用预置labelme对管道巡检视频逐帧进行排查。当视频帧中有损伤时，首先提取图像中损伤点的坐标：将原图先转化为二值图，然后找到损伤处连通域的坐标，并将图像中损伤处连通域对应的坐标保存为mat文件；其次调用预置img2json.py编码器，将当前有损伤的视频帧编码并保存为json文件；再采用预置imitate_json.py融合算法，将所述mat文件和所述json文件融合，生成标注有损伤信息的图像。
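The coordinate-extraction step can be sketched as follows. This is a simplified illustration, not the patent's labelme/img2json.py pipeline: it thresholds a tiny grey-level grid and returns the bounding coordinates of the above-threshold pixels, standing in for the connected-domain coordinates that the embodiment saves to a mat file.

```python
# Hedged sketch of the annotation step: the frame is binarised and the
# region of damaged pixels is located; its bounding coordinates would
# then be stored (the patent saves them to a .mat file and merges them
# with a labelme-style .json annotation).

def damage_bbox(image, threshold):
    """Return (min_x, min_y, max_x, max_y) over pixels above the threshold."""
    pts = [(x, y) for y, row in enumerate(image)
                  for x, v in enumerate(row) if v > threshold]
    if not pts:
        return None  # no damaged pixels in this frame
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    return min(xs), min(ys), max(xs), max(ys)

img = [
    [0, 0, 0, 0],
    [0, 9, 9, 0],
    [0, 9, 0, 0],
    [0, 0, 0, 0],
]
print(damage_bbox(img, threshold=5))  # -> (1, 1, 2, 2)
```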
202、将所述正样本图像和负样本图像输入预置目标检测网络进行特征提取,得到样本特征图;
可选的,在一实施例中,所述将所述正样本图像和负样本图像输入预置目标检测网络进行特征提取,得到样本特征图包括:
将所述正样本图像和负样本图像输入预置输入层进行数据增强,得到增强样本图片;
对所述增强样本图片进行尺寸大小缩放和裁剪,得到标准样本图片;
将所述标准样本图片输入预置CSP网络进行特征提取,得到样本特征信息;
将所述样本特征信息输入预置Neck网络进行特征融合,得到样本特征图。
本实施例中,对于正样本图像和负样本图像,除了经典的几何畸变与光照畸变外,还使用了CutMix与Mosaic技术来进行数据增强,得到增强样本图像。目标检测网络需要调整原始图像的尺寸进行特征识别,模型中的图像缩放到512*512。CSP网络解决了其他大型卷积神经网络框架Backbone中网络优化的梯度信息重复问题,将梯度的变化从头到尾地集成到特征图中,将基础层的特征映射图分离出来,有效缓解了梯度消失问题,并且支持特征传播,鼓励网络重用特征,从而减少网络参数数量。Neck网络用于生成特征金字塔。特征金字塔会增强模型对于不同缩放尺度对象的检测,从而能够识别不同大小和尺度的同一个物体,从而融合了CSP网络提取的特征,得到特征图片。目标检测网络的检测速度和检测精度达到完美的协调,得到的样本特征图更加具有准确性。
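The fixed-size input step can be illustrated with a dependency-free nearest-neighbour resize; a real pipeline would resize each frame to 512×512 with a library call (e.g. an OpenCV or PyTorch transform), so the function below is only a sketch of the idea.

```python
# Hedged sketch of the resize step: every frame is scaled to the model's
# fixed 512x512 input; nearest-neighbour sampling keeps the sketch
# dependency-free (the real pipeline would use a library resize).

def resize_nearest(image, out_h, out_w):
    in_h, in_w = len(image), len(image[0])
    return [[image[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)] for y in range(out_h)]

small = [[1, 2], [3, 4]]
print(resize_nearest(small, 4, 4))
# -> [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```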
203、根据所述样本特征图,调用预置AutoFusion算法对所述目标检测网络的特征提取层连接部分进行评价指标搜索,并将评价指标最高的组合对应的目标检测网络作为最优目标检测网络;
本实施例中，AutoFusion算法对目标检测网络结构的特征提取层连接部分进行空间搜索共有三个步骤：首先进行Unary ops运算，得到一元运算值op1、op2；其次对一元运算值进行幅度函数运算，得到操作数μ1、μ2；最后对这两个步骤进行综合运算Δw=λ*b(μ1(op1),μ2(op2))，其中Δw为评价指标，λ为自定义参数，b为加法二元函数运算。Δw的值最高时，对应的目标检测网络即为最优目标检测网络。
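The scoring rule Δw = λ·b(μ1(op1), μ2(op2)) can be made concrete with a toy numeric sketch; the particular unary ops and magnitude functions below are illustrative stand-ins, since the text only fixes b as an additive binary function and λ as a user-defined parameter.

```python
# Hedged numeric sketch of the AutoFusion score Δw = λ * b(μ1(op1), μ2(op2)).
# The concrete unary ops and magnitude functions below are illustrative
# stand-ins; the text only fixes b as an additive binary function.
import math

def delta_w(x1, x2, lam=1.0,
            op1=abs, op2=lambda v: v * v,   # unary ops -> op1, op2
            mu1=math.sqrt, mu2=math.sqrt,   # magnitude functions -> μ1, μ2
            b=lambda a, c: a + c):          # additive binary function
    return lam * b(mu1(op1(x1)), mu2(op2(x2)))

print(delta_w(4.0, 3.0))  # sqrt(|4|) + sqrt(3*3) = 2 + 3 = 5.0
```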
204、调用预置Stacking集成算法对所述最优目标检测网络进行集成,得到管道损伤检测模型;
本实施例中,将所述正样本图像和负样本图像输入预置目标检测网络进行特征提取,得到样本特征图;根据所述样本特征图,调用预置AutoFusion算法对所述目标检测网络的特征提取层连接部分进行评价指标搜索,并将评价指标最高的组合对应的目标检测网络作为最优目标检测网络;调用预置Stacking集成算法对所述最优目标检测网络进行集成,得到管道损伤检测模型。
本实施例中,传统的计算学习方式包括特征提取,模型设计以及参数调优三大部分, 而自动机器学习AutoFusion算法,使整个机器学习的过程是自动完成的,只需要输入数据就可以得到输出。本实施例中,神经网络结构搜索技术,是指通过AutoFusion算法对目标检测网络的特征提取层连接处进行空间搜索,搜索寻找最优的评价指标组合,把评价指标最高时的目标检测网络作为最优目标检测网络。
本实施例中，AutoFusion算法搜索局部极大值、抑制非极大值元素：根据score矩阵和region的坐标信息，从中找到置信度较高的bounding box。首先对所有检测框按置信度降序排序，选出置信度最高的检测框并判断其是否正确；若确认其为正确的检测框，则计算该检测框与其他检测框的IOU值，当IOU值大于threshold时去除对应检测框；去除重叠度高的检测框后，剩下的检测框继续进行置信度排序，直到消除多余的检测框，找到最佳的损伤检测位置。
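The suppression procedure described above (sort by confidence, keep the best box, drop boxes whose IOU with it exceeds the threshold, repeat) is standard non-maximum suppression and can be sketched in plain Python:

```python
# Hedged sketch of the suppression step described above: boxes are sorted by
# confidence, the best box is kept, and any remaining box whose IoU with it
# exceeds the threshold is removed; the process repeats on the survivors.

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, threshold=0.5):
    """Return the indices of the boxes kept after suppression."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= threshold]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # the second box overlaps the first heavily -> [0, 2]
```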
本实施例所述Stacking集成算法训练一个多层的学习器结构,第一层用N个YoloV5模型,得到第一层的预测结果,将第一层的预测结果合并为新的特征输入图像输入学习后的YoloV5模型,通过第二次预测过程的输出得到管道损伤模型最终的预测结果。
205、获取待检测的管道巡检视频;
206、将所述管道巡检视频输入预置管道损伤检测模型进行逐帧检测,输出检测结果;
207、若所述检测结果为当前视频帧中存在管道损伤,则调用预置OpenCV接口,将所述检测结果中的五维向量可视化为检测框;
208、将所述检测框与所述管道巡检视频中的对应视频帧相结合,得到标注有管道损伤位置和损伤种类的管道巡检标注视频。
本申请实施例中,采用AutoFusion算法对目标网络结构进行优化,得到最优目标网络结构,对所述最优目标网络结构采用Stacking集成算法进行集成,得到最终的管道损伤检测模型。采用AutoFusion优化算法对网络结构进行优化,使得到的管道损伤检测模型更加适用于管道损伤检测这一特定任务。
请参阅图3,本申请实施例中管道损伤检测方法的第三个实施例包括:
301、获取多个管道巡检视频样本,并对所述管道巡检视频样本逐帧进行损伤信息标注,得到有损伤的正样本图像和无损伤的负样本图像;
302、将所述正样本图像和负样本图像输入预置目标检测网络进行特征提取,得到样本特征图;
303、调用预置AutoFusion算法，对所述目标检测网络的特征提取层连接部分进行一元运算，得到一元运算值；
304、将所述一元运算值输入预置操作层进行幅度函数运算,得到操作数;
305、将所述一元运算值和所述操作数进行组合,得到评价指标的组合;
306、将评价指标最高的组合对应的目标检测网络作为最优目标检测网络;
本实施例中，采用AutoFusion算法对目标检测网络结构进行优化共有三个步骤：首先进行一元运算，得到一元运算值op1、op2；其次对一元运算值进行幅度函数运算，得到操作数μ1、μ2；最后综合这两个步骤得到评价指标组合，并进行综合运算Δw=λ*b(μ1(op1),μ2(op2))，其中Δw为评价指标，λ为自定义参数，b为加法二元函数运算。Δw的值最高时，对应的目标检测网络即为最优目标检测网络。通过从搜索空间中选择评价指标最高的组合，从而生成性能良好的神经架构，将所述评价指标组合中评价指标最高时对应的目标检测网络作为最优目标检测网络；最优目标检测网络的检测速度提高，而且对损伤的检测更加准确。
307、获取待检测的管道巡检视频;
308、将所述管道巡检视频输入预置管道损伤检测模型进行逐帧检测,输出检测结果;
309、若所述检测结果为当前视频帧中存在管道损伤,则调用预置OpenCV接口,将所述检测结果中的五维向量可视化为检测框;
310、将所述检测框与所述管道巡检视频中的对应视频帧相结合,得到标注有管道损伤位置和损伤种类的管道巡检标注视频。
本申请实施例中,采用AutoFusion算法对目标检测网络结构进行优化,可以在没有外界的辅助下,自行进行优化并且仍然能得到一个接近最优的针对管道损伤检测的网络架构和模型。
请参阅图4,本申请实施例中管道损伤检测方法的第四个实施例包括:
401、获取多个管道巡检视频样本,并对所述管道巡检视频样本逐帧进行损伤信息标注,得到有损伤的正样本图像和无损伤的负样本图像;
402、将所述正样本图像和负样本图像输入预置目标检测网络进行特征提取,得到样本特征图;
403、根据所述样本特征图,调用预置AutoFusion算法对所述目标检测网络的特征提取层连接部分进行评价指标搜索,并将评价指标最高的组合对应的目标检测网络作为最优目标检测网络;
404、调用预置Stacking集成算法,将所述样本特征图输入所述最优目标检测网络进行集成运算,得到第一层元特征;
405、对所述第一层元特征求平均值并将所述平均值输入所述最优目标检测网络进行集成运算,得到第二层元特征;
406、根据所述第二层元特征,对所述最优目标检测网络进行参数调节,直至所述最优目标检测网络收敛,得到管道损伤检测模型;
本实施例中,利用Stacking方法训练一个元模型,该模型根据较低层的弱学习器返回的输出结果生成最后的输出。Stacking方法中第一层用N个YoloV5模型,得到第一层的预测结果,将第一层的预测结果合并为新的特征输入图像输入学习后的YoloV5模型,通过第二次预测过程的输出作为系统最终的检测结果,将所述N个YoloV5模型集成为一个管道检测模型,可以综合多个网络结构的优点,使得集成后的管道损伤模型的检测速度更快,准确率更高。用YoloV5作为基础模型做交叉验证,交叉验证包含两个过程,一是基于特征图训练模型;二是基于特征图训练生成的模型对特征图进行预测。在交叉验证完成之后得到关于当前特征图的预测值,将上述两个步骤进行两次,最终会生成检测结果。采用二元交叉熵对所述最优目标检测网络进行参数调节,直至所述最优目标检测网络收敛,得到管道损伤检测模型。
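The two-layer stacking flow can be sketched with toy stand-ins for the YoloV5 instances: each first-layer model scores a frame, the scores are averaged into a meta-feature, and a second-stage model maps that meta-feature to the final decision. The functions below are illustrative placeholders, not real detectors.

```python
# Hedged sketch of the two-layer stacking flow: N first-layer detectors each
# score a frame, their outputs are averaged into a meta-feature, and a
# second-stage model turns that meta-feature into the final decision.
# The toy "models" below are plain functions standing in for YoloV5 instances.

def stack_predict(frame, base_models, meta_model):
    first_layer = [m(frame) for m in base_models]      # layer-1 predictions
    meta_feature = sum(first_layer) / len(first_layer) # averaged meta-feature
    return meta_model(meta_feature)                    # layer-2 prediction

base = [lambda x: 0.8 * x, lambda x: 1.0 * x, lambda x: 1.2 * x]
final = stack_predict(1.0, base, meta_model=lambda f: f > 0.5)
print(final)  # averaged score 1.0 -> True (damage present)
```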
407、获取待检测的管道巡检视频;
408、将所述管道巡检视频输入预置管道损伤检测模型进行逐帧检测,输出检测结果;
409、若所述检测结果为当前视频帧中存在管道损伤,则调用预置OpenCV接口,将所述检测结果中的五维向量可视化为检测框;
410、将所述检测框与所述管道巡检视频中的对应视频帧相结合,得到标注有管道损伤位置和损伤种类的管道巡检标注视频。
本申请实施例中,Stacking集成方法通过训练一个元模型将它们组合起来,根据不同弱模型的预测结果输出一个最终的预测结果,使框架可以整合多种框架的优势,具有对管道损伤检测任务更好的适用性,Stacking集成得到的管道损伤检测模型对损伤的位置信息和类别信息的识别更准确。
上面对本申请实施例中管道损伤检测方法进行了描述，下面对本申请实施例中管道损伤检测装置进行描述，请参阅图5，本申请实施例中管道损伤检测装置的一个实施例包括：
获取模块501,用于获取待检测的管道巡检视频;
检测模块502,用于将所述管道巡检视频输入预置管道损伤检测模型进行逐帧检测,输出检测结果;
可视化模块503,用于若所述检测结果为当前视频帧中存在管道损伤,则调用预置OpenCV接口,将所述检测结果中的五维向量可视化为检测框;
输出模块504,用于将所述检测框与所述管道巡检视频中的对应视频帧相结合,得到标注有管道损伤位置和损伤种类的管道巡检标注视频。
可选的,在一实施例中,所述管道损伤检测装置还包括:
标注单元,用于获取多个管道巡检视频样本,并对所述管道巡检视频样本逐帧进行损伤信息标注,得到有损伤的正样本图像和无损伤的负样本图像;
特征提取单元,用于将所述正样本图像和负样本图像输入预置目标检测网络进行特征提取,得到样本特征图;
网络优化单元,用于根据所述样本特征图,调用预置AutoFusion算法对所述目标检测网络的特征提取层连接部分进行评价指标搜索,并将评价指标最高的组合对应的目标检测网络作为最优目标检测网络;
模型集成单元,用于调用预置Stacking集成算法对所述最优目标检测网络进行集成,得到管道损伤检测模型。
可选的,在一实施例中,所述特征提取单元具体用于:
将所述正样本图像和负样本图像输入预置输入层进行数据增强,得到增强样本图片;
对所述增强样本图片进行尺寸大小缩放和裁剪,得到标准样本图片;
将所述标准样本图片输入预置CSP网络进行特征提取,得到样本特征信息;
将所述样本特征信息输入预置Neck网络进行特征融合,得到样本特征图。
可选的,在一实施例中,所述网络优化单元具体用于:
调用预置AutoFusion算法，对所述目标检测网络的特征提取层连接部分进行一元运算，得到一元运算值；
将所述一元运算值输入预置操作层进行幅度函数运算,得到操作数;
将所述一元运算值和所述操作数进行组合,得到评价指标的组合;
将评价指标最高的组合对应的目标检测网络作为最优目标检测网络。
可选的,在一实施例中,所述模型集成单元具体用于:
调用预置Stacking集成算法,将所述样本特征图输入所述最优目标检测网络进行集成运算,得到第一层元特征;
对所述第一层元特征求平均值并将所述平均值输入所述最优目标检测网络进行集成运算,得到第二层元特征;
根据所述第二层元特征,对所述最优目标检测网络进行参数调节,直至所述最优目标检测网络收敛,得到管道损伤检测模型。
可选的,在一实施例中,所述检测模块502具体用于:
将所述管道巡检视频输入预置管道损伤检测模型中的CSP网络逐帧进行特征提取,得到特征信息;
将所述特征信息输入预置管道损伤检测模型中的Neck网络进行特征融合,得到特征图;
对所述特征图进行类别信息和位置信息分析,输出检测结果。
可选的,在一实施例中,所述管道损伤检测装置还包括:
储存模块,用于播放所述管道巡检标注视频,并判断当前视频帧中是否存在管道损伤;若是,则对当前视频帧进行截图,得到管道损伤图片,并提取所述管道损伤图片中的管道损伤信息;将所述管道损伤图片、所述管道损伤信息与当前视频播放时间点关联保存,并输出包含所述管道损伤信息的CSV格式文件。
本实施例鉴于现有依靠人工肉眼对管道进行检测不仅工作量大且容易误判或漏检,因此引入了机器学习方式生成了可用于对管道图像进行自动检测的模型,将待检测的管道视频输入该模型逐帧进行检测,所述模型可以实现快速对图像上的损伤信息进行检测,并且直接标定出损伤位置和类型,然后通过OpenCV接口将检测结果可视化,并将检测完毕的视频保存,用户只需观看标定好的视频即可快速获知是否有损伤、损伤类型以及损伤具体位置。本申请针对管道损伤检测这一特定任务构建了管道损伤检测模型,该模型具有对管道损伤检测任务更好的适用性,可以大大提高管道损伤检测的效率。
上面图5从模块化功能实体的角度对本申请实施例中的管道损伤检测装置进行详细描述,下面从硬件处理的角度对本申请实施例中管道损伤检测设备进行详细描述。
图6是本申请实施例提供的一种管道损伤检测设备的结构示意图,该管道损伤检测设备600可因配置或性能不同而产生比较大的差异,可以包括一个或一个以上处理器(central processing units,CPU)610(例如,一个或一个以上处理器)和存储器620,一个或一个以上存储应用程序633或数据632的存储介质630(例如一个或一个以上海量存储设备)。其中,存储器620和存储介质630可以是短暂存储或持久存储。存储在存储介质630的程序可以包括一个或一个以上模块(图示没标出),每个模块可以包括对管道损伤检测设备600中的一系列指令操作。更进一步地,处理器610可以设置为与存储介质630通信,在管道损伤检测设备600上执行存储介质630中的一系列指令操作。
管道损伤检测设备600还可以包括一个或一个以上电源640，一个或一个以上有线或无线网络接口650，一个或一个以上输入输出接口660，和/或，一个或一个以上操作系统631，例如Windows Server，Mac OS X，Unix，Linux，FreeBSD等等。本领域技术人员可以理解，图6示出的管道损伤检测设备结构并不构成对管道损伤检测设备的限定，可以包括比图示更多或更少的部件，或者组合某些部件，或者不同的部件布置。本申请还提供一种管道损伤检测设备，所述管道损伤检测设备包括存储器和处理器，存储器中存储有计算机可读指令，计算机可读指令被处理器执行时，使得处理器执行上述各实施例中的所述管道损伤检测方法的步骤。
本申请还提供一种管道损伤检测设备,包括:存储器和至少一个处理器,所述存储器中存储有指令,所述存储器和所述至少一个处理器通过线路互连;所述至少一个处理器调用所述存储器中的所述指令,以使得所述管道损伤检测设备执行上述管道损伤检测方法中的步骤。
本申请还提供一种计算机可读存储介质,该计算机可读存储介质可以为非易失性计算机可读存储介质,也可以为易失性计算机可读存储介质。计算机可读存储介质存储有计算机指令,当所述计算机指令在计算机上运行时,使得计算机执行如下步骤:
获取待检测的管道巡检视频;
将所述管道巡检视频输入预置管道损伤检测模型进行逐帧检测,输出检测结果;
若所述检测结果为当前视频帧中存在管道损伤,则调用预置OpenCV接口,将所述检测结果中的五维向量可视化为检测框;
将所述检测框与所述管道巡检视频中的对应视频帧相结合,得到标注有管道损伤位置和损伤种类的管道巡检标注视频。
所属领域的技术人员可以清楚地了解到，为描述的方便和简洁，上述描述的系统、装置和单元的具体工作过程，可以参考前述方法实施例中的对应过程，在此不再赘述。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。

Claims (20)

  1. 一种管道损伤检测方法,包括:
    获取待检测的管道巡检视频;
    将所述管道巡检视频输入预置管道损伤检测模型进行逐帧检测,输出检测结果;
    若所述检测结果为当前视频帧中存在管道损伤,则调用预置OpenCV接口,将所述检测结果中的五维向量可视化为检测框;
    将所述检测框与所述管道巡检视频中的对应视频帧相结合,得到标注有管道损伤位置和损伤种类的管道巡检标注视频。
  2. 根据权利要求1所述的管道损伤检测方法，其中，在所述获取待检测的管道巡检视频之前，还包括：
    获取多个管道巡检视频样本,并对所述管道巡检视频样本逐帧进行损伤信息标注,得到有损伤的正样本图像和无损伤的负样本图像;
    将所述正样本图像和负样本图像输入预置目标检测网络进行特征提取,得到样本特征图;
    根据所述样本特征图,调用预置AutoFusion算法对所述目标检测网络的特征提取层连接部分进行评价指标搜索,并将评价指标最高的组合对应的目标检测网络作为最优目标检测网络;
    调用预置Stacking集成算法对所述最优目标检测网络进行集成,得到管道损伤检测模型。
  3. 根据权利要求2所述的管道损伤检测方法,其中,所述将所述正样本图像和负样本图像输入预置目标检测网络进行特征提取,得到样本特征图包括:
    将所述正样本图像和负样本图像输入预置输入层进行数据增强,得到增强样本图片;
    对所述增强样本图片进行尺寸大小缩放和裁剪,得到标准样本图片;
    将所述标准样本图片输入预置CSP网络进行特征提取,得到样本特征信息;
    将所述样本特征信息输入预置Neck网络进行特征融合,得到样本特征图。
  4. 根据权利要求2所述的管道损伤检测方法,其中,所述根据所述样本特征图,调用预置AutoFusion算法对所述目标检测网络的特征提取层连接部分进行评价指标搜索,并将评价指标最高的组合对应的目标检测网络作为最优目标检测网络包括:
    调用预置AutoFusion算法，对所述目标检测网络的特征提取层连接部分进行一元运算，得到一元运算值；
    将所述一元运算值输入预置操作层进行幅度函数运算,得到操作数;
    将所述一元运算值和所述操作数进行组合,得到评价指标的组合;
    将评价指标最高的组合对应的目标检测网络作为最优目标检测网络。
  5. 根据权利要求2所述的管道损伤检测方法,其中,所述调用预置Stacking集成算法对所述最优目标检测网络进行集成,得到管道损伤检测模型包括:
    调用预置Stacking集成算法,将所述样本特征图输入所述最优目标检测网络进行集成运算,得到第一层元特征;
    对所述第一层元特征求平均值并将所述平均值输入所述最优目标检测网络进行集成运算,得到第二层元特征;
    根据所述第二层元特征,对所述最优目标检测网络进行参数调节,直至所述最优目标检测网络收敛,得到管道损伤检测模型。
  6. 根据权利要求3所述的管道损伤检测方法,其中,所述将所述管道巡检视频输入预置管道损伤检测模型进行逐帧检测,输出检测结果包括:
    将所述管道巡检视频输入预置管道损伤检测模型中的CSP网络逐帧进行特征提取,得到特征信息;
    将所述特征信息输入预置管道损伤检测模型中的Neck网络进行特征融合,得到特征图;
    对所述特征图进行类别信息和位置信息分析,输出检测结果。
  7. 根据权利要求1-6中任一项所述的管道损伤检测方法,其中,在所述将所述检测框与所述管道巡检视频中的对应视频帧相结合,得到标注有管道损伤位置和损伤种类的管道巡检标注视频之后,还包括:
    播放所述管道巡检标注视频,并判断当前视频帧中是否存在管道损伤;
    若是,则对当前视频帧进行截图,得到管道损伤图片,并提取所述管道损伤图片中的管道损伤信息;
    将所述管道损伤图片、所述管道损伤信息与当前视频播放时间点关联保存,并输出包含所述管道损伤信息的CSV格式文件。
  8. 一种管道损伤检测设备,包括存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机可读指令,所述处理器执行所述计算机可读指令时实现如下步骤:
    获取待检测的管道巡检视频;
    将所述管道巡检视频输入预置管道损伤检测模型进行逐帧检测,输出检测结果;
    若所述检测结果为当前视频帧中存在管道损伤,则调用预置OpenCV接口,将所述检测结果中的五维向量可视化为检测框;
    将所述检测框与所述管道巡检视频中的对应视频帧相结合,得到标注有管道损伤位置和损伤种类的管道巡检标注视频。
  9. 根据权利要求8所述的管道损伤检测设备,所述处理器执行所述计算机程序时还实现以下步骤:
    获取多个管道巡检视频样本,并对所述管道巡检视频样本逐帧进行损伤信息标注,得到有损伤的正样本图像和无损伤的负样本图像;
    将所述正样本图像和负样本图像输入预置目标检测网络进行特征提取,得到样本特征图;
    根据所述样本特征图,调用预置AutoFusion算法对所述目标检测网络的特征提取层连接部分进行评价指标搜索,并将评价指标最高的组合对应的目标检测网络作为最优目标检测网络;
    调用预置Stacking集成算法对所述最优目标检测网络进行集成,得到管道损伤检测模型。
  10. 根据权利要求9所述的管道损伤检测设备,所述处理器执行所述计算机程序时还实现以下步骤:
    将所述正样本图像和负样本图像输入预置输入层进行数据增强,得到增强样本图片;
    对所述增强样本图片进行尺寸大小缩放和裁剪,得到标准样本图片;
    将所述标准样本图片输入预置CSP网络进行特征提取,得到样本特征信息;
    将所述样本特征信息输入预置Neck网络进行特征融合,得到样本特征图。
  11. 根据权利要求9所述的管道损伤检测设备,所述处理器执行所述计算机程序时还实现以下步骤:
    调用预置AutoFusion算法，对所述目标检测网络的特征提取层连接部分进行一元运算，得到一元运算值；
    将所述一元运算值输入预置操作层进行幅度函数运算,得到操作数;
    将所述一元运算值和所述操作数进行组合,得到评价指标的组合;
    将评价指标最高的组合对应的目标检测网络作为最优目标检测网络。
  12. 根据权利要求9所述的管道损伤检测设备,所述处理器执行所述计算机程序时还实现以下步骤:
    调用预置Stacking集成算法,将所述样本特征图输入所述最优目标检测网络进行集成运算,得到第一层元特征;
    对所述第一层元特征求平均值并将所述平均值输入所述最优目标检测网络进行集成运算,得到第二层元特征;
    根据所述第二层元特征,对所述最优目标检测网络进行参数调节,直至所述最优目标检测网络收敛,得到管道损伤检测模型。
  13. 根据权利要求10所述的管道损伤检测设备,所述处理器执行所述计算机程序时还实现以下步骤:
    将所述管道巡检视频输入预置管道损伤检测模型中的CSP网络逐帧进行特征提取,得到特征信息;
    将所述特征信息输入预置管道损伤检测模型中的Neck网络进行特征融合,得到特征图;
    对所述特征图进行类别信息和位置信息分析,输出检测结果。
  14. 根据权利要求8-13中任一项所述的管道损伤检测设备,所述处理器执行所述计算机程序时还实现以下步骤:
    播放所述管道巡检标注视频,并判断当前视频帧中是否存在管道损伤;
    若是,则对当前视频帧进行截图,得到管道损伤图片,并提取所述管道损伤图片中的管道损伤信息;
    将所述管道损伤图片、所述管道损伤信息与当前视频播放时间点关联保存,并输出包含所述管道损伤信息的CSV格式文件。
  15. 一种计算机可读存储介质,所述计算机可读存储介质中存储计算机指令,当所述计算机指令在计算机上运行时,使得计算机执行如下步骤:
    获取待检测的管道巡检视频;
    将所述管道巡检视频输入预置管道损伤检测模型进行逐帧检测,输出检测结果;
    若所述检测结果为当前视频帧中存在管道损伤,则调用预置OpenCV接口,将所述检测结果中的五维向量可视化为检测框;
    将所述检测框与所述管道巡检视频中的对应视频帧相结合,得到标注有管道损伤位置和损伤种类的管道巡检标注视频。
  16. 根据权利要求15所述的计算机可读存储介质,当所述计算机指令在计算机上运行时,使得计算机还执行以下步骤:
    获取多个管道巡检视频样本,并对所述管道巡检视频样本逐帧进行损伤信息标注,得到有损伤的正样本图像和无损伤的负样本图像;
    将所述正样本图像和负样本图像输入预置目标检测网络进行特征提取,得到样本特征图;
    根据所述样本特征图,调用预置AutoFusion算法对所述目标检测网络的特征提取层连接部分进行评价指标搜索,并将评价指标最高的组合对应的目标检测网络作为最优目标检测网络;
    调用预置Stacking集成算法对所述最优目标检测网络进行集成,得到管道损伤检测模型。
  17. 根据权利要求16所述的计算机可读存储介质,当所述计算机指令在计算机上运行时,使得计算机还执行以下步骤:
    将所述正样本图像和负样本图像输入预置输入层进行数据增强,得到增强样本图片;
    对所述增强样本图片进行尺寸大小缩放和裁剪,得到标准样本图片;
    将所述标准样本图片输入预置CSP网络进行特征提取,得到样本特征信息;
    将所述样本特征信息输入预置Neck网络进行特征融合,得到样本特征图。
  18. 根据权利要求16所述的计算机可读存储介质,当所述计算机指令在计算机上运行时,使得计算机还执行以下步骤:
    调用预置AutoFusion算法，对所述目标检测网络的特征提取层连接部分进行一元运算，得到一元运算值；
    将所述一元运算值输入预置操作层进行幅度函数运算,得到操作数;
    将所述一元运算值和所述操作数进行组合,得到评价指标的组合;
    将评价指标最高的组合对应的目标检测网络作为最优目标检测网络。
  19. 根据权利要求16所述的计算机可读存储介质,当所述计算机指令在计算机上运行时,使得计算机还执行以下步骤:
    调用预置Stacking集成算法,将所述样本特征图输入所述最优目标检测网络进行集成运算,得到第一层元特征;
    对所述第一层元特征求平均值并将所述平均值输入所述最优目标检测网络进行集成运算,得到第二层元特征;
    根据所述第二层元特征,对所述最优目标检测网络进行参数调节,直至所述最优目标检测网络收敛,得到管道损伤检测模型。
  20. 一种管道损伤检测装置,所述管道损伤检测装置包括:
    获取模块,用于获取待检测的管道巡检视频;
    检测模块,用于将所述管道巡检视频输入预置管道损伤检测模型进行逐帧检测,输出检测结果;
    可视化模块,用于若所述检测结果为当前视频帧中存在管道损伤,则调用预置OpenCV接口,将所述检测结果中的五维向量可视化为检测框;
    输出模块,用于将所述检测框与所述管道巡检视频中的对应视频帧相结合,得到标注有管道损伤位置和损伤种类的管道巡检标注视频。
PCT/CN2021/083573 2020-12-02 2021-03-29 管道损伤检测方法、装置、设备及存储介质 WO2022116433A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011385962.5 2020-12-02
CN202011385962.5A CN112446870B (zh) 2020-12-02 2020-12-02 管道损伤检测方法、装置、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2022116433A1 true WO2022116433A1 (zh) 2022-06-09

Family

ID=74740456

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/083573 WO2022116433A1 (zh) 2020-12-02 2021-03-29 管道损伤检测方法、装置、设备及存储介质

Country Status (2)

Country Link
CN (1) CN112446870B (zh)
WO (1) WO2022116433A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116123988A (zh) * 2023-04-14 2023-05-16 国家石油天然气管网集团有限公司 一种基于数据处理的管道几何变形检测方法及系统
CN116382224A (zh) * 2023-06-05 2023-07-04 云印技术(深圳)有限公司 一种基于数据分析的包装设备监测方法及系统

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112446870B (zh) * 2020-12-02 2024-07-09 平安科技(深圳)有限公司 管道损伤检测方法、装置、设备及存储介质
CN112966678B (zh) * 2021-03-11 2023-01-24 南昌航空大学 一种文本检测方法及系统
CN113362285B (zh) * 2021-05-21 2023-02-07 同济大学 一种钢轨表面伤损细粒度图像分类与检测方法
CN114429639B (zh) * 2022-01-27 2024-05-03 广联达科技股份有限公司 一种管线标注识别方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109815997A (zh) * 2019-01-04 2019-05-28 平安科技(深圳)有限公司 基于深度学习的识别车辆损伤的方法和相关装置
CN110264444A (zh) * 2019-05-27 2019-09-20 阿里巴巴集团控股有限公司 基于弱分割的损伤检测方法及装置
CN110390666A (zh) * 2019-06-14 2019-10-29 平安科技(深圳)有限公司 道路损伤检测方法、装置、计算机设备及存储介质
CN110490079A (zh) * 2019-07-19 2019-11-22 万翼科技有限公司 巡检数据处理方法、装置、计算机设备和存储介质
CN112446870A (zh) * 2020-12-02 2021-03-05 平安科技(深圳)有限公司 管道损伤检测方法、装置、设备及存储介质

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110287798B (zh) * 2019-05-27 2023-04-18 魏运 基于特征模块化和上下文融合的矢量网络行人检测方法
CN111667011B (zh) * 2020-06-08 2023-07-14 平安科技(深圳)有限公司 损伤检测模型训练、车损检测方法、装置、设备及介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109815997A (zh) * 2019-01-04 2019-05-28 平安科技(深圳)有限公司 基于深度学习的识别车辆损伤的方法和相关装置
CN110264444A (zh) * 2019-05-27 2019-09-20 阿里巴巴集团控股有限公司 基于弱分割的损伤检测方法及装置
CN110390666A (zh) * 2019-06-14 2019-10-29 平安科技(深圳)有限公司 道路损伤检测方法、装置、计算机设备及存储介质
CN110490079A (zh) * 2019-07-19 2019-11-22 万翼科技有限公司 巡检数据处理方法、装置、计算机设备和存储介质
CN112446870A (zh) * 2020-12-02 2021-03-05 平安科技(深圳)有限公司 管道损伤检测方法、装置、设备及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG, YUWEI: "Research on Classification and Intelligent Defects Detection of Endoscopic Images in City Sewer", CHINESE MASTER'S THESES FULL-TEXT DATABASE, ENGINEERING SCIENCE & TECHNOLOGY II, 15 February 2020 (2020-02-15), XP055936283, [retrieved on 20220628] *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116123988A (zh) * 2023-04-14 2023-05-16 国家石油天然气管网集团有限公司 一种基于数据处理的管道几何变形检测方法及系统
CN116382224A (zh) * 2023-06-05 2023-07-04 云印技术(深圳)有限公司 一种基于数据分析的包装设备监测方法及系统
CN116382224B (zh) * 2023-06-05 2023-08-04 云印技术(深圳)有限公司 一种基于数据分析的包装设备监测方法及系统

Also Published As

Publication number Publication date
CN112446870A (zh) 2021-03-05
CN112446870B (zh) 2024-07-09

Similar Documents

Publication Publication Date Title
WO2022116433A1 (zh) 管道损伤检测方法、装置、设备及存储介质
WO2020199931A1 (zh) 人脸关键点检测方法及装置、存储介质和电子设备
Nguyen et al. Yolo based real-time human detection for smart video surveillance at the edge
CN110956126A (zh) 一种联合超分辨率重建的小目标检测方法
CN111325347A (zh) 基于可解释视觉推理模型的危险预警描述自动生成方法
WO2022227770A1 (zh) 目标对象检测模型的训练方法、目标对象检测方法和设备
CN107301376B (zh) 一种基于深度学习多层刺激的行人检测方法
CN113487610B (zh) 疱疹图像识别方法、装置、计算机设备和存储介质
Li et al. A robust real‐time method for identifying hydraulic tunnel structural defects using deep learning and computer vision
Jiao et al. Vehicle wheel weld detection based on improved YOLO v4 algorithm
Zhang et al. Underwater target detection algorithm based on improved YOLOv4 with SemiDSConv and FIoU loss function
CN117789160A (zh) 一种基于聚类优化的多模态融合目标检测方法及系统
CN116543295A (zh) 一种基于退化图像增强的轻量化水下目标检测方法及系统
Huang et al. Estimating 6d object poses with temporal motion reasoning for robot grasping in cluttered scenes
CN113947771B (zh) 图像识别方法、装置、设备、存储介质以及程序产品
CN115909408A (zh) 一种基于Transformer网络的行人重识别方法及装置
Chen et al. Poker Watcher: Playing Card Detection Based on EfficientDet and Sandglass Block
Pan et al. Intelligent recognition of automatic production line of metal sodium rod
Zhang et al. Research on text location and recognition in natural images with deep learning
CN117764969B (zh) 轻量化多尺度特征融合缺陷检测方法
Barbosa et al. Automatic analogue gauge reading using smartphones for industrial scenarios
CN116935477B (zh) 一种基于联合注意力的多分支级联的人脸检测方法及装置
CN118279575B (zh) 火源检测方法、装置、电子设备及存储介质
Li et al. Building Recognition of Aerial Images Based on Improved Unet Network
CN113420664B (zh) 基于图像的安全隐患检测方法及装置、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21899477

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21899477

Country of ref document: EP

Kind code of ref document: A1