CN116416208A - Pipeline defect detection method and device, electronic equipment and storage medium - Google Patents

Pipeline defect detection method and device, electronic equipment and storage medium

Info

Publication number
CN116416208A
Authority
CN
China
Prior art keywords: defect, frame, image, pipeline, detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310025186.5A
Other languages
Chinese (zh)
Inventor
张轩
王亚立
乔宇
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202310025186.5A
Publication of CN116416208A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The embodiments of the present application provide a pipeline defect detection method and apparatus, an electronic device, and a storage medium, relating to the field of object detection. The method comprises the following steps: acquiring a pipeline video and sampling it to obtain a plurality of image frames to be detected; performing frame-level defect detection on each of the image frames to obtain a detection result for each frame, the detection result indicating at least one of: whether the pipeline is defective, whether the image capture scene is inside the pipeline, the defect category and its confidence, the defect grade of the defect category, and the defect mask and its confidence; screening and/or de-duplicating the image frames according to their detection results to obtain key frames; and outputting the detection results of the key frames. The method and apparatus address the problem in the related art that searching pipeline video for defective key frames is time-consuming and labor-intensive.

Description

Pipeline defect detection method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of target detection, in particular to a pipeline defect detection method, a pipeline defect detection device, electronic equipment and a storage medium.
Background
The underground pipe network is an important piece of infrastructure in urban construction and development. To keep it operating normally, it must be inspected regularly so that defects can be repaired promptly once they are found.
In the prior art, large amounts of pipeline video are typically collected by pipeline QV (Quick-View) inspection and pipeline CCTV (Closed Circuit Television) inspection, and the video is then analyzed by an inspector for defects and damage. A defect or damage identified in the video is marked on a particular frame, which then represents the defect or damage over the corresponding period of time; that frame becomes the key frame of the defect or damage. However, finding defective or damaged key frames in a large volume of pipeline video is quite time-consuming and labor-intensive.
It can be seen that how to automatically, quickly, and accurately find the key frames with defects or damage in pipeline video remains an open problem.
Disclosure of Invention
The embodiments of the present application provide a pipeline defect detection method and apparatus, an electronic device, and a storage medium, which can solve the problem in the related art that searching pipeline video for defective or damaged key frames is time-consuming and labor-intensive. The technical solution is as follows:
According to one aspect of the embodiments of the present application, a pipeline defect detection method includes: acquiring a pipeline video and sampling it to obtain a plurality of image frames to be detected; performing frame-level defect detection on each of the image frames to obtain a detection result for each frame, the detection result indicating at least one of: whether the pipeline is defective, whether the image capture scene is inside the pipeline, the defect category and its confidence, the defect grade of the defect category, and the defect mask and its confidence; screening and/or de-duplicating the image frames according to their detection results to obtain key frames; and outputting the detection results of the key frames.
According to one aspect of the embodiments of the present application, a pipeline defect detection apparatus includes: an image acquisition module, configured to acquire a pipeline video and sample it to obtain a plurality of image frames to be detected; a defect detection module, configured to perform frame-level defect detection on each of the image frames to obtain a detection result for each frame, the detection result indicating at least one of: whether the pipeline is defective, whether the image capture scene is inside the pipeline, the defect category and its confidence, the defect grade of the defect category, and the defect mask and its confidence; a screening and de-duplication module, configured to screen and/or de-duplicate the image frames according to their detection results to obtain key frames; and a result output module, configured to output the detection results of the key frames.
In an exemplary embodiment, the image acquisition module includes: a frame rate acquisition unit configured to acquire a frame rate matching the selected input mode in response to a selection operation performed for the different input modes; and the sampling unit is used for sampling the image frames in the pipeline video according to the acquired frame rate to obtain a plurality of image frames to be detected.
In an exemplary embodiment, the defect detection module includes: the characteristic extraction unit is used for respectively carrying out multi-dimensional characteristic extraction on a plurality of image frames to be detected to obtain multi-dimensional characteristic information of each image frame; and the category prediction unit is used for predicting the category of the pipeline defect of each image frame according to the multidimensional characteristic information of each image frame to obtain a detection result of each image frame.
In an exemplary embodiment, the frame-level defect detection is implemented based on a defect detection model that is a trained, deep-learning model with the ability to perform frame-level defect detection on image frames.
In an exemplary embodiment, the screening and de-duplication module includes: a first determining unit, configured to determine, for each image frame, the confidence of each defect category and of each defect mask from the frame's detection result, and to derive from these the number of defect masks, the maximum defect mask confidence, the maximum defect category confidence, and the total confidence over all defect categories; a first screening unit, configured to take the image frame as a key frame if its detection result indicates that the pipeline is defective and the maximum defect category confidence exceeds a first confidence threshold; a second screening unit, configured to take the image frame as a key frame if its number of defect masks exceeds a set number and the maximum defect mask confidence exceeds a second confidence threshold; and a third screening unit, configured to take the image frame as a key frame if the total confidence over all defect categories exceeds a third confidence threshold.
In an exemplary embodiment, the screening and de-duplication module includes: an image traversing unit, configured to traverse the image frames, the traversed frame being the current image frame; a second determining unit, configured to determine the defect categories and their confidences for the current image frame from its detection result; and an image retaining unit, configured to retain the current image frame if the interval between the current image frame and the previous image frame is shorter than a set time and the two frames share a defect category whose confidence exceeds a fourth confidence threshold.
In an exemplary embodiment, the result output module includes: the image output unit is used for marking the corresponding detection result in the key frame and outputting the marked key frame; the video output unit is used for marking the key frames in the pipeline video based on the detection results of the key frames and outputting the marked pipeline video; and the text output unit is used for outputting the detection result of the key frame in a text mode.
According to one aspect of the embodiments of the present application, an electronic device includes: at least one processor, at least one memory, and at least one communication bus, wherein the memory stores a computer program and the processor reads the computer program from the memory through the communication bus; when executed by the processor, the computer program implements the pipeline defect detection method described above.
According to one aspect of the embodiments of the present application, a storage medium has stored thereon a computer program which, when executed by a processor, implements the pipeline defect detection method described above.
According to one aspect of the embodiments of the present application, a computer program product comprises a computer program stored in a storage medium; a processor of a computer device reads the computer program from the storage medium and executes it, causing the computer device to perform the pipeline defect detection method described above.
The technical solution provided by the present application brings the following beneficial effects:
In the technical solution above, frame-level defect detection is performed on the image frames sampled from the pipeline video, so the video is automatically localized and classified at the frame level and pipeline defects are accurately positioned in both time and space. The image frames are then screened and de-duplicated according to the frame-level detection results, which removes a large amount of redundancy and avoids manually checking the pipeline video for defects or damage. Pipeline defect detection thus becomes automatic, efficient, and accurate, which effectively solves the problem in the related art that searching pipeline video for defective key frames is time-consuming and labor-intensive.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings that are required to be used in the description of the embodiments of the present application will be briefly described below.
FIG. 1 is a schematic illustration of an implementation environment in which the present application is directed;
FIG. 2 is a flowchart illustrating a pipeline defect detection method according to an exemplary embodiment;
FIG. 3 is a flowchart illustrating step 310, according to an exemplary embodiment;
FIG. 4 is a schematic diagram of a result output shown in accordance with an exemplary embodiment;
FIG. 5 is a flowchart illustrating step 320, according to an exemplary embodiment;
FIG. 6 is a flowchart illustrating step 330 according to an exemplary embodiment;
FIG. 7 is a flowchart illustrating key frame screening and deduplication in accordance with an exemplary embodiment;
FIG. 8 is a schematic diagram of an application scenario illustrated in accordance with an exemplary embodiment;
FIG. 9 is a block diagram illustrating a pipeline defect detection apparatus according to an exemplary embodiment;
FIG. 10 is a hardware block diagram of an electronic device shown in accordance with an exemplary embodiment;
FIG. 11 is a block diagram illustrating the structure of an electronic device according to an exemplary embodiment.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of illustrating the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
As described above, the underground pipe network is one of the most important pieces of infrastructure in urban construction and development, and it must be inspected periodically so that defects can be found and repaired in time.
To inspect underground pipe networks, two pipeline defect detection approaches have been proposed in the prior art: pipeline QV (Quick-View) inspection and pipeline CCTV (Closed Circuit Television) inspection. In QV inspection, a high-definition zoom camera, assisted by lighting, captures clear images of the pipe interior, from which defects or damage are then identified. In CCTV inspection, a camera device is sent into the pipeline in a closed-circuit television recording mode, and the image data are transmitted to a host computer for pipeline defect detection.
A defect in a pipeline video is usually visible continuously over a certain period of the video, but it is identified by marking a single frame within that period, called the key frame of the defect, to represent the defect over that period. For this key frame, it is desirable that the pipe defect can be seen clearly and that its severity can be judged.
However, most of the above pipeline defect detection methods perform video-level defect detection: the pipeline video is analyzed and detected as a whole and the defects it contains are output, with no localization in time. For a long video containing multiple defects, the video must therefore be cut manually to localize the defects temporally.
Furthermore, although a few methods do perform frame-level defect detection, i.e., determine whether each image frame in the pipeline video contains defects and which categories they belong to, the output image frames either contain a great deal of redundancy or fail to yield the key frames, because these methods lack a de-duplication step or rely only on how densely the output frames are packed in time.
It can be seen that in the related art, finding key frames in pipeline video is still time-consuming and labor-intensive.
Therefore, the pipeline defect detection method provided by the present application accurately localizes and classifies defects in the pipeline, evaluates their severity, and finds key frames by screening and de-duplicating the image frames to be detected in the pipeline video. Pipeline defects are thus accurately positioned in both time and space without manual inspection, which effectively improves the degree of automation and the efficiency of pipeline defect detection. Accordingly, the method is applicable to a pipeline defect detection apparatus that can be deployed on an electronic device, where the electronic device may be a computer device with a von Neumann architecture, such as a desktop computer, a notebook computer, or a server.
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
FIG. 1 is a schematic diagram of an implementation environment of a method for detecting a pipe defect. The implementation environment includes an image acquisition device 130, a gateway 150, a server side 170, and a router 190.
Specifically, the image capturing apparatus 130 may be an electronic apparatus for pipeline defect detection such as a pipeline QV periscope, a CCTV pipeline inspection robot, or the like.
The server side 170 may be an electronic device with a communication function, such as a desktop computer, a notebook computer, or a server; it may also be a cluster of computer devices formed by multiple servers, or even a cloud computing center formed by multiple servers. The server is configured to provide background services, including but not limited to the pipeline defect detection service.
In one application scenario, the image capturing device 130 accesses the gateway 150 through a local area network in order to exchange data with the server side 170: the gateway 150 first establishes a local area network, and the image capturing device 130 joins it by connecting to the gateway 150. Such local area networks include, but are not limited to, ZIGBEE, Bluetooth, and WIFI; data transmission with the server side 170 is then carried out through the router 190. In another application scenario, the image capturing device 130 interacts with the server side 170 through a wide area network, where the communication connection may be established in a wired or wireless manner, including but not limited to 2G, 3G, 4G, 5G, WIFI, and so on. The transmitted data may be, for example, pipeline video.
Through this interaction, the image capturing device 130 captures pipeline video and transmits it to the server side 170 to request the pipeline defect detection service.
The server side 170 can then perform pipeline defect detection on the received pipeline video: it samples the pipeline video to obtain a plurality of image frames to be detected, performs frame-level defect detection on each of them to obtain per-frame detection results, screens and/or de-duplicates the image frames according to those results to obtain key frames, and finally outputs the detection results of the key frames.
Based on the above process, automatic, efficient and accurate pipeline defect detection is realized.
Of course, depending on actual operational requirements, the image capturing device 130 and the server side 170 may be integrated in the same electronic device, so that the whole pipeline defect detection process is completed by a single electronic device.
Referring to fig. 2, an embodiment of the present application provides a method for detecting a pipe defect, which is suitable for an electronic device, and the electronic device may be the server 170 in the implementation environment shown in fig. 1.
In the following method embodiments, for convenience of description, the execution subject of each step of the method is described as an electronic device, but this configuration is not particularly limited.
As shown in fig. 2, the method may include the steps of:
step 310, obtaining a pipeline video, and sampling the pipeline video to obtain a plurality of image frames to be detected.
Specifically, the pipeline video can be obtained by filming a pipeline with a pipeline QV periscope or a CCTV pipeline inspection robot, and the image capture scene may include both the inside and the outside of the pipeline.
After the pipeline video is obtained, frame extraction is performed on the large number of image frames it contains, so as to reduce the number of frames used for subsequent pipeline defect detection and thereby improve detection efficiency. In this embodiment, frame extraction is carried out by sampling the image frames in the pipeline video.
In one possible implementation, the sampling is performed at different frame rates. The frame rate may be flexibly set according to the actual requirement of the application scenario, which is not limited herein.
Specifically, as shown in fig. 3, the step 310 may include the following steps:
In step 311, in response to the selection operation for the different input modes, a frame rate matching the selected input mode is acquired.
Step 312, sampling the image frames in the pipeline video according to the acquired frame rate to obtain a plurality of image frames to be detected.
Specifically, the present application provides four different input modes: performance, balance, precision, and custom. The corresponding frame rates are, respectively: 1 frame per second, 10 frames per second, the native frame rate of the pipeline video, and a user-defined frame rate.
So that the user can select among the input modes, and thus set the frame rate used for sampling, a selection entry is provided in the electronic device, and the user selects an input mode by triggering a selection operation at that entry. For example, suppose the selection entry is a drop-down list containing the four selectable input modes; if the user clicks one of them, that mode is taken as the selected input mode, and the click is the selection operation performed at the entry. It should be noted that the concrete form of the selection operation also depends on the input components of the electronic device: if the device has a touch screen, the selection operation may be a gesture such as a tap or a slide; if the device has a mouse, it may be a mechanical operation such as a single click, a double click, or a drag, which is not limited here.
In response to the user's selection among the four input modes, the matching frame rate is acquired as 1 frame per second, 10 frames per second, the native frame rate of the pipeline video, or the user-defined frame rate, and frames are extracted from the pipeline video at that rate, yielding a plurality of image frames to be detected at different frame rates.
Providing four different input modes accommodates the computing power of different devices in different application scenarios and broadens the applicability of the method.
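To make the sampling step concrete, the following sketch shows one way the four input modes might be mapped to sampling rates, assuming OpenCV is used to read the video; the function name, the mode keys, and the default rates are illustrative choices, not terms from this application.

```python
import cv2

# Frame rates for the "performance" and "balance" modes; "precision" uses the
# native video rate and "custom" a user-supplied value, resolved at call time.
MODE_FPS = {"performance": 1.0, "balance": 10.0}

def sample_frames(video_path, mode="balance", custom_fps=None):
    """Sample the pipeline video at the frame rate matching the selected input mode."""
    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    if mode == "precision":
        target_fps = native_fps
    elif mode == "custom":
        target_fps = custom_fps or native_fps
    else:
        target_fps = MODE_FPS.get(mode, 10.0)
    step = max(1, round(native_fps / target_fps))   # keep every `step`-th frame
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames
```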
Step 320, performing frame-level defect detection on the plurality of image frames to be detected, so as to obtain detection results of each image frame.
First, frame-level defect detection means performing pipeline defect detection with a single image frame of the pipeline video as the detection object. Compared with video-level defect detection, it eliminates the interference and errors that adjacent frames may introduce, making pipeline defect detection more reliable.
Second, the detection result indicates at least one of: whether the pipeline is defective, whether the image capture scene is inside the pipeline, the defect category and its confidence, the defect grade of the defect category, and the defect mask and its confidence.
In one possible implementation, defect categories represent different kinds of defects, which may include rupture, deformation, cracking, dislocation, disjointed joints, leakage, corrosion, detached rubber rings, hidden branch pipe connections, foreign object intrusion, and so on.
In one possible implementation, the defect grade of a defect category represents the severity of the defect. The grade may range from level 1 to level 5, with a higher grade indicating a more severe defect.
In one possible implementation, the confidence of the defect class may be any value within 0-1. The confidence of the defect type indicates the probability of the defect type predicted by the image frame to be detected, and it can be understood that the greater the confidence of the defect type, the more reliable the defect type predicted by the image frame to be detected.
In one possible implementation, the confidence level of the defect mask may be any value within 0-1. The defect mask is a detection frame for marking a region with a defect in an image frame to be detected, and it can be understood that the greater the confidence of the defect mask is, the more reliable the region with the defect in the image frame to be detected marked by the detection frame is.
Of course, in other embodiments, based on the defect type and its confidence and the defect mask and its confidence, the detection result may also be used to indicate a maximum confidence of the defect type, a total confidence of all defect types, a number of defect masks, a maximum confidence of the defect mask, a total confidence of all defect masks, and so on, which are not particularly limited herein.
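For illustration, the per-frame detection result described above can be held in a small record such as the sketch below; the class and field names are assumptions rather than terms from this application, and the derived properties correspond to the aggregate quantities (number of masks, maximum and total confidences) mentioned in this paragraph.

```python
from dataclasses import dataclass, field

@dataclass
class FrameDetection:
    """Illustrative container for one frame's detection result."""
    frame_index: int
    defect_conf: float = 0.0        # confidence that the pipeline is defective at all
    in_pipeline_conf: float = 0.0   # confidence that the scene is inside the pipeline
    categories: list = field(default_factory=list)       # e.g. ["TL", "BX"]
    category_confs: list = field(default_factory=list)   # one confidence per category
    category_grades: list = field(default_factory=list)  # severity grade 1-5 per category
    mask_confs: list = field(default_factory=list)       # one confidence per defect mask

    @property
    def num_masks(self):
        return len(self.mask_confs)

    @property
    def max_mask_conf(self):
        return max(self.mask_confs, default=0.0)

    @property
    def max_category_conf(self):
        return max(self.category_confs, default=0.0)

    @property
    def total_category_conf(self):
        return sum(self.category_confs)
```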
And 330, screening and/or de-duplicating each image frame according to the detection result of each image frame to obtain a key frame.
Specifically, to reduce the review workload of inspectors, the detected image frames need to be screened and de-duplicated, and only the image frames that satisfy the screening and de-duplication conditions are retained as key frames.
And step 340, outputting the detection result of the key frame.
For different application scenarios and requirements, the output modes for the key frames detected in the pipeline video and their detection results include, but are not limited to, image output, video output, and text output. Unlike the related art, in which the limited computing power of the hardware restricts the available output modes, here different output modes can be selected according to the actual needs of the application scenario and/or the computing power of the hardware, which greatly improves the adaptability and versatility of the method.
In one possible implementation, the image output refers to marking the corresponding detection result in the key frame, and outputting the marked key frame.
In one possible implementation manner, video output refers to marking the key frames in the pipeline video based on the detection results of the key frames, and outputting the marked pipeline video.
In one possible implementation, the detection result of the key frame is output in a text manner. Wherein the text content may include at least one of: the clock position of the key frame in the pipeline video, whether the pipeline is defect-free, whether the image shooting scene is in the pipeline, the defect type and the confidence thereof, the maximum confidence of the defect type, the total confidence of all the defect types, the defect level of the defect type, the number of defect masks, the confidence of each defect mask, the maximum confidence of the defect mask, the total confidence of all the defect masks and the like.
In an exemplary embodiment, FIG. 4(a) is a schematic diagram of an image frame to be detected in which the pipeline is defective. When the detection result of the key frame is output in image form, as shown in FIG. 4(b), four predicted defect categories and their related information are displayed. The four categories are fall-off (TL), deformation (BX), rupture (PL), and foreign object intrusion (CR); their confidences are 0.77, 0.14, 0.09, and 0.08, and their defect grades are 4, 1, and 1, respectively. The defect masks corresponding to the defects are expressed as the clock positions of the defects within the image frame, namely 1002, 0403, and 1002, where 1002 denotes the sector between the ten o'clock and two o'clock positions and 0403 denotes the sector between the four o'clock and three o'clock positions. For whether the image capture scene is inside the pipeline, (0.71000004, out) indicates a 71% probability that the scene is outside the pipeline and (0.29, int) indicates a 29% probability that it is inside. For whether the pipeline is defective, (0.42, normal) indicates a 42% probability that the pipe is defect-free and (0.58, defect) indicates a 58% probability that it is defective.
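As a small illustration of the text output mode, the following sketch renders one key frame's detection result (using the hypothetical FrameDetection container sketched earlier) as a single report line; the layout and field labels are assumptions, since the text does not prescribe an exact report format.

```python
def format_report_line(timestamp_s, det):
    """Render one key frame's detection result as a line of text (illustrative format)."""
    parts = [
        f"t={timestamp_s:.1f}s",
        f"defect={det.defect_conf:.2f}",
        f"in_pipeline={det.in_pipeline_conf:.2f}",
    ]
    for cls, conf, grade in zip(det.categories, det.category_confs, det.category_grades):
        parts.append(f"{cls}:{conf:.2f}(grade {grade})")
    return "  ".join(parts)

# Example output for a frame like the one in FIG. 4 (timestamp arbitrary):
# "t=12.0s  defect=0.58  in_pipeline=0.29  TL:0.77(grade 4)  BX:0.14(grade 1)  ..."
```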
Through the above process, defect identification in pipeline video is carried out automatically with accurate localization in time and space, a large amount of redundancy is removed by screening and de-duplication, and the review workload of inspectors is reduced. The various result presentation modes (video, image, and text) reflect the defect conditions inside the pipeline more intuitively. In addition, the adjustable input frame rate accommodates the computing power of different devices and the user's trade-off between performance and accuracy.
In an exemplary embodiment, frame-level defect detection of an image frame is based on a defect detection model, i.e., a deep learning model that has been trained to perform frame-level defect detection on image frames. The deep learning model may be any of FPN (Feature Pyramid Network), C3D, or TimeSformer, which is not limited here.
In one possible implementation, the training process of the defect detection model includes the following steps: key frames that contain defects and carry defect labels are fed into the deep learning model as training images for defect detection; a loss value is computed from the prediction accuracy between the detection results on the training images and their defect labels; and, based on the loss value and a model convergence condition, it is determined whether the deep learning model has converged, thereby obtaining the defect detection model. The convergence condition can be set flexibly according to the actual needs of the application scenario; for example, it may be that the prediction accuracy between the detection results and the defect labels reaches 95%, or that the number of training iterations reaches a set threshold. Through this training process, a defect detection model capable of frame-level defect detection on image frames is obtained.
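The following is a hedged sketch of such a training loop in PyTorch; the dataset, the classification-only loss, and the hyper-parameters are placeholders because this application does not fix them, and a full implementation would also supervise the defect masks and grades.

```python
import torch
from torch.utils.data import DataLoader

def train_defect_detector(model, dataset, epochs=50, lr=1e-4, acc_target=0.95, device="cpu"):
    """Train until the category-prediction accuracy reaches acc_target or epochs run out."""
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()       # defect-category loss only, for brevity
    model.to(device).train()
    for _ in range(epochs):
        correct, total = 0, 0
        for images, labels in loader:             # labelled defective key frames
            images, labels = images.to(device), labels.to(device)
            logits = model(images)
            loss = criterion(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            correct += (logits.argmax(dim=1) == labels).sum().item()
            total += labels.numel()
        if total and correct / total >= acc_target:   # convergence condition from the text
            break
    return model
```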
The following describes the defect detection at the frame level based on the defect detection model:
referring to fig. 5, in an exemplary embodiment, step 320 may include the steps of:
step 321, extracting multidimensional features of the plurality of image frames to be detected, and obtaining multidimensional feature information of each image frame.
Step 322, according to the multi-dimensional characteristic information of each image frame, detecting the pipeline defect type of each image frame to obtain the detection result of each image frame.
Specifically, the multi-dimensional feature extraction performed on the image frames to be detected predicts the possible defects of different categories and their related information, yielding the multi-dimensional feature information of each image frame. This feature information describes, but is not limited to: whether the pipeline is defective, whether the image capture scene is inside the pipeline, the defect category and its confidence, the defect grade of the defect category, and the defect mask and its confidence.
After the multi-dimensional feature information of each image frame is obtained, the potentially defective regions in each frame are localized and classified based on that information, finally yielding the detection result of each frame. The detection results include, but are not limited to: whether the pipeline is defective, whether the image capture scene is inside the pipeline, the defect category and its confidence, the defect grade of the defect category, and the defect mask and its confidence. Localization marks the potentially defective regions in the image frame with detection boxes, yielding the defect masks and their confidences; classification predicts the defect category of each region marked by a detection box, yielding the defect categories and their confidences.
Compared with the defect detection of the video level, the defect detection of the frame level eliminates the interference and error possibly brought by the adjacent frames, so that the accuracy of the pipeline defect detection is more reliable.
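To make the localize-and-classify step concrete, the sketch below uses an off-the-shelf Mask R-CNN with an FPN backbone as a stand-in for the defect detection model; this choice, the class list, and the use of the instance score as a mask-confidence proxy are assumptions, since the text only names FPN, C3D, and TimeSformer as candidate models.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

# Illustrative category list (codes follow the FIG. 4 example); index 0 is background.
DEFECT_CLASSES = ["background", "TL", "BX", "PL", "CR"]

model = maskrcnn_resnet50_fpn(num_classes=len(DEFECT_CLASSES))
model.eval()

@torch.no_grad()
def detect_frame(frame_bgr):
    """Localize and classify defects in one sampled frame (H x W x 3 BGR ndarray)."""
    image = to_tensor(frame_bgr[:, :, ::-1].copy())   # BGR -> RGB tensor in [0, 1]
    output = model([image])[0]                        # dict with boxes, labels, scores, masks
    return {
        "categories": [DEFECT_CLASSES[i] for i in output["labels"].tolist()],
        "category_confs": output["scores"].tolist(),
        "mask_confs": output["scores"].tolist(),      # instance score used as a mask-confidence proxy
        "masks": output["masks"],                     # one soft mask per detected defect
    }
```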
In an exemplary embodiment, the filtering processing for each image frame in step 330 according to the detection result of each image frame may include at least one of the following processing manners:
For each image frame, the confidence of each defect category and of each defect mask is determined from the frame's detection result, and from these the number of defect masks, the maximum defect mask confidence, the maximum defect category confidence, and the total confidence over all defect categories are determined;
1. If the detection result of the image frame indicates that the pipeline is defective and the maximum defect category confidence is greater than the first confidence threshold, the image frame is taken as a key frame.
Specifically, if the detection result of the image frame indicates that the pipeline is defective, it is further judged whether the maximum defect category confidence in the frame exceeds the first confidence threshold; if so, the frame is retained as a key frame, and if not, it is removed.
For example, suppose an image frame contains two defects. If the confidences of both defects' categories are below the first confidence threshold, the frame is removed; conversely, if the category confidence of either defect exceeds the first confidence threshold, the frame is retained as a key frame.
2. If the number of defect masks in the image frame is greater than a set number and the maximum defect mask confidence is greater than the second confidence threshold, the image frame is taken as a key frame.
Specifically, if the detection result of the image frame indicates that the number of defect masks exceeds the set number and the maximum defect mask confidence exceeds the second confidence threshold, the frame is retained as a key frame; otherwise it is removed.
For example, suppose an image frame contains three defects whose defect mask confidences are 0.1, 0.2, and 0.4. If 0.4 exceeds the second confidence threshold, the frame is retained as a key frame; otherwise, if 0.4 is below the second confidence threshold, the frame is removed.
3. If the total confidence over all defect categories in the image frame is greater than the third confidence threshold, the image frame is taken as a key frame.
Specifically, the confidence of each defect category is determined from the frame's detection result, and the total confidence over all defect categories is obtained from them. If this total exceeds the third confidence threshold, the frame is retained as a key frame; otherwise it is removed.
For example, suppose an image frame contains three defects whose category confidences are 0.1, 0.3, and 0.3, so the total confidence over the defect categories is 0.7. If 0.7 exceeds the third confidence threshold, the frame is retained as a key frame; otherwise, if 0.7 is below the third confidence threshold, the frame is removed.
It should be noted that the numbers of defects and the confidence thresholds listed in this embodiment can be set flexibly according to the actual needs of the application scenario and are not limited here. Likewise, the screening methods listed above need not be applied independently and may be freely combined. For example, combining methods 1 and 2: if the number of defect masks in an image frame exceeds the set number but the maximum defect mask confidence does not exceed the fourth confidence threshold, it is further determined whether at least one defect exists in the pipeline and whether the maximum category confidence of that defect exceeds the first confidence threshold, thereby screening the image frames to be detected to obtain the key frames.
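A minimal sketch of the three screening rules, reusing the hypothetical FrameDetection container from the earlier sketch; the threshold values are parameters because the text leaves them to the application scenario, and treating "the detection result indicates the pipeline is defective" as the overall defect confidence exceeding 0.5 is an assumption.

```python
def passes_screening(det, t1=0.2, t2=0.3, t3=0.6, min_masks=2, defect_thresh=0.5):
    """Return True if the frame satisfies any of the three screening rules above."""
    # Rule 1: the pipeline is judged defective and some category is confident enough.
    if det.defect_conf > defect_thresh and det.max_category_conf > t1:
        return True
    # Rule 2: enough defect masks, and the most confident mask exceeds its threshold.
    if det.num_masks > min_masks and det.max_mask_conf > t2:
        return True
    # Rule 3: the summed confidence over all defect categories exceeds its threshold.
    if det.total_category_conf > t3:
        return True
    return False
```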
Further, in an exemplary embodiment, as shown in fig. 6, in step 330, performing the deduplication processing on each image frame according to the detection result of each image frame may include the following steps:
step 331, performing traversal on each image frame, and taking the traversed image frame as a current image frame.
Step 332, determining the defect type and the confidence level of the current image frame based on the detection result of the current image frame.
Step 333, if the interval between the current image frame and the previous image frame is shorter than the set time and the two frames share a defect category whose confidence exceeds the fourth confidence threshold, the current image frame is retained.
For example, for an image frame S1 in the pipeline video, if the interval between S1 and the previous image frame S2 is shorter than the set time, whether S1 can be retained is further judged from the defect categories and confidences of S1 and of S2. Specifically, the defect categories whose confidence exceeds the fourth confidence threshold are first determined for S1 and for S2; it is then judged whether these categories are the same for the two frames. If they are the same, S1 is retained; otherwise, S1 is removed. The set time can be adjusted flexibly according to the actual needs of the application scenario, which is not limited here.
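A minimal sketch of this de-duplication rule, again using the hypothetical FrameDetection container; the 5-second window and 0.2 threshold are defaults borrowed from the example flow below, keeping the first candidate is an assumption, and fps denotes the sampling rate chosen in step 310.

```python
def deduplicate(candidates, max_gap_s=5.0, conf_thresh=0.2, fps=1.0):
    """Keep a frame only if it is close in time to the previous candidate and
    shares a defect category whose confidence exceeds conf_thresh with it."""
    if not candidates:
        return []
    kept = [candidates[0]]   # the first candidate has no predecessor; kept by assumption
    for prev, cur in zip(candidates, candidates[1:]):
        gap = (cur.frame_index - prev.frame_index) / fps
        cur_cls = {c for c, s in zip(cur.categories, cur.category_confs) if s > conf_thresh}
        prev_cls = {c for c, s in zip(prev.categories, prev.category_confs) if s > conf_thresh}
        if gap < max_gap_s and cur_cls & prev_cls:
            kept.append(cur)
    return kept
```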
It should be noted that the execution order of the screening and de-duplication processes is not limited to the one given in this embodiment: screening may be performed before de-duplication, de-duplication may be performed before screening, or the two may be carried out together, and this embodiment places no particular limitation on the order.
As shown in fig. 7, in one possible implementation, the filtering process and the deduplication process of the key frame may include the following steps:
step 1, if the number of the defect masks in the image frame is greater than 2 and the maximum confidence coefficient of the defect masks exceeds 0.2, executing step 3, and judging whether the maximum confidence coefficient of the defect masks exceeds 0.3; otherwise, step 2 is performed.
Step 2, if at least one defect exists in the pipeline and the maximum confidence coefficient of the defects exceeds 0.2, executing step 5; otherwise, the image frame is deleted.
Step 3, if the maximum confidence coefficient of the defect mask exceeds 0.3, executing step 5, and further performing de-duplication processing on the image frame; otherwise, step 4 is performed.
Step 4, if the total confidence coefficient of the defect class exceeds 0.6, executing step 5, and further performing de-duplication processing on the image frame; otherwise, the image frame is deleted.
Step 5, if the time interval between the image frame and the previous image frame is less than 5 seconds and the confidence of the two is greater than 0.2 and the defect types are the same, the image frame is reserved as a key frame; otherwise, the image frame is deleted.
Through the cooperation of the above embodiments, multi-dimensional screening and de-duplication of the image frames to be detected are achieved, pipeline defects are accurately localized in time and space, and both heavy redundancy in the final output and the failure to find the key frames are avoided.
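As a usage example, the helper below chains the two earlier sketches with the concrete thresholds from the example flow above (more than 2 masks, confidences of 0.2, 0.3, and 0.6, and a 5-second window); the exact branching of steps 1 to 5 is simplified here into an OR over the three screening rules.

```python
def select_key_frames(detections, fps=1.0):
    """Apply the example flow: screen with the concrete thresholds, then de-duplicate."""
    candidates = [d for d in detections
                  if passes_screening(d, t1=0.2, t2=0.3, t3=0.6, min_masks=2)]
    return deduplicate(candidates, max_gap_s=5.0, conf_thresh=0.2, fps=fps)
```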
Referring to FIG. 8, in one application scenario, the defect detection apparatus may be deployed in an image capturing device, in various electronic devices that run it as an application or are equipped with a chip having defect detection capability, or in a cloud computing center, cloud platform, or cloud service with defect detection capability.
In fig. 8, the defect detecting device includes a video input module, a video feature extraction module, an analysis and processing module, and a result output module.
Specifically, the video input module first reads the pipeline video and, depending on the input mode selected by the user, samples its image frames at 1 frame per second, 10 frames per second, the native video frame rate, or a user-specified frame rate, forming an image frame sequence that contains the image frames to be detected.
The image frame sequence is then fed into the video feature extraction module, which extracts frame-level multi-dimensional features from each image frame and, based on those features, predicts whether each frame contains defects and which categories they belong to, yielding the detection result of each frame. The video feature extraction module consists of a defect detection model trained from a deep learning model. The extracted multi-dimensional features describe, but are not limited to: whether the image capture scene of each frame is inside the pipeline, whether the pipeline is defective, the defect category and its confidence, the defect grade of the defect category, and the defect mask and its confidence. Accordingly, the detection results include, but are not limited to: whether the pipeline is defective, whether the image capture scene is inside the pipeline, the defect category and its confidence, the defect grade of the defect category, and the defect mask and its confidence.
The detection results of the frames are then fed into the analysis and processing module, which analyzes the frame-level results, screens out key frames accordingly, and de-duplicates them.
Finally, the de-duplicated key frames and their detection results are fed into the result output module, which annotates the key frames, and the key frames within the pipeline video, with information such as the defect category, the defect grade, the confidence of the defect category, the defect mask, and the clock position of the defective key frame in the pipeline video, or stores this information as result text, and then outputs the annotated key frames, the annotated pipeline video, and the result text.
In this application scenario, frame-level defect detection is performed on the pipeline video automatically, with accurate localization in time and space; a large amount of redundancy is removed by screening and de-duplication, reducing the review workload of inspectors. The various result presentation modes (video, image, and text) reflect the defect conditions inside the pipeline more intuitively, so the pipeline defect detection method provided in this application scenario has a wide range of applications and adapts well to pipeline systems such as urban drainage pipelines, oil and gas pipelines, water supply and conveyance pipelines, and power system conduits. Moreover, the adjustable frame rate accommodates the computing power of different hardware devices and the user's trade-off between performance and accuracy, so the method can be carried on various platforms, which improves its versatility.
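Tying the earlier sketches together, the following end-to-end sketch mirrors the four-module flow described above (video input, feature extraction, analysis and processing, and result output in text mode); it reuses the hypothetical helpers defined in the previous sections, and the constant defect grade reflects that the stand-in detector has no grade or scene-classification head.

```python
def run_pipeline_inspection(video_path, mode="balance", fps=1.0):
    """Video input -> frame-level detection -> screening and de-duplication -> text output."""
    frames = sample_frames(video_path, mode=mode)            # video input module
    detections = []
    for idx, frame in enumerate(frames):                     # video feature extraction module
        result = detect_frame(frame)
        detections.append(FrameDetection(
            frame_index=idx,
            categories=result["categories"],
            category_confs=result["category_confs"],
            category_grades=[1] * len(result["categories"]), # no grade head in the stand-in model
            mask_confs=result["mask_confs"],
            # defect_conf / in_pipeline_conf left at their 0.0 defaults (no such heads either)
        ))
    key_frames = select_key_frames(detections, fps=fps)      # analysis and processing module
    for det in key_frames:                                   # result output module (text mode)
        print(format_report_line(det.frame_index / fps, det))
    return key_frames
```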
The following is an apparatus embodiment of the present application, which may be used to perform the pipeline defect detection method of the present application. For details not disclosed in the apparatus embodiment, please refer to the method embodiments of the pipeline defect detection method of the present application.
Referring to fig. 9, an embodiment of the present application provides a pipe defect detection apparatus 600, including but not limited to: an image acquisition module 610, a defect detection module 620, a screening deduplication module 630, and a result output module 640.
The image acquisition module 610 is configured to acquire a pipeline video, and sample the pipeline video to obtain a plurality of image frames to be detected.
The defect detection module 620 is configured to perform frame-level defect detection on a plurality of image frames to be detected, so as to obtain a detection result of each image frame. The detection result is used for indicating at least one of whether the pipeline is defect-free, whether an image shooting scene is in the pipeline, defect types and the confidence thereof, defect levels of the defect types, defect masks and the confidence thereof.
The filtering and de-duplication module 630 is configured to perform filtering and/or de-duplication processing on each image frame according to the detection result of each image frame, so as to obtain a key frame.
And a result output module 640, configured to output a detection result of the key frame.
It should be noted that when the pipeline defect detection apparatus provided in the above embodiment performs pipeline defect detection, the division into the above functional modules is only an example; in practical applications, the above functions may be assigned to different functional modules as needed, i.e., the internal structure of the pipeline defect detection apparatus may be divided into different functional modules to complete all or part of the functions described above.
In addition, the pipe defect detection apparatus and the pipe defect detection method provided in the foregoing embodiments belong to the same concept, and the specific manner in which each module performs the operation has been described in detail in the method embodiment, which is not repeated herein.
Fig. 10 is a schematic diagram showing a structure of an electronic device according to an exemplary embodiment. The electronic device is suitable for use at the server side 170 in the implementation environment shown in fig. 1.
It should be noted that the electronic device is just one example adapted to the present application, and should not be construed as providing any limitation to the scope of use of the present application. Nor should the electronic device be construed as necessarily relying on or necessarily having one or more of the components of the exemplary electronic device 2000 illustrated in fig. 10.
The hardware structure of the electronic device 2000 may vary widely depending on configuration or performance. As shown in FIG. 10, the electronic device 2000 includes: a power supply 210, an interface 230, at least one memory 250, and at least one central processing unit (CPU) 270.
Specifically, the power supply 210 is configured to provide an operating voltage for each hardware device on the electronic device 2000.
The interface 230 includes at least one wired or wireless network interface 231 for interacting with external devices, for example the image acquisition device 130 and the server side 170 in the implementation environment shown in FIG. 1.
Of course, in other examples of adaptation of the present application, the interface 230 may further include at least one serial-parallel conversion interface 233, at least one input-output interface 235, and at least one USB interface 237, as shown in fig. 10, which is not specifically limited herein.
The memory 250 may be a carrier for storing resources, such as a read-only memory, a random access memory, a magnetic disk, or an optical disk. The resources stored thereon include an operating system 251, application programs 253, and data 255, and the storage mode may be transient or permanent.
The operating system 251 is used for managing and controlling the hardware devices and the application programs 253 on the electronic device 2000, so as to enable the central processing unit 270 to operate on and process the mass data 255 in the memory 250; it may be Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
The application program 253 is a computer program that performs at least one specific task on the basis of the operating system 251 and may include at least one module (not shown in fig. 10), each of which may contain a computer program for the electronic device 2000. For example, the pipeline defect detection apparatus may be regarded as an application program 253 deployed on the electronic device 2000.
The data 255 may be photographs or images stored on a magnetic disk, or may be a pipeline video or the like, and is stored in the memory 250.
The central processing unit 270 may include one or more processors and is configured to communicate with the memory 250 through at least one communication bus, so as to read the computer programs stored in the memory 250 and thereby operate on and process the mass data 255 in the memory 250. For example, the pipeline defect detection method is implemented by the central processing unit 270 reading a series of computer programs stored in the memory 250.
Furthermore, the present application can be realized by hardware circuitry or by a combination of hardware circuitry and software, and thus, the implementation of the present application is not limited to any specific hardware circuitry, software, or combination of the two.
Referring to fig. 11, an embodiment of the present application provides an electronic device 4000, which may be a desktop computer, a notebook computer, a server, or the like.
In fig. 11, the electronic device 4000 includes at least one processor 4001, at least one communication bus 4002, and at least one memory 4003.
The processor 4001 is connected to the memory 4003, for example, via the communication bus 4002. Optionally, the electronic device 4000 may further include a transceiver 4004, which may be used for data interaction between this electronic device and other electronic devices, such as sending and/or receiving data. It should be noted that, in practical applications, the number of transceivers 4004 is not limited to one, and the structure of the electronic device 4000 does not constitute a limitation on the embodiments of the present application.
The processor 4001 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various exemplary logical blocks, modules, and circuits described in connection with the present disclosure. The processor 4001 may also be a combination that implements computing functionality, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The communication bus 4002 may include a path for transferring information between the above-mentioned components. The communication bus 4002 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in FIG. 11, but this does not mean that there is only one bus or only one type of bus.
The memory 4003 may be, but is not limited to, a ROM (Read-Only Memory) or other type of static storage device capable of storing static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device capable of storing information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The memory 4003 has stored thereon a computer program, and the processor 4001 reads the computer program stored in the memory 4003 through the communication bus 4002.
The computer program, when executed by the processor 4001, implements the pipe defect detection method in the above embodiments.
Further, in the embodiments of the present application, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the pipe defect detection method in the above embodiments.
In an embodiment of the present application, a computer program product is provided, which includes a computer program stored in a storage medium. The processor of the computer device reads the computer program from the storage medium, and the processor executes the computer program so that the computer device executes the pipe defect detection method in the above embodiments.
Compared with the related art, on the one hand, the present application realizes automatic defect detection and recognition for pipeline videos while accurately locating defects in both time and space, and reduces the workload of inspection personnel during review by removing a large amount of redundancy through screening and de-duplication; on the other hand, the defect condition of the pipeline is reflected more intuitively through multiple result presentation modes, namely video, image, and text. In addition, by extracting frames from the pipeline video at different frequencies, the computing-power requirements of different devices and the user's trade-off between performance and accuracy are both accommodated. Meanwhile, the pipeline defect detection approach provided by the present application is applicable to various pipelines, such as urban drainage pipelines, oil and gas transportation pipelines, water supply and conveyance pipelines, and electric power system pipelines, and can be deployed on various platforms, such as image acquisition devices, chips, software, and cloud services.
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include a plurality of sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps, or with at least a portion of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present application. It should be noted that a person skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications shall also fall within the protection scope of the present application.

Claims (10)

1. A method of detecting a pipe defect, the method comprising:
acquiring a pipeline video, and sampling the pipeline video to obtain a plurality of image frames to be detected;
performing frame-level defect detection on the plurality of image frames to be detected respectively to obtain a detection result of each image frame, wherein the detection result is used for indicating at least one of: whether a pipeline is defect-free, whether an image shooting scene is in the pipeline, a defect type and the confidence thereof, a defect level of the defect type, and a defect mask and the confidence thereof;
according to the detection result of each image frame, screening and/or de-duplication processing is carried out on each image frame to obtain a key frame;
and outputting the detection result of the key frame.
2. The method of claim 1, wherein the sampling the pipeline video to obtain a plurality of image frames to be detected comprises:
responding to selection operation for different input modes, and acquiring a frame rate matched with the selected input mode;
and sampling the image frames in the pipeline video according to the acquired frame rate to obtain a plurality of image frames to be detected.
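As a minimal illustrative sketch, the frame-rate-matched sampling described above could be implemented with OpenCV roughly as follows; the input modes and the frame-rate values in MODE_TO_FPS are assumptions for illustration, not values taken from this application.

import cv2

# Hypothetical mapping from the selected input mode to a sampling rate in frames per second.
MODE_TO_FPS = {"fast": 0.5, "balanced": 1.0, "accurate": 5.0}

def sample_video(video_path: str, input_mode: str = "balanced"):
    """Sample the pipeline video at the frame rate matched to the selected input mode."""
    target_fps = MODE_TO_FPS[input_mode]
    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or 25.0      # fall back if the container reports 0
    step = max(int(round(native_fps / target_fps)), 1)  # keep every `step`-th frame

    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            frames.append((index, frame))               # image frames to be detected
        index += 1
    cap.release()
    return frames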
3. The method of claim 1, wherein performing frame-level defect detection on the plurality of image frames to be detected to obtain a detection result of each image frame includes:
respectively extracting multidimensional features of a plurality of image frames to be detected to obtain multidimensional feature information of each image frame;
and respectively predicting the pipeline defect type of each image frame according to the multidimensional feature information of each image frame to obtain the detection result of each image frame.
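Purely for illustration, the two steps above (multidimensional feature extraction followed by defect-type prediction) could be sketched in PyTorch as follows; the backbone, input size, and number of defect types are assumptions, and the model would still have to be trained as a deep learning model in the sense of claim 4.

import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_DEFECT_TYPES = 7   # hypothetical number of defect types

class FrameDefectClassifier(nn.Module):
    """Extract multidimensional features from one image frame, then predict defect types."""

    def __init__(self, num_types: int = NUM_DEFECT_TYPES):
        super().__init__()
        backbone = models.resnet18()                          # illustrative feature extractor
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.head = nn.Linear(backbone.fc.in_features, num_types)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x).flatten(1)                   # multidimensional feature information
        return self.head(feats)                               # per-type logits

preprocess = transforms.Compose([
    transforms.ToTensor(),            # HxWxC uint8 array -> CxHxW float tensor in [0, 1]
    transforms.Resize((224, 224)),
])

def detect_frame(model: FrameDefectClassifier, frame) -> dict:
    """Return per-type confidences for a single RGB image frame."""
    model.eval()
    with torch.no_grad():
        logits = model(preprocess(frame).unsqueeze(0))
        probs = torch.sigmoid(logits).squeeze(0)              # treat as multi-label confidences
    return {f"type_{i}": float(p) for i, p in enumerate(probs)}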
4. The method of claim 1, wherein the frame-level defect detection is implemented based on a defect detection model that is a trained deep learning model having the ability to perform frame-level defect detection on image frames.
5. The method of claim 1, wherein said filtering each of said image frames based on the detection result of each of said image frames comprises at least one of:
for each image frame, determining the confidence of each defect category and the confidence of each defect mask based on the detection result of the image frame, and determining the number of defect masks, the maximum confidence of the defect categories, the maximum confidence of the defect masks, and the total confidence of all defect categories based on the confidence of each defect category and the confidence of each defect mask;
if the detection result of the image frame indicates that the pipeline has a defect and the maximum confidence of the defect categories is greater than a first confidence threshold, taking the image frame as a key frame;
if the number of defect masks of the image frame is greater than a set number and the maximum confidence of the defect masks is greater than a second confidence threshold, taking the image frame as a key frame;
and if the total confidence of all defect categories in the image frame is greater than a third confidence threshold, taking the image frame as a key frame.
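A minimal sketch of the three screening rules above; the threshold values, the set number of masks, and the result field names are placeholders rather than values taken from this application.

def is_key_frame(result: dict,
                 first_thr: float = 0.8,     # first confidence threshold (placeholder)
                 second_thr: float = 0.7,    # second confidence threshold (placeholder)
                 third_thr: float = 2.0,     # third confidence threshold (placeholder)
                 min_masks: int = 2) -> bool:
    """Return True if any of the three screening rules marks the frame as a key frame."""
    class_confs = result.get("defect_class_confidences", {})  # category name -> confidence
    mask_confs = result.get("defect_mask_confidences", [])    # one confidence per defect mask

    max_class_conf = max(class_confs.values(), default=0.0)
    total_class_conf = sum(class_confs.values())
    max_mask_conf = max(mask_confs, default=0.0)

    # Rule 1: the frame shows a defect and its best category confidence is high enough.
    if result.get("has_defect") and max_class_conf > first_thr:
        return True
    # Rule 2: enough defect masks are present and the best mask confidence is high enough.
    if len(mask_confs) > min_masks and max_mask_conf > second_thr:
        return True
    # Rule 3: the summed confidence over all defect categories is high enough.
    return total_class_conf > third_thr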
6. The method of claim 1, wherein said performing a de-duplication process on each of said image frames based on a detection result of each of said image frames comprises:
traversing each image frame, wherein the traversed image frame is used as the current image frame;
determining the defect type of the current image frame and the confidence thereof based on the detection result of the current image frame;
and if the interval between the current image frame and the previous image frame is less than a set time, and the current image frame and the previous image frame have the same defect type with a confidence greater than a fourth confidence threshold, retaining the image frame.
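Under one possible reading of the de-duplication condition above, a sketch could look like this; the time threshold, the fourth confidence threshold, and the dictionary field names are all assumptions.

def deduplicate(key_frames: list,
                min_interval_s: float = 2.0,   # "set time" between frames (placeholder)
                fourth_thr: float = 0.6) -> list:
    """Drop frames that repeat, within the set time, the same high-confidence defect type
    as the previously kept frame, so that only one frame per recurring defect is retained."""
    kept = []
    for frame in key_frames:                   # each item: {"time_s": float, "defect_types": {name: conf}}
        strong = {t for t, p in frame["defect_types"].items() if p > fourth_thr}
        if kept:
            prev = kept[-1]
            prev_strong = {t for t, p in prev["defect_types"].items() if p > fourth_thr}
            if frame["time_s"] - prev["time_s"] < min_interval_s and strong & prev_strong:
                continue                       # same defect seen again shortly after; skip the repeat
        kept.append(frame)
    return kept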
7. The method of any of claims 1 to 6, wherein outputting the detection result of the key frame comprises at least one of:
marking a corresponding detection result in the key frame, and outputting the marked key frame;
labeling the key frames in the pipeline video based on the detection results of the key frames, and outputting the labeled pipeline video;
and outputting the detection result of the key frame in a text mode.
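For illustration, two of the three output modes above (annotated key-frame images and a plain-text report) might be realized with OpenCV as follows; the annotated-video mode is omitted for brevity, and the result field names are hypothetical.

import cv2

def annotate_key_frame(frame, result: dict, out_path: str) -> None:
    """Output mode 1: draw the detection result onto the key frame and save it as an image."""
    best_type = max(result["defect_types"], key=result["defect_types"].get)
    conf = result["defect_types"][best_type]
    cv2.putText(frame, f"{best_type}: {conf:.2f}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 255), 2)
    cv2.imwrite(out_path, frame)

def write_text_report(key_results: list, out_path: str) -> None:
    """Output mode 3: write the key-frame detection results as a plain-text report."""
    with open(out_path, "w", encoding="utf-8") as f:
        for r in key_results:
            types = ", ".join(f"{t} ({p:.2f})" for t, p in r["defect_types"].items())
            f.write(f"frame {r['frame_index']} at {r['time_s']:.1f}s: {types}\n")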
8. A pipe defect detection apparatus, the apparatus comprising:
the image acquisition module is used for acquiring a pipeline video and sampling the pipeline video to obtain a plurality of image frames to be detected;
the defect detection module is used for respectively performing frame-level defect detection on the plurality of image frames to be detected to obtain a detection result of each image frame, wherein the detection result is used for indicating at least one of: whether a pipeline is defect-free, whether an image shooting scene is in the pipeline, a defect type and the confidence thereof, a defect level of the defect type, and a defect mask and the confidence thereof;
the screening and de-duplication module is used for performing screening processing and/or de-duplication processing on each image frame according to the detection result of each image frame to obtain a key frame;
and the result output module is used for outputting the detection result of the key frame.
9. An electronic device, comprising: at least one processor, at least one memory, and at least one communication bus, wherein,
the memory stores a computer program, and the processor reads the computer program in the memory through the communication bus;
the computer program, when executed by the processor, implements the pipe defect detection method of any one of claims 1 to 7.
10. A storage medium having stored thereon a computer program, which when executed by a processor implements the pipe defect detection method according to any one of claims 1 to 7.
CN202310025186.5A 2023-01-09 2023-01-09 Pipeline defect detection method and device, electronic equipment and storage medium Pending CN116416208A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310025186.5A CN116416208A (en) 2023-01-09 2023-01-09 Pipeline defect detection method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310025186.5A CN116416208A (en) 2023-01-09 2023-01-09 Pipeline defect detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116416208A true CN116416208A (en) 2023-07-11

Family

ID=87058777

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310025186.5A Pending CN116416208A (en) 2023-01-09 2023-01-09 Pipeline defect detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116416208A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination