CN116342538A - Method and device for detecting running and leaking, intelligent equipment and storage medium


Info

Publication number
CN116342538A
Authority
CN
China
Prior art keywords
image
target
normal sample
running
monitoring point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310313495.2A
Other languages
Chinese (zh)
Inventor
彭志远
王开雄
徐劲莉
余亚玲
董琼
肖倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Launch Digital Technology Co Ltd
Original Assignee
Shenzhen Launch Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Launch Digital Technology Co Ltd
Priority to CN202310313495.2A
Publication of CN116342538A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T7/001 Industrial image inspection using an image reference approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Abstract

The application is applicable to the technical field of safety monitoring, and provides a method, a device, intelligent equipment and a storage medium for detecting running and leaking. The method is applied to an inspection robot and comprises the following steps: acquiring a target inspection image of a monitoring point and a target recording image of the monitoring point; inputting the target inspection image and the target recording image into a pre-trained neural network model to obtain an image feature change value of the target inspection image relative to the target recording image, wherein the neural network model is a network model trained based on a normal sample image dataset and used for calculating image feature change values, and the normal sample image dataset comprises normal sample images, free of liquid or gas leakage, of each monitoring point; and performing running and leaking detection on the target monitoring point based on the image feature change value. By adopting the method, the generalization capability of the deep learning model can be effectively improved so as to rapidly cope with new scenes, and the efficiency and accuracy of the inspection robot in detecting running and leaking are improved.

Description

Method and device for detecting running and leaking, intelligent equipment and storage medium
Technical Field
The application relates to the technical field of safety monitoring, in particular to a method and a device for detecting running and leaking, intelligent equipment and a storage medium.
Background
In industrial-field operation and maintenance, the detection of running and leaking is one of the most important tasks: timely detection of liquid and gas leakage from industrial equipment can effectively reduce enterprises' production costs and production accidents. However, in current practical operation and maintenance, the detection of running and leaking mainly relies on manual periodic inspection, which is costly, has a low safety factor and is inefficient. With the development of computer vision technology and neural networks, these techniques are gradually being applied to inspection robots, and inspection robots are used to detect running and leaking.
Practical investigation shows that current methods for detecting running and leaking use supervised learning networks: liquid and gas leakage images are collected at the industrial site, and positive and negative leakage samples are constructed to train and fit a deep learning model. However, an industrial site usually cannot collect a large number of leakage samples, so the problems of an insufficient number of samples or unbalanced positive and negative samples arise during training; as a result, the model detects well in scenes it has seen but generalizes poorly to new scenes.
In summary, how to improve the generalization capability of the deep learning model so as to quickly cope with new scenes, and thereby improve the efficiency and accuracy of the inspection robot in detecting running and leaking, is a problem that currently needs to be considered.
Disclosure of Invention
The embodiments of the application provide a method, a device, intelligent equipment and a storage medium for detecting running and leaking, which can effectively improve the generalization capability of a deep learning model so as to rapidly cope with new scenes, and improve the efficiency and accuracy of the inspection robot in detecting running and leaking.
In a first aspect, an embodiment of the present application provides a method for detecting running and leaking, applied to an inspection robot, including:
acquiring a target inspection image of a monitoring point and a target recording image of the monitoring point, wherein the target inspection image is an image shot when the inspection robot currently inspects the monitoring point, and the target recording image is a pre-recorded normal image of the monitoring point;
inputting the target inspection image and the target recording image into a pre-trained neural network model to obtain an image characteristic change value of the target inspection image relative to the target recording image, wherein the neural network model is a network model which is trained based on a normal sample image data set and is used for calculating the image characteristic change value, and the normal sample image data set comprises normal sample images with no liquid or gas leakage at each monitoring point;
and performing running and leaking detection on the target monitoring point based on the image feature change value.
In a possible implementation manner of the first aspect, before the step of inputting the target inspection image and the target recording image into a pre-trained neural network model, the method includes:
the method comprises the steps of constructing a neural network model, wherein the neural network model comprises a feature extraction network and an asymmetric comparison twin network, the feature extraction network is used for extracting image features, and the asymmetric comparison twin network is used for outputting image feature change values after comparing the image features;
acquiring a normal sample image data set, and forming normal sample images of different moments of the same monitoring site in the normal sample image data set into a normal sample pair;
sequentially inputting the normal sample images in the normal sample pair to the feature extraction network, and extracting image features of the normal sample images by using the feature extraction network;
inputting the extracted image characteristics of the normal sample image into the asymmetric comparison twin network to obtain the loss of the normal sample pair;
and based on the loss, iteratively updating the model parameters of the neural network model until the loss meets a preset threshold value, and obtaining the neural network model after training.
In a possible implementation manner of the first aspect, the asymmetric comparison twin network includes a first encoding module, a second encoding module and a projection module, where the first encoding module and the second encoding module share parameters;
the step of inputting the extracted image features of the normal sample image into the asymmetric comparison twin network to obtain the loss of the normal sample pair comprises the following steps:
the extracted image features of the normal sample image are simultaneously input to the first coding module and the second coding module, and a first coding result and a second coding result are obtained;
transforming the first coding result and the second coding result by using the projection module respectively;
matching the second coding result with the transformed first coding result, and matching the first coding result with the transformed second coding result;
and calculating the loss of the normal sample pair according to the matching result.
In a possible implementation manner of the first aspect, the step of calculating the loss of the normal sample pair according to the result of the matching includes:
matching the second coding result with the transformed first coding result according to the following formula:

$D(P_1, E_2) = -\frac{P_1}{\left\|P_1\right\|_2} \cdot \frac{E_2}{\left\|E_2\right\|_2}$

where $P_1$ represents the result output by the projection module after transforming the first coding result, and $E_2$ represents the second coding result;

matching the first coding result with the transformed second coding result according to the following formula:

$D(P_2, E_1) = -\frac{P_2}{\left\|P_2\right\|_2} \cdot \frac{E_1}{\left\|E_1\right\|_2}$

where $P_2$ represents the result output by the projection module after transforming the second coding result, and $E_1$ represents the first coding result;

and calculating the Loss of the normal sample pair according to the following formula:

$\mathrm{Loss} = \frac{1}{2} D(P_1, E_2) + \frac{1}{2} D(P_2, E_1)$
In a possible implementation manner of the first aspect, the step of performing running and leaking detection on the target monitoring point based on the image feature change value includes:
if the image characteristic change value is larger than or equal to a preset change threshold value, determining that the target monitoring point position has running and leaking;
and if the image characteristic change value is smaller than the preset change threshold value, determining that the target monitoring point position does not have running and leaking.
In a possible implementation manner of the first aspect, the image feature change value specifically includes an image feature change value of each pixel point in the target inspection image;
after the step of determining that running and leaking exists at the target monitoring point if the image feature change value is greater than or equal to the preset change threshold, the method further comprises:
positioning and outputting the running and leaking position based on the pixel points whose image feature change values are greater than or equal to the preset change threshold.
In a possible implementation manner of the first aspect, the method for detecting running and leaking further includes:
preprocessing the target recording image to obtain a support data set comprising the target recording image, wherein the support data set is used for amplifying normal sample image data of the monitoring point;
the step of inputting the target inspection image and the target recording image into a pre-trained neural network model to obtain an image feature change value of the target inspection image relative to the target recording image comprises the following steps:
respectively extracting features of the target inspection image and the support data set to obtain a target inspection feature image corresponding to the target inspection image and recording point feature images of different scales corresponding to the support data set;
performing image fusion on the recording point characteristic images with different scales to obtain a target recording point characteristic image;
calculating the Mahalanobis distance between the target inspection feature image and the target recording point feature image;
and determining an image feature change value of the target inspection image relative to the target recording image based on the Mahalanobis distance.
In a second aspect, an embodiment of the present application provides a device for detecting running and leaking, applied to an inspection robot, including:
an image acquisition unit, configured to acquire a target inspection image of a monitoring point and a target recording image of the monitoring point, wherein the target inspection image is an image shot when the inspection robot currently inspects the monitoring point, and the target recording image is a pre-recorded normal image of the monitoring point;
a feature change detection unit, configured to input the target inspection image and the target recording image into a pre-trained neural network model to obtain an image feature change value of the target inspection image relative to the target recording image, wherein the neural network model is a network model trained based on a normal sample image dataset and used for calculating image feature change values, and the normal sample image dataset comprises normal sample images, free of liquid or gas leakage, of each monitoring point;
and a running and leaking detection unit, configured to perform running and leaking detection on the target monitoring point based on the image feature change value.
In a possible implementation manner of the second aspect, the device for detecting running and leaking further includes:
the module construction unit is used for constructing a neural network model, wherein the neural network model comprises a feature extraction network and an asymmetric comparison twin network, the feature extraction network is used for extracting image features, and the asymmetric comparison twin network is used for outputting image feature change values after comparing the image features;
the sample data acquisition unit is used for acquiring a normal sample image data set and forming normal sample images of the same monitoring site in the normal sample image data set at different moments into a normal sample pair;
the model training unit is used for sequentially inputting the normal sample images in the normal sample pair to the feature extraction network, and extracting the image features of the normal sample images by using the feature extraction network; inputting the extracted image characteristics of the normal sample image into the asymmetric comparison twin network to obtain the loss of the normal sample pair; and based on the loss, iteratively updating the model parameters of the neural network model until the loss meets a preset threshold value, and obtaining the neural network model after training.
In a possible implementation manner of the second aspect, the asymmetric comparison twin network includes a first encoding module, a second encoding module and a projection module, where the first encoding module and the second encoding module share parameters; the model training unit includes:
the encoding module is used for inputting the extracted image characteristics of the normal sample image to the first encoding module and the second encoding module at the same time to obtain a first encoding result and a second encoding result;
the transformation processing module is used for transforming the first coding result and the second coding result by utilizing the projection module respectively;
the matching module is used for matching the second coding result with the transformed first coding result, and for matching the first coding result with the transformed second coding result;
and the loss calculation module is used for calculating the loss of the normal sample pair according to the matching result.
In a possible implementation manner of the second aspect, the loss calculation module is specifically configured to:
matching the second coding result with the transformed first coding result according to the following formula:

$D(P_1, E_2) = -\frac{P_1}{\left\|P_1\right\|_2} \cdot \frac{E_2}{\left\|E_2\right\|_2}$

where $P_1$ represents the result output by the projection module after transforming the first coding result, and $E_2$ represents the second coding result;

matching the first coding result with the transformed second coding result according to the following formula:

$D(P_2, E_1) = -\frac{P_2}{\left\|P_2\right\|_2} \cdot \frac{E_1}{\left\|E_1\right\|_2}$

where $P_2$ represents the result output by the projection module after transforming the second coding result, and $E_1$ represents the first coding result;

and calculating the Loss of the normal sample pair according to the following formula:

$\mathrm{Loss} = \frac{1}{2} D(P_1, E_2) + \frac{1}{2} D(P_2, E_1)$
in a possible implementation manner of the second aspect, the running-out drip detection unit includes:
the first detection determining module is used for determining that the target monitoring point position has running and leaking if the image characteristic change value is larger than or equal to a preset change threshold value;
and the second detection and determination module is used for determining that the target monitoring point position does not have running and leaking if the image characteristic change value is smaller than the preset change threshold value.
In a possible implementation manner of the second aspect, the image feature change value specifically includes an image feature change value of each pixel point in the target inspection image; the running and leaking detection unit further includes:
a positioning output module, used for positioning and outputting the running and leaking position based on the pixel points whose image feature change values are greater than or equal to the preset change threshold.
In a possible implementation manner of the second aspect, the device for detecting running and leaking further includes:
a support set acquisition unit, configured to pre-process the target recording image to obtain a support data set including the target recording image, where the support data set is used to amplify normal sample image data of the monitoring point;
the feature change detection unit includes:
the feature extraction module is used for respectively extracting features of the target inspection image and the support data set to obtain target inspection feature images corresponding to the target inspection image and recording point feature images with different scales corresponding to the support data set;
the image fusion module is used for performing image fusion on the recording point feature images of different scales to obtain a target recording point feature image;
the distance calculation module is used for calculating the Mahalanobis distance between the target inspection feature image and the target recording point feature image;
and the change value determination module is used for determining an image feature change value of the target inspection image relative to the target recording image based on the Mahalanobis distance.
In a third aspect, an embodiment of the present application provides an intelligent device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements the method for detecting running and leaking according to the first aspect when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method for detecting running and leaking according to the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product which, when run on an intelligent device, causes the intelligent device to perform the method for detecting running and leaking described in the first aspect above.
In the embodiments of the present application, a target inspection image of a monitoring point and a target recording image of the monitoring point are acquired, and both are input into a pre-trained neural network model to obtain an image feature change value of the target inspection image relative to the target recording image. The neural network model is trained based on a normal sample image dataset and used for calculating image feature change values; the normal sample image dataset comprises normal sample images, free of liquid or gas leakage, of each monitoring point, i.e. the recording images the inspection robot records under normal conditions. Running and leaking detection is then performed on the target monitoring point based on the image feature change value. This scheme adopts an unsupervised network: model training needs only normal sample images, with no need to shoot or manufacture negative sample images on site. The trained neural network model yields the change of the inspection image relative to the recording image, and running and leaking detection is performed based on that change, so detection in a new scene requires no iteration of model parameters, and the continuous investment of manual on-site data collection as new monitoring points are added is avoided. The generalization capability of the deep learning model is thereby effectively improved, new scenes can be coped with conveniently and quickly, and the efficiency and accuracy of the inspection robot in detecting running and leaking are improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for a person skilled in the art, other drawings may be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart of an implementation of the method for detecting running and leaking provided in an embodiment of the present application;
Fig. 2 is a flowchart of a specific implementation of constructing and training the neural network model in the method for detecting running and leaking provided in an embodiment of the present application;
Fig. 3 is a flowchart of a specific implementation of obtaining the loss of the normal sample pair in the method for detecting running and leaking provided in an embodiment of the present application;
Fig. 4 is a schematic structural diagram of the neural network model in the method for detecting running and leaking provided in an embodiment of the present application;
Fig. 5 is a flowchart of a specific implementation of step S102 in the method for detecting running and leaking provided in an embodiment of the present application;
Fig. 6 is a flowchart of a specific implementation of step S103 in the method for detecting running and leaking provided in an embodiment of the present application;
Fig. 7 is a block diagram of the device for detecting running and leaking provided in an embodiment of the present application;
Fig. 8 is a schematic diagram of the intelligent device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" and "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Current deep learning models for detecting running and leaking are all supervised learning networks. A supervised learning network works well when the forms of the detection target can be exhausted, but works poorly when the detection target is a fluid. In running and leaking detection the target is liquid or gas, so training samples cannot exhaust the target's forms. Moreover, an industrial site usually cannot collect many leakage samples, so the problems of an insufficient number of samples or unbalanced positive and negative samples arise during training. The resulting model detects well in scenes it has seen, but in a new scene the model must be retrained iteratively, which is complex to operate, time-consuming, poorly generalizing and inefficient in detection.
In order to solve the above problems, embodiments of the present application provide a method, an apparatus, an intelligent device and a storage medium for detecting running and leaking, which are specifically described below.
The method for detecting running and leaking provided in the embodiments of the present application can be applied to intelligent devices such as an inspection robot; the specific type of the intelligent device is not limited.
Fig. 1 shows the implementation flow of the method for detecting running and leaking provided in the embodiment of the present application. The method flow includes steps S101 to S103, whose specific implementation principles are as follows:
S101: and acquiring a target patrol image of the monitoring point and a target record image of the monitoring point.
The target inspection image is an image shot from the inspection robot to the monitoring point, and the target recording image is a pre-recorded normal image of the monitoring point. The normal image refers to an image in which no liquid or gas leaks.
In some embodiments, the inspection robot records normal images of each monitoring point at different times in advance, acquires inspection time of the target inspection image, and determines the recording point image with the recording time corresponding to the inspection time as the target recording point image.
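A minimal sketch of one plausible matching policy (nearest recording time) is given below; the function and data-structure names are hypothetical, since this embodiment does not fix a concrete correspondence rule:

```python
from datetime import datetime

def select_target_recording_image(recordings: dict[datetime, str],
                                  inspection_time: datetime) -> str:
    """Return the pre-recorded normal image whose recording time is closest
    to the inspection time (nearest-time rule is an assumed policy)."""
    closest = min(recordings, key=lambda t: abs((t - inspection_time).total_seconds()))
    return recordings[closest]
```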
S102: and inputting the target patrol image and the target recording image into a pre-trained neural network model to obtain an image characteristic change value of the target patrol image relative to the target recording image.
The neural network model is a network model which is trained based on a normal sample image data set and is used for calculating an image characteristic change value, wherein the normal sample image data set comprises normal sample images of each monitoring point without liquid or gas leakage.
As a possible implementation manner of the present application, fig. 2 shows a specific implementation flow for constructing and training the neural network model in the embodiment of the present application, which is described in detail below:
A1: and constructing a neural network model, wherein the neural network model comprises a feature extraction network and an asymmetric comparison twin network. The feature extraction network is used for extracting image features, and the asymmetric comparison twin network is used for outputting image feature change values after comparing the image features. The specific structure of the characteristic network and the asymmetric comparison twin network can refer to the prior art.
A2: and acquiring a normal sample image data set, and forming normal sample images of the same monitoring site in the normal sample image data set at different moments into a normal sample pair.
In this embodiment, normal images at different moments are recorded in advance at each monitoring site as normal sample images; a normal image recorded at a given moment at a monitoring site serves as the normal sample image of that monitoring site at that moment. The specific recording moments may be customized, or may be determined from historical data analysis; this embodiment does not specifically limit them.
In some embodiments, one monitoring site corresponds to a plurality of normal sample images at different moments, and the number of normal sample images corresponding to different monitoring sites may be the same or different, which may be specifically determined according to practical applications.
In one possible implementation, the initial number of recording moments for normal sample images is the same at every monitoring site. Historical monitoring data are then acquired; if the historical monitoring data show that the frequency of running and leaking at a first monitoring site is greater than a preset frequency threshold, the recording moments for the normal sample images of the first monitoring site are increased, so that the first monitoring site has more recording moments than the other monitoring sites.
In one possible implementation, the usage duration of the monitoring equipment at each monitoring site is acquired, and the number of recording moments for the normal sample images of each monitoring site is determined based on that usage duration.
A3: and sequentially inputting the normal sample images in the normal sample pair to the feature extraction network, and extracting the image features of the normal sample images by using the feature extraction network.
A4: and inputting the extracted image characteristics of the normal sample image into the asymmetric comparison twin network to obtain the loss of the normal sample pair.
As a possible implementation manner of the present application, the asymmetric comparison twin network includes a first encoding module, a second encoding module and a projection module, where the first encoding module and the second encoding module share parameters. Fig. 3 shows the specific implementation flow of inputting the extracted image features of the normal sample image into the asymmetric comparison twin network to obtain the loss of the normal sample pair in the embodiment of the present application, described in detail below:
B1: and simultaneously inputting the extracted image features of the normal sample image to the first encoding module and the second encoding module to obtain a first encoding result and a second encoding result.
B2: and respectively transforming the first coding result and the second coding result by utilizing the projection module.
B3: and matching the second coding result with the transformed first coding result, and matching the first coding result with the transformed second coding result.
B4: and calculating the loss of the normal sample pair according to the matching result. Specifically, the loss of the normal sample pair is calculated according to the result of matching the second coding result with the transformed first coding result and the result of matching the first coding result with the transformed second coding result.
In one possible embodiment, the second encoding result is matched with the transformed first encoding result according to the following formula (1):

$D(P_1, E_2) = -\frac{P_1}{\left\|P_1\right\|_2} \cdot \frac{E_2}{\left\|E_2\right\|_2} \qquad (1)$

where $P_1$ represents the result output by the projection module after transforming the first encoding result, and $E_2$ represents the second encoding result;

the first encoding result is matched with the transformed second encoding result according to the following formula (2):

$D(P_2, E_1) = -\frac{P_2}{\left\|P_2\right\|_2} \cdot \frac{E_1}{\left\|E_1\right\|_2} \qquad (2)$

where $P_2$ represents the result output by the projection module after transforming the second encoding result, and $E_1$ represents the first encoding result;

and the Loss of the normal sample pair is calculated according to the following formula (3):

$\mathrm{Loss} = \frac{1}{2} D(P_1, E_2) + \frac{1}{2} D(P_2, E_1) \qquad (3)$

Equation (3) is the symmetric loss over the two branches of the twin network.

In one possible embodiment, to avoid the collapse phenomenon of the twin network, a Stop-grad operation is introduced, and the Loss of the normal sample pair is calculated according to the following formula (4):

$\mathrm{Loss} = \frac{1}{2} D\!\left(P_1, \operatorname{stopgrad}(E_2)\right) + \frac{1}{2} D\!\left(P_2, \operatorname{stopgrad}(E_1)\right) \qquad (4)$

In equation (4), $\operatorname{stopgrad}(\cdot)$ treats its argument as a constant, so the encoding results $E_1$ and $E_2$ receive no gradient information.
In the embodiment of the present application, the network model can thus be trained end to end based on the above Loss function using only normal samples; the network is, by nature, an unsupervised neural network.
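Read this way, the matching D is a negative cosine similarity, and Stop-grad corresponds to detaching the encoding result from the computation graph. A brief PyTorch sketch under that reading, with illustrative function names:

```python
import torch
import torch.nn.functional as F

def d_match(p: torch.Tensor, e: torch.Tensor) -> torch.Tensor:
    """D(P, E) = -(P/||P||2) . (E/||E||2); .detach() realises the Stop-grad of formula (4)."""
    return -F.cosine_similarity(p, e.detach(), dim=-1).mean()

def normal_pair_loss(e1: torch.Tensor, e2: torch.Tensor,
                     p1: torch.Tensor, p2: torch.Tensor) -> torch.Tensor:
    """Loss = 1/2 * D(P1, stopgrad(E2)) + 1/2 * D(P2, stopgrad(E1))."""
    return 0.5 * d_match(p1, e2) + 0.5 * d_match(p2, e1)
```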
A5: and based on the loss, iteratively updating the model parameters of the neural network model until the loss meets a preset threshold value, and obtaining the neural network model after training.
Illustratively, fig. 4 shows a schematic structural diagram of the neural network model in an embodiment of the present application. The first part is the feature extraction network, which extracts high-level image features and is formed by alternately stacking convolution modules and spatial transformation modules. The convolution modules extract rich image features from low to high dimensions. A spatial transformation module is a differentiable module with learnable internal parameters: it lets the convolutional network learn shape transformations of the data, so that an image subjected to operations such as translation, rotation, scaling and cropping yields the same detection result as before the transformation, and it can be inserted as an independent module at any position in the network. The feature extraction network therefore has spatial invariance to the input image while extracting image semantic features.
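The embodiment does not pin down the spatial transformation module's internals; one common construction consistent with the properties just listed is an affine spatial transformer in the style of Jaderberg et al., sketched here with assumed layer sizes:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformModule(nn.Module):
    """Differentiable affine warp with learnable internal parameters, insertable
    between convolution modules (an assumed realisation, not the patent's own)."""

    def __init__(self, channels: int):
        super().__init__()
        self.localisation = nn.Sequential(
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(channels * 8 * 8, 32), nn.ReLU(),
            nn.Linear(32, 6),  # the six parameters of a 2x3 affine matrix
        )
        # Initialise to the identity transform for stable training.
        self.localisation[-1].weight.data.zero_()
        self.localisation[-1].bias.data.copy_(torch.tensor([1.0, 0.0, 0.0, 0.0, 1.0, 0.0]))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        theta = self.localisation(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)
```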
The second part is the asymmetric twin network used for model parameter optimization. It mainly consists of encoding modules and a projection module, each composed of simple residual blocks. The image features of a pair of normal samples extracted by the feature extraction network are input into a pair of parameter-shared encoding modules of the twin network, and the projection module transforms the result of one branch and matches it against the result of the other branch.
In the embodiment of the present application, normal sample images collected on site by the inspection robot form the normal sample image dataset, and normal sample images of the same monitoring point at different moments form normal sample pairs. The images of each normal sample pair are sequentially input into the backbone feature extraction network to extract the image features of the normal samples; the output is fed into the asymmetric twin network to obtain the loss of the normal sample pair; the gradient is propagated back to the backbone feature extraction network and the network parameters are iteratively updated. Finally the backbone feature extraction network and its parameters are saved, completing the training of the neural network model.
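That training procedure might be sketched as follows, reusing ChangeDetectionModel and normal_pair_loss from the sketches above; the optimizer choice and hyperparameters are assumptions, as the embodiment fixes none:

```python
import torch

def train_on_normal_pairs(model, pair_loader, epochs: int = 100, lr: float = 1e-3):
    """End-to-end, unsupervised training on normal sample pairs only."""
    optimiser = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for img_a, img_b in pair_loader:  # same monitoring point, different recording moments
            e1, e2, p1, p2 = model(img_a, img_b)
            loss = normal_pair_loss(e1, e2, p1, p2)
            optimiser.zero_grad()
            loss.backward()  # reverse gradient into the backbone feature extraction network
            optimiser.step()
    torch.save(model.state_dict(), "backbone_and_params.pt")  # illustrative file name
```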
In a possible implementation, the target recording image is preprocessed to obtain a support data set including the target recording image. The support data set is used to amplify the normal sample image data of the monitoring point, yielding normal sample image data of different scales. The preprocessing includes, but is not limited to, optical deformation, geometric deformation and color transformation.
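As an illustration, a torchvision-based recipe could look as follows; the specific transforms and their parameters are assumptions standing in for the optical, geometric and color deformations named above:

```python
import torchvision.transforms as T

augment = T.Compose([
    T.RandomAffine(degrees=5, translate=(0.02, 0.02), scale=(0.95, 1.05)),  # geometric deformation
    T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),            # color transformation
    T.GaussianBlur(kernel_size=3),                                          # optical deformation
])

def build_support_set(recording_image, n_views: int = 8) -> list:
    """Amplify one recorded normal image of a monitoring point into a support data set."""
    return [recording_image] + [augment(recording_image) for _ in range(n_views)]
```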
As a possible implementation manner of the present application, as shown in fig. 5, in the method for detecting running and leaking provided in the embodiment of the present application, the step of inputting the target inspection image and the target recording image into a pre-trained neural network model to obtain an image feature change value of the target inspection image relative to the target recording image includes:
c1: and respectively carrying out feature extraction on the target inspection image and the support data set to obtain target inspection feature images corresponding to the target inspection image and recording point feature images with different scales corresponding to the support data set.
C2: and carrying out image fusion on the recording point characteristic images with different scales to obtain a target recording point characteristic image. Specific algorithms for image fusion are referred to in the prior art and are not described in detail herein.
And C3: and calculating the mahalanobis distance between the target inspection feature image and the target recording point feature image. The mahalanobis distance (Mahalanobis distance) represents the covariance distance of the data.
And C4: and determining an image characteristic change value of the target patrol image relative to the target record image based on the mahalanobis distance. In this embodiment, the mahalanobis distance value is an image feature variation value.
In some embodiments, the feature extraction network extracts high-level semantic features of the recording point image and of the target inspection image respectively. To detect running and leaking at different scales, the outputs of the spatial transformation modules in the feature extraction network are fused to form the respective feature images. The Mahalanobis distance between the target inspection feature image and the target recording point feature image is then calculated: the change value corresponding to each coordinate in the feature image gives a change score map for the feature image, and up-sampling this score map yields the final change score map of the inspection image compared with the normal recording point image.
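One way to realise steps C3 and C4, assuming a per-position Gaussian fit over the support-set features (the embodiment does not specify how the covariance is estimated), is the following sketch:

```python
import numpy as np
from scipy.ndimage import zoom

def change_score_map(support_feats: np.ndarray, inspect_feat: np.ndarray,
                     out_h: int, out_w: int) -> np.ndarray:
    """support_feats: (N, C, H, W) fused support-set feature maps;
    inspect_feat: (C, H, W) inspection feature map.
    Returns an (out_h, out_w) change score map."""
    n, c, h, w = support_feats.shape
    scores = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            x = support_feats[:, :, i, j]                     # (N, C) normal features at this position
            mu = x.mean(axis=0)
            cov = np.cov(x, rowvar=False) + 0.01 * np.eye(c)  # regularised covariance
            d = inspect_feat[:, i, j] - mu
            scores[i, j] = float(np.sqrt(d @ np.linalg.inv(cov) @ d))  # Mahalanobis distance
    # Up-sample the score map to the resolution of the inspection image.
    return zoom(scores, (out_h / h, out_w / w), order=1)
```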
S103: and detecting the running and leaking of the target monitoring point based on the image characteristic change value.
As a possible implementation manner of the present application, fig. 6 shows the specific implementation flow of performing running and leaking detection on the target monitoring point based on the image feature change value in the embodiment of the present application, described in detail below:
d1: and if the image characteristic change value is greater than or equal to a preset change threshold value, determining that the target monitoring point position has running and leaking.
D2: and if the image characteristic change value is smaller than the preset change threshold value, determining that the target monitoring point position does not have running and leaking.
In one possible implementation, the running and leaking position is located and output based on the pixel points whose image feature change values are greater than or equal to the preset change threshold.
In the embodiment of the present application, the corresponding leakage position can be obtained simply from the pixel points whose image feature change values are greater than or equal to the preset change threshold. If the inspection robot needs additional monitoring points, no large-scale data collection or model iteration is required: effective running and leaking detection at a newly added monitoring point is completed simply by recording the recording image of that monitoring point according to the robot's normal flow and then inspecting according to the normal flow.
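A sketch of this threshold-and-localise step, where the threshold value and the connected-component grouping are assumptions:

```python
import numpy as np
from scipy.ndimage import label

def detect_and_localise(score_map: np.ndarray, change_threshold: float):
    """Return (leak_present, bounding boxes of leak regions in pixel coordinates)."""
    mask = score_map >= change_threshold
    if not mask.any():
        return False, []  # no running and leaking at this monitoring point
    regions, n_regions = label(mask)  # group leak pixels into connected regions
    boxes = []
    for r in range(1, n_regions + 1):
        ys, xs = np.nonzero(regions == r)
        boxes.append((int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())))
    return True, boxes
```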
Taking an application scenario as an example: after the backbone feature extraction network and its parameters from model training are saved, when the inspection robot inspects a monitoring point, the recording point image saved when the monitoring point was recorded in advance is augmented with illumination, deformation, color and other data transformations to form a support set, expanding the normal sample forms of the monitoring point. The support set and the inspection image are then respectively input into the backbone feature extraction network to obtain feature maps of different scales, and the output maps of the spatial transformation networks undergo feature fusion to form the feature images. Finally the Mahalanobis distance between the support-set feature image and the inspection-image feature image is calculated; this distance value is the change value of the inspection image compared with the normal recording point image. Whether running and leaking exists at the monitoring point can be determined from the change-value image, and if leakage exists, the precise position of the leakage can be output, so no second manual inspection of the specific leakage position is required.
From the above, in the embodiments of the present application, a target inspection image of a monitoring point and a target recording image of the monitoring point are acquired, and both are input into a pre-trained neural network model to obtain an image feature change value of the target inspection image relative to the target recording image. The neural network model is trained based on a normal sample image dataset and used for calculating image feature change values; the normal sample image dataset comprises normal sample images, free of liquid or gas leakage, of each monitoring point, i.e. the recording images the inspection robot records under normal conditions. Running and leaking detection is then performed on the target monitoring point based on the image feature change value. This scheme adopts an unsupervised network: model training needs only normal sample images, with no need to shoot or manufacture negative sample images on site. The trained neural network model yields the change of the inspection image relative to the recording image, and running and leaking detection is performed based on that change, so detection in a new scene requires no iteration of model parameters, and the continuous investment of manual on-site data collection as new monitoring points are added is avoided. The generalization capability of the deep learning model is thereby effectively improved, new scenes can be coped with conveniently and quickly, and the efficiency and accuracy of the inspection robot in detecting running and leaking are improved.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Corresponding to the method for detecting running and leaking described in the above embodiments, fig. 7 shows a block diagram of the device for detecting running and leaking provided in the embodiment of the present application. For convenience of explanation, only the portions related to the embodiments of the present application are shown.
Referring to fig. 7, the device for detecting running and leaking includes: an image acquisition unit 71, a feature change detection unit 72 and a running and leaking detection unit 73, wherein:
the image acquisition unit 71 is configured to acquire a target inspection image of a monitoring point and a target recording image of the monitoring point, where the target inspection image is an image shot when the inspection robot currently inspects the monitoring point, and the target recording image is a pre-recorded normal image of the monitoring point;
the feature change detection unit 72 is configured to input the target inspection image and the target recording image into a pre-trained neural network model to obtain an image feature change value of the target inspection image relative to the target recording image, where the neural network model is a network model trained based on a normal sample image dataset and used for calculating image feature change values, and the normal sample image dataset comprises normal sample images, free of liquid or gas leakage, of each monitoring point;
and the running and leaking detection unit 73 is configured to perform running and leaking detection on the target monitoring point based on the image feature change value.
As a possible embodiment of the present application, the above device for detecting running and leaking further includes:
the module construction unit is used for constructing a neural network model, wherein the neural network model comprises a feature extraction network and an asymmetric comparison twin network, the feature extraction network is used for extracting image features, and the asymmetric comparison twin network is used for outputting image feature change values after comparing the image features;
the sample data acquisition unit is used for acquiring a normal sample image data set and forming normal sample images of the same monitoring site in the normal sample image data set at different moments into a normal sample pair;
the model training unit is used for sequentially inputting the normal sample images in the normal sample pair to the feature extraction network, and extracting the image features of the normal sample images by using the feature extraction network; inputting the extracted image characteristics of the normal sample image into the asymmetric comparison twin network to obtain the loss of the normal sample pair; and based on the loss, iteratively updating the model parameters of the neural network model until the loss meets a preset threshold value, and obtaining the neural network model after training.
As a possible implementation manner of the present application, the asymmetric comparison twin network includes a first encoding module, a second encoding module and a projection module, where the first encoding module and the second encoding module share parameters; the model training unit includes:
the encoding module is used for inputting the extracted image characteristics of the normal sample image to the first encoding module and the second encoding module at the same time to obtain a first encoding result and a second encoding result;
the transformation processing module is used for transforming the first coding result and the second coding result by utilizing the projection module respectively;
the matching module is used for matching the second coding result with the transformed first coding result, and for matching the first coding result with the transformed second coding result;
and the loss calculation module is used for calculating the loss of the normal sample pair according to the matching result.
As a possible implementation manner of the present application, the loss calculation module is specifically configured to:
matching the second coding result with the transformed first coding result according to the following formula:

$D(P_1, E_2) = -\frac{P_1}{\left\|P_1\right\|_2} \cdot \frac{E_2}{\left\|E_2\right\|_2}$

where $P_1$ represents the result output by the projection module after transforming the first coding result, and $E_2$ represents the second coding result;

matching the first coding result with the transformed second coding result according to the following formula:

$D(P_2, E_1) = -\frac{P_2}{\left\|P_2\right\|_2} \cdot \frac{E_1}{\left\|E_1\right\|_2}$

where $P_2$ represents the result output by the projection module after transforming the second coding result, and $E_1$ represents the first coding result;

and calculating the Loss of the normal sample pair according to the following formula:

$\mathrm{Loss} = \frac{1}{2} D(P_1, E_2) + \frac{1}{2} D(P_2, E_1)$
as one possible embodiment of the present application, the above-described running-out and drip detecting unit 73 includes:
the first detection determining module is used for determining that the target monitoring point position has running and leaking if the image characteristic change value is larger than or equal to a preset change threshold value;
and the second detection and determination module is used for determining that the target monitoring point position does not have running and leaking if the image characteristic change value is smaller than the preset change threshold value.
As a possible implementation manner of the application, the image feature change value specifically includes an image feature change value of each pixel point in the target inspection image; the above running and leaking detection unit 73 further includes:
and the positioning output module is used for positioning and outputting the leakage position based on the pixel points of which the image characteristic change values are larger than or equal to the preset change threshold.
As a possible embodiment of the present application, the above device for detecting running and leaking further includes:
a support set acquisition unit, configured to pre-process the target recording image to obtain a support data set including the target recording image, where the support data set is used to amplify normal sample image data of the monitoring point;
the feature change detection unit includes:
the feature extraction module is used for respectively extracting features of the target inspection image and the support data set to obtain target inspection feature images corresponding to the target inspection image and recording point feature images with different scales corresponding to the support data set;
the image fusion module is used for performing image fusion on the recording point feature images of different scales to obtain a target recording point feature image;
the distance calculation module is used for calculating the Mahalanobis distance between the target inspection feature image and the target recording point feature image;
and the change value determination module is used for determining an image feature change value of the target inspection image relative to the target recording image based on the Mahalanobis distance.
From the above, in the embodiments of the present application, a target inspection image of a monitoring point and a target recording image of the monitoring point are acquired, and both are input into a pre-trained neural network model to obtain an image feature change value of the target inspection image relative to the target recording image. The neural network model is trained based on a normal sample image dataset and used for calculating image feature change values; the normal sample image dataset comprises normal sample images, free of liquid or gas leakage, of each monitoring point, i.e. the recording images the inspection robot records under normal conditions. Running and leaking detection is then performed on the target monitoring point based on the image feature change value. This scheme adopts an unsupervised network: model training needs only normal sample images, with no need to shoot or manufacture negative sample images on site. The trained neural network model yields the change of the inspection image relative to the recording image, and running and leaking detection is performed based on that change, so detection in a new scene requires no iteration of model parameters, and the continuous investment of manual on-site data collection as new monitoring points are added is avoided. The generalization capability of the deep learning model is thereby effectively improved, new scenes can be coped with conveniently and quickly, and the efficiency and accuracy of the inspection robot in detecting running and leaking are improved.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein again.
The embodiments of the present application also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the methods for detecting running and leaking shown in fig. 1 to 6.
The embodiments of the present application also provide an intelligent device, including a memory, a processor and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of any of the methods for detecting running and leaking shown in fig. 1 to 6.
The embodiments of the present application also provide a computer program product which, when run on an intelligent device, causes the intelligent device to perform the steps of any of the methods for detecting running and leaking shown in fig. 1 to 6.
Fig. 8 is a schematic diagram of the smart device provided in an embodiment of the present application. As shown in fig. 8, the smart device 8 of this embodiment includes: a processor 80, a memory 81 and a computer program 82 stored in the memory 81 and executable on the processor 80. The processor 80, when executing the computer program 82, implements the steps of the embodiments of the method for detecting running and leaking described above, such as steps S101 to S103 shown in fig. 1. Alternatively, the processor 80, when executing the computer program 82, performs the functions of the modules/units of the apparatus embodiments described above, such as the functions of the units 71 to 73 shown in fig. 7.
By way of example, the computer program 82 may be partitioned into one or more modules/units, which are stored in the memory 81 and executed by the processor 80 to complete the present application. The one or more modules/units may be a series of computer-readable instruction segments capable of performing specific functions, the instruction segments being used to describe the execution process of the computer program 82 in the smart device 8.
The smart device 8 may be an inspection robot. The smart device 8 may include, but is not limited to, a processor 80 and a memory 81. It will be appreciated by those skilled in the art that Fig. 8 is merely an example of the smart device 8 and does not constitute a limitation; the smart device 8 may include more or fewer components than shown, combine certain components, or use different components. For example, the smart device 8 may also include input-output devices, network access devices, buses, and the like.
The processor 80 may be a central processing unit (Central Processing Unit, CPU), but may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 81 may be an internal storage unit of the smart device 8, such as a hard disk or memory of the smart device 8. The memory 81 may also be an external storage device of the smart device 8, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital card (Secure Digital, SD), or a flash card (Flash Card) equipped on the smart device 8. Further, the memory 81 may include both an internal storage unit and an external storage device of the smart device 8. The memory 81 is used to store the computer program as well as other programs and data required by the smart device. The memory 81 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on this understanding, the present application implements all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program, which may be stored in a computer-readable storage medium; when executed by a processor, the computer program may implement the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to an apparatus/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals and telecommunications signals.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not detailed or described in a particular embodiment, reference may be made to the related descriptions of other embodiments.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and replacements do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included within the protection scope of the present application.

Claims (10)

1. A method for detecting running and leaking, characterized in that the method is applied to an inspection robot and comprises the following steps:
acquiring a target inspection image of a monitoring point and a target recording image of the monitoring point, wherein the target inspection image is an image captured by the inspection robot during its current inspection of the monitoring point, and the target recording image is a pre-recorded normal image of the monitoring point;
inputting the target inspection image and the target recording image into a pre-trained neural network model to obtain an image feature change value of the target inspection image relative to the target recording image, wherein the neural network model is a network model trained based on a normal sample image dataset and used for calculating the image feature change value, and the normal sample image dataset comprises normal sample images of each monitoring point without liquid or gas leakage;
and performing running and leaking detection on the target monitoring point based on the image feature change value.
2. The method of claim 1, wherein, before the step of inputting the target inspection image and the target recording image into the pre-trained neural network model, the method comprises:
constructing the neural network model, wherein the neural network model comprises a feature extraction network and an asymmetric comparison twin network, the feature extraction network being used to extract image features, and the asymmetric comparison twin network being used to compare the image features and output an image feature change value;
acquiring the normal sample image dataset, and forming normal sample pairs from normal sample images of the same monitoring point taken at different times in the normal sample image dataset;
sequentially inputting the normal sample images in each normal sample pair to the feature extraction network, and extracting the image features of the normal sample images by using the feature extraction network;
inputting the extracted image features of the normal sample images into the asymmetric comparison twin network to obtain the loss of the normal sample pair;
and iteratively updating the model parameters of the neural network model based on the loss until the loss meets a preset threshold, thereby obtaining the trained neural network model.
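As a non-limiting sketch, the training procedure of claim 2 might be realized in PyTorch as follows; the dataset interface, optimizer settings, and stopping criterion are illustrative assumptions rather than details disclosed by the claim.

```python
import itertools
import torch
from torch.utils.data import DataLoader

def train(backbone, twin, pair_dataset, epochs=10, lr=1e-3, loss_target=0.01):
    """Train on normal sample pairs only, as described in claim 2.

    `pair_dataset` is assumed to yield (img_t1, img_t2) tensors: two normal
    images of the same monitoring point taken at different times.
    `backbone` is the feature extraction network; `twin` is the asymmetric
    comparison twin network returning the loss of a pair.
    """
    loader = DataLoader(pair_dataset, batch_size=32, shuffle=True)
    params = itertools.chain(backbone.parameters(), twin.parameters())
    optim = torch.optim.SGD(params, lr=lr, momentum=0.9)
    loss = torch.tensor(float("inf"))
    for _ in range(epochs):
        for img_t1, img_t2 in loader:
            feat1, feat2 = backbone(img_t1), backbone(img_t2)  # image features
            loss = twin(feat1, feat2)       # loss of the normal sample pair
            optim.zero_grad()
            loss.backward()
            optim.step()
        if loss.item() <= loss_target:      # preset loss threshold met
            break
    return backbone, twin
```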
3. The method of claim 2, wherein the asymmetric comparison twin network comprises a first encoding module, a second encoding module, and a projection module, the first encoding module sharing parameters with the second encoding module;
and the step of inputting the extracted image features of the normal sample images into the asymmetric comparison twin network to obtain the loss of the normal sample pair comprises:
inputting the extracted image features of the normal sample images to the first encoding module and the second encoding module simultaneously, to obtain a first encoding result and a second encoding result;
transforming the first encoding result and the second encoding result respectively by using the projection module;
matching the second encoding result with the transformed first encoding result, and matching the first encoding result with the transformed second encoding result;
and calculating the loss of the normal sample pair according to the matching result.
4. The method of claim 3, wherein the step of calculating the loss of the normal sample pair according to the matching result comprises:
matching the second encoding result with the transformed first encoding result according to the following formula:

D(P_1, E_2) = -\frac{P_1}{\left\| P_1 \right\|_2} \cdot \frac{E_2}{\left\| E_2 \right\|_2}

wherein P_1 represents the result output by the projection module after transforming the first encoding result, and E_2 represents the second encoding result;

matching the first encoding result with the transformed second encoding result according to the following formula:

D(P_2, E_1) = -\frac{P_2}{\left\| P_2 \right\|_2} \cdot \frac{E_1}{\left\| E_1 \right\|_2}

wherein P_2 represents the result output by the projection module after transforming the second encoding result, and E_1 represents the first encoding result;

and calculating the Loss of the normal sample pair according to the following formula:

Loss = \frac{1}{2} D(P_1, E_2) + \frac{1}{2} D(P_2, E_1)
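These expressions have the same form as a stop-gradient negative-cosine-similarity (SimSiam-style) objective. The following PyTorch fragment is one way to compute them; the stop-gradient placement on the encoding results is our assumption, not something the claim states.

```python
import torch
import torch.nn.functional as F

def neg_cosine(p, e):
    """D(P, E) = -(P / ||P||_2) . (E / ||E||_2), averaged over the batch."""
    e = e.detach()              # assumed stop-gradient on the encoding result
    p = F.normalize(p, dim=1)   # P / ||P||_2
    e = F.normalize(e, dim=1)   # E / ||E||_2
    return -(p * e).sum(dim=1).mean()

def pair_loss(e1, e2, projector):
    """Loss = 1/2 * D(P1, E2) + 1/2 * D(P2, E1) for one normal sample pair."""
    p1, p2 = projector(e1), projector(e2)  # transformed encoding results
    return 0.5 * neg_cosine(p1, e2) + 0.5 * neg_cosine(p2, e1)
```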
5. The method of claim 1, wherein the step of performing running and leaking detection on the target monitoring point based on the image feature change value comprises:
if the image feature change value is greater than or equal to a preset change threshold, determining that running and leaking occurs at the target monitoring point;
and if the image feature change value is smaller than the preset change threshold, determining that no running and leaking occurs at the target monitoring point.
6. The method of claim 5, wherein the image feature change value comprises an image feature change value of each pixel in the target inspection image;
and after the step of determining that running and leaking occurs at the target monitoring point if the image feature change value is greater than or equal to the preset change threshold, the method further comprises:
locating and outputting the running and leaking position based on the pixels whose image feature change values are greater than or equal to the preset change threshold.
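As a minimal sketch of the per-pixel decision in claims 5 and 6 (the bounding-box output format is an illustrative choice of ours):

```python
import numpy as np

def locate_leak(change_map: np.ndarray, threshold: float):
    """Detect and localize running/leaking from a per-pixel change map."""
    mask = change_map >= threshold      # pixels at or above the threshold
    if not mask.any():
        return False, None              # no running and leaking detected
    ys, xs = np.nonzero(mask)
    # Output the anomalous region as a bounding box (x0, y0, x1, y1).
    return True, (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
```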
7. The method according to any one of claims 1 to 6, wherein the running and leaking detection method further comprises:
preprocessing the target recording image to obtain a support dataset comprising the target recording image, wherein the support dataset is used to augment the normal sample image data of the monitoring point;
and the step of inputting the target inspection image and the target recording image into the pre-trained neural network model to obtain the image feature change value of the target inspection image relative to the target recording image comprises:
extracting features from the target inspection image and the support dataset respectively, to obtain a target inspection feature image corresponding to the target inspection image and recording-point feature images of different scales corresponding to the support dataset;
performing image fusion on the recording-point feature images of different scales to obtain a target recording-point feature image;
calculating the Mahalanobis distance between the target inspection feature image and the target recording-point feature image;
and determining the image feature change value of the target inspection image relative to the target recording image based on the Mahalanobis distance.
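One plausible reading of the Mahalanobis-distance step, in the spirit of PaDiM-style anomaly detection, is a per-position distance between the inspection features and the distribution of the support-set features. The feature shapes and the covariance regularization below are assumptions for illustration only.

```python
import numpy as np

def mahalanobis_change_map(inspect_feats: np.ndarray,
                           support_feats: np.ndarray) -> np.ndarray:
    """Per-position Mahalanobis distance.

    inspect_feats: (C, H, W) target inspection feature image.
    support_feats: (N, C, H, W) fused recording-point features from the
    support dataset, with N >= 2 so a covariance can be estimated.
    """
    n, c, h, w = support_feats.shape
    change = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            ref = support_feats[:, :, i, j]          # (N, C) normal features
            mean = ref.mean(axis=0)
            # Regularization keeps the covariance invertible.
            cov = np.cov(ref, rowvar=False) + 1e-3 * np.eye(c)
            diff = inspect_feats[:, i, j] - mean
            change[i, j] = np.sqrt(diff @ np.linalg.inv(cov) @ diff)
    return change
```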
8. A device for detecting running and leaking, characterized in that the device is applied to an inspection robot and comprises:
an image acquisition unit, configured to acquire a target inspection image of a monitoring point and a target recording image of the monitoring point, wherein the target inspection image is an image captured by the inspection robot during its current inspection of the monitoring point, and the target recording image is a pre-recorded normal image of the monitoring point;
a feature change detection unit, configured to input the target inspection image and the target recording image into a pre-trained neural network model to obtain an image feature change value of the target inspection image relative to the target recording image, wherein the neural network model is a network model trained based on a normal sample image dataset and used for calculating the image feature change value, and the normal sample image dataset comprises normal sample images of each monitoring point without liquid or gas leakage;
and a running and leaking detection unit, configured to perform running and leaking detection on the target monitoring point based on the image feature change value.
9. A smart device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the running and leaking detection method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the running and leaking detection method according to any one of claims 1 to 7.
CN202310313495.2A 2023-03-22 2023-03-22 Method and device for detecting running and leaking, intelligent equipment and storage medium Pending CN116342538A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310313495.2A CN116342538A (en) 2023-03-22 2023-03-22 Method and device for detecting running and leaking, intelligent equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310313495.2A CN116342538A (en) 2023-03-22 2023-03-22 Method and device for detecting running and leaking, intelligent equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116342538A true CN116342538A (en) 2023-06-27

Family

ID=86875809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310313495.2A Pending CN116342538A (en) 2023-03-22 2023-03-22 Method and device for detecting running and leaking, intelligent equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116342538A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116841301A (en) * 2023-09-01 2023-10-03 杭州义益钛迪信息技术有限公司 Inspection robot inspection model training method, device, equipment and medium
CN116841301B (en) * 2023-09-01 2024-01-09 杭州义益钛迪信息技术有限公司 Inspection robot inspection model training method, device, equipment and medium
CN117710374A (en) * 2024-02-05 2024-03-15 中海油田服务股份有限公司 Method, device, equipment and medium for detecting running and leaking based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination