CN114119594A - Oil leakage detection method and device based on deep learning - Google Patents

Oil leakage detection method and device based on deep learning

Info

Publication number
CN114119594A
Authority
CN
China
Prior art keywords
training sample
target
frame
initial image
module
Prior art date
Legal status
Pending
Application number
CN202111472519.6A
Other languages
Chinese (zh)
Inventor
田际
孙伟生
冯庭有
蔡承伟
单婕
江志宏
黄欢
颜景博
袁方雅
杨连凯
张龙
Current Assignee
Huaneng Dongguan Gas Turbine Thermal Power Co Ltd
Original Assignee
Huaneng Dongguan Gas Turbine Thermal Power Co Ltd
Priority date
Filing date
Publication date
Application filed by Huaneng Dongguan Gas Turbine Thermal Power Co Ltd
Priority to CN202111472519.6A
Publication of CN114119594A
Legal status: Pending

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00 Image analysis
            • G06T 7/0002 Inspection of images, e.g. flaw detection
              • G06T 7/0004 Industrial image inspection
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/10 Image acquisition modality
              • G06T 2207/10016 Video; Image sequence
            • G06T 2207/20 Special algorithmic details
              • G06T 2207/20081 Training; Learning
              • G06T 2207/20084 Artificial neural networks [ANN]
            • G06T 2207/30 Subject of image; Context of image processing
              • G06T 2207/30108 Industrial image inspection
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F 18/00 Pattern recognition
            • G06F 18/20 Analysing
              • G06F 18/24 Classification techniques
                • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
                  • G06F 18/2415 Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
              • G06F 18/25 Fusion techniques
                • G06F 18/253 Fusion techniques of extracted features
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00 Computing arrangements based on biological models
            • G06N 3/02 Neural networks
              • G06N 3/04 Architecture, e.g. interconnection topology
                • G06N 3/045 Combinations of networks
              • G06N 3/08 Learning methods

Abstract

The invention provides an oil leakage detection method and device based on deep learning. The method comprises the following steps: acquiring an initial image training sample; performing data augmentation on the initial image training sample to obtain a target training sample; sampling the target training sample through an initial high-resolution model to obtain information features of at least four feature channels; fusing the information features of each feature channel to obtain at least four enhanced multi-scale features; performing classification loss calculation and positioning loss calculation on the enhanced multi-scale features to obtain the classification probability and the positioning loss of the multi-scale features, respectively; obtaining a loss function of the multi-scale features according to the classification probability, the positioning loss and a preset total loss calculation function; performing model training according to the loss function to obtain a target detection model; and inputting an image to be detected into the target detection model to obtain its prediction output result. The invention can accurately detect abnormal oil leakage states.

Description

Oil leakage detection method and device based on deep learning
Technical Field
The invention relates to the technical field of oil leakage detection, in particular to an oil leakage detection method and device based on deep learning.
Background
The existing oil leakage and water leakage detection is mainly divided into two types, namely a detection method based on a sensor and a detection method based on computer vision.
The essence of the machine-vision-based leakage detection method is to extract hand-crafted feature operators (color and texture) from an image. Such traditional feature-operator extraction is strongly affected by environmental factors such as changes in illumination intensity, slight shadow occlusion, and background variation. In addition, its performance is often poor, leading to missed detections and false detections.
Technical methods based on multi-channel feature fusion extract and fuse a variety of features to strengthen the features used for classification. However, the information involved, including the HOG channel (which acquires edge information from gradients) and LUV color-space features, is strongly influenced by the environment, and the acquisition of edge information is directly affected by changes in the shape of drips and leaks, which severely tests the robustness of the algorithm. In addition, the SVM classifier has high space and time complexity when trained on large sample sets and lacks sensitivity to the data; its performance often depends on the choice of kernel function and hyper-parameters, and its detection result contains no positioning information.
In summary, the existing oil leakage and water leakage detection method has the problem of low detection accuracy.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provide an oil leakage detection method and detection device based on deep learning that can accurately detect abnormal oil leakage states.
One embodiment of the invention provides an oil leakage detection method based on deep learning, which comprises the following steps:
acquiring an initial image training sample; the initial image training sample comprises an oil penetration position framed by an image target frame;
performing data augmentation on the initial image training sample to obtain a target training sample;
sampling the target training sample through an initial high-resolution model to obtain information characteristics of at least four characteristic channels;
fusing the information characteristics of each characteristic channel to obtain at least four enhanced multi-scale characteristics;
carrying out classification loss calculation and positioning loss calculation on the enhanced multi-scale features to respectively obtain the classification probability and the positioning loss of the multi-scale features;
obtaining a loss function of the multi-scale features according to the classification probability, the positioning loss and a preset total loss calculation function;
performing model training according to the loss function to obtain a target detection model;
and inputting the image to be detected into the target detection model to obtain a prediction output result of the target detection model.
Compared with the prior art, the oil leakage detection method based on deep learning obtains the enhanced multi-scale features by fusing the information features of the feature channels, performs loss function calculation on the enhanced multi-scale features to train the initial high-resolution model so as to obtain the target detection model, and uses the target detection model to predict the oil leakage condition in the image to be detected, so that the accuracy of the oil leakage abnormal state detection result can be improved.
Further, the process of performing data augmentation on the initial image training sample to obtain a target training sample includes:
acquiring random points in the range of the image target frame;
generating a target circle by taking the random point as a circle center and taking the shortest distance from the random point to the frame of the image target frame as a radius; the target circle is used for shielding the oil penetration position;
and determining the initial image training sample before the target circle is generated and the initial image training sample after the target circle is generated as a target training sample.
And generating the target circle for shielding the oil penetration position according to the random points, and determining an initial image training sample after the target circle is generated as a target training sample, so that the number of the target training samples is increased, and the variability of the target training samples is improved.
Further, the process of acquiring a random point within the range of the image target frame includes:
acquiring the abscissa of the random point, wherein the process is as follows:

$$x_r \sim \mathcal{U}\!\left(x_c - \frac{w}{2},\; x_c + \frac{w}{2}\right)$$

wherein $x_r$ is the abscissa of the random point, sampled uniformly within the frame, $x_c$ is the abscissa of the center point of the image target frame, and $w$ is the width of the image target frame;

acquiring the vertical coordinate of the random point, wherein the process is as follows:

$$y_r \sim \mathcal{U}\!\left(y_c - \frac{h}{2},\; y_c + \frac{h}{2}\right)$$

wherein $y_r$ is the ordinate of the random point, $y_c$ is the ordinate of the center point of the image target frame, and $h$ is the height of the image target frame.
And obtaining the random points through the formula, and improving the variability of the random points.
Further, the process of performing data augmentation on the initial image training sample to obtain a target training sample includes:
respectively rotating, horizontally flipping and adjusting the contrast of the initial image training sample to obtain a rotated initial image training sample, a horizontally flipped initial image training sample and a contrast-adjusted initial image training sample;
and determining the initial image training sample, the rotated initial image training sample, the horizontally flipped initial image training sample and the contrast-adjusted initial image training sample as target training samples.
The initial image training samples respectively subjected to rotation, horizontal flipping and contrast adjustment are also determined as target training samples, so that the number of target training samples is increased.
Further, the process of fusing the information features of each feature channel to obtain at least four enhanced multi-scale features is as follows:
$$F'_a = F_a + \sum_{\substack{b=1 \\ b \neq a}}^{n} F_b$$

wherein $F'_a$ is the enhanced multi-scale feature, $F_a$ is the information feature of the $a$-th feature channel, $F_b$ is the information feature of the $b$-th feature channel, and $n$ is the total number of feature channels.
And fusing each characteristic channel through the formula to obtain the enhanced multi-scale characteristic.
Further, the performing classification loss calculation and positioning loss calculation on the enhanced multi-scale features to obtain classification probability and positioning loss of the multi-scale features respectively includes:
obtaining a classification prediction, wherein the classification prediction comprises two categories, and the process of performing classification loss calculation on the enhanced multi-scale features is as follows:
$$L_{conf}(x, c) = -\sum_{i \in Pos}^{N} x_{ij}^{p} \log\!\left(\hat{c}_i^{p}\right) - \sum_{i \in Neg} \log\!\left(\hat{c}_i^{0}\right), \qquad \hat{c}_i^{p} = \frac{\exp\!\left(c_i^{p}\right)}{\sum_{p} \exp\!\left(c_i^{p}\right)}$$

wherein $L_{conf}$ is the classification loss; $x$ is expressed as the matching coefficient of the predicted label and the real label frame; $c$ is the prediction category; $c_i^{p}$ is the class value of the $i$-th class prediction output; $\hat{c}_i^{p}$ is the probability of the $i$-th class prediction; $\hat{c}_i^{0}$ is the probability of a wrong prediction being the background; $Pos$ is the number of positive samples; $Neg$ is the number of negative samples; $N$ is the total number of samples, whose value is the sum of $Pos$ and $Neg$; and $x_{ij}^{p}$ indicates the matching coefficient between the $i$-th prediction label and the $j$-th real label frame of category $p$: if they are consistent, the prediction is correct and the value is 1, otherwise the prediction is wrong and the value is 0;
and performing positioning loss calculation on the enhanced multi-scale features, wherein the process is as follows:
$$L_{loc}(x, l, g) = \sum_{i \in Pos}^{N} \sum_{m \in \{cx,\, cy,\, w,\, h\}} x_{ij}^{k} \, \mathrm{smooth}_{L1}\!\left(l_i^{m} - \hat{g}_j^{m}\right)$$

$$\hat{g}_j^{cx} = \frac{g_j^{cx} - d_i^{cx}}{d_i^{w}}, \qquad \hat{g}_j^{cy} = \frac{g_j^{cy} - d_i^{cy}}{d_i^{h}}, \qquad \hat{g}_j^{w} = \log\frac{g_j^{w}}{d_i^{w}}, \qquad \hat{g}_j^{h} = \log\frac{g_j^{h}}{d_i^{h}}$$

wherein $x$ represents the matching coefficients, $l$ is expressed as the set of predicted rectangular frames, and $g$ is expressed as the set of real rectangular frames, each frame containing four elements $(cx, cy, w, h)$, where $cx$ is the abscissa of the center point of the rectangular frame, $cy$ is the ordinate of the center point of the rectangular frame, $w$ is the width of the rectangular frame, and $h$ is the height of the rectangular frame; $x_{ij}^{k}$ indicates the matching value between the $i$-th prediction rectangular frame and the $j$-th real rectangular frame of class $k$: if their classes are consistent, the value is 1, otherwise it is 0; $d_i^{w}$ is the width of the prediction rectangular frame, $d_i^{cx}$ is the abscissa of the prediction rectangular frame, and $g_j^{cx}$ is the abscissa of the real frame; $\hat{g}_j^{cx}$ is the expected predicted value of the abscissa of the real rectangular frame (and $\hat{g}_j^{cy}$, $\hat{g}_j^{w}$, $\hat{g}_j^{h}$ likewise for the ordinate, width and height); $d_i^{cy}$ is the ordinate of the center point of the prediction rectangular frame, $d_i^{h}$ is the height of the prediction rectangular frame, $g_j^{cy}$ is the ordinate of the center point of the real rectangular frame, $g_j^{w}$ is the width of the real rectangular frame, and $g_j^{h}$ is the height of the real rectangular frame.
Further, the process of obtaining the loss function of the multi-scale feature according to the classification probability, the positioning loss and a preset total loss calculation function is as follows:
$$L(x, c, l, g) = \frac{1}{N}\left(L_{conf}(x, c) + \alpha\, L_{loc}(x, l, g)\right)$$

wherein $l$ is the predicted coordinate frame; $g$ is the real coordinate frame; $\alpha$ is a preset value; $x_{ij}^{p} \in \{0, 1\}$ indicates the matching value between the prediction label and the real label frame; $x_{ij}^{k} \in \{0, 1\}$ indicates the matching value between the predicted frame and the real frame; and $N$ is the total number of samples as defined above.
One embodiment of the present invention further discloses an oil leakage detection device based on deep learning, including: the system comprises an initial training sample acquisition module, a target training sample acquisition module, a sampling module, a fusion module, a feature calculation module, a loss function calculation module, a training module and an execution module;
the initial training sample acquisition module is used for acquiring an initial image training sample; the initial image training sample comprises an oil penetration position framed by an image target frame;
the target training sample acquisition module is used for carrying out data augmentation on the initial image training sample to obtain a target training sample;
the sampling module is used for sampling the target training sample through an initial high-resolution model to obtain information characteristics of at least four characteristic channels;
the fusion module is used for fusing the information characteristics of the characteristic channels to obtain at least four enhanced multi-scale characteristics;
the characteristic calculation module is used for carrying out classification loss calculation and positioning loss calculation on the enhanced multi-scale characteristics to respectively obtain the classification probability and the positioning loss of the multi-scale characteristics;
the loss function calculation module is used for obtaining a loss function of the multi-scale features according to the classification probability, the positioning loss and a preset total loss calculation function;
the training module is used for carrying out model training according to the loss function to obtain a target detection model;
and the execution module is used for inputting the image to be detected into the target detection model to obtain a prediction output result of the target detection model.
Compared with the prior art, the oil leakage detection device based on deep learning obtains the enhanced multi-scale features by fusing the information features of the feature channels, performs loss function calculation on the enhanced multi-scale features to train the initial high-resolution model so as to obtain the target detection model, and uses the target detection model to predict the oil leakage condition in the image to be detected, so that the accuracy of the detection result of the abnormal oil leakage state can be improved.
Further, the target training sample acquisition module comprises: the device comprises a random point acquisition module, a target circle acquisition module and a target training sample determination module;
the random point acquisition module is used for acquiring random points in the range of the image target frame;
the target circle obtaining module is used for generating a target circle by taking the random point as a circle center and taking the shortest distance from the random point to the frame of the image target frame as a radius;
the target training sample determining module is used for determining the initial image training sample before the target circle is generated and the initial image training sample after the target circle is generated as a target training sample.
And generating the target circle for shielding the oil penetration position according to the random points, and determining an initial image training sample after the target circle is generated as a target training sample, so that the number of the target training samples is increased, and the variability of the target training samples is improved.
Further, the target training sample acquisition module comprises: the device comprises an adjusting module and a target training sample determining module;
the adjusting module is used for respectively rotating, horizontally flipping and adjusting the contrast of the initial image training sample to obtain a rotated initial image training sample, a horizontally flipped initial image training sample and a contrast-adjusted initial image training sample;
the target training sample determining module is used for determining the initial image training sample, the rotated initial image training sample, the horizontally flipped initial image training sample and the contrast-adjusted initial image training sample as the target training sample.
The initial image training samples respectively subjected to rotation, horizontal flipping and contrast adjustment are also determined as target training samples, so that the number of target training samples is increased.
Compared with the prior art, the oil leakage detection method and the oil leakage detection device based on deep learning have the following advantages:
1. compared with the traditional method of manually extracting the features, the method of using the convolutional neural network has stronger capability of extracting the features.
2. By the aid of an augmentation strategy of randomly shielding images, diversity and variation of image targets are increased, and robustness of the algorithm is effectively improved.
3. By combining a high-resolution feature network, target detection using context information and multi-scale features focuses on the oil leakage area, effectively improving detection precision.
In order that the invention may be more clearly understood, specific embodiments thereof will be described hereinafter with reference to the accompanying drawings.
Drawings
Fig. 1 is a flowchart of an oil leakage detection method based on deep learning according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a high resolution model of an oil leakage detection method based on deep learning according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of stages of an oil leakage detection method based on deep learning according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a target circle of the oil leakage detection method based on deep learning according to an embodiment of the invention.
Fig. 5 is a flow chart of the enhanced multi-scale features of the deep learning-based oil leak detection method according to an embodiment of the present invention.
Fig. 6 is a block diagram of an oil leakage detection device based on deep learning according to an embodiment of the present invention.
1. an initial training sample acquisition module; 2. a target training sample acquisition module; 3. a sampling module; 4. a fusion module; 5. a feature calculation module; 6. a loss function calculation module; 7. a training module; 8. an execution module.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that the embodiments described are only some embodiments of the present application, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without any creative effort belong to the protection scope of the embodiments in the present application.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. In the description of the present application, it is to be understood that the terms "first," "second," "third," and the like are used solely to distinguish one element from another and are not necessarily used to describe a particular order or sequence, nor are they to be construed as indicating or implying relative importance. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination".
Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Please refer to fig. 1, which is a flowchart illustrating a method for detecting oil leakage based on deep learning according to an embodiment of the present invention, including:
s1, obtaining an initial image training sample; the initial image training sample includes oil penetration locations framed by image target frames.
The initial image training sample can be obtained through shooting of a camera, and the oil penetration position in the initial image training sample is framed and selected by the image target frame through data annotation. Preferably, the image target frame is a rectangular frame, and the size and the width-to-height ratio of the image target frame are determined when the data is labeled. For example, a screenshot may be performed through the power plant monitoring device video data to obtain the initial image training sample.
And S2, performing data augmentation on the initial image training sample to obtain a target training sample.
The target training sample comprises the initial image training sample together with edited versions of it; editing changes the image characteristics of the initial image training sample, so that each edited image can serve as a new training sample.
And S3, sampling the target training sample through the initial high-resolution model to obtain the information characteristics of at least four characteristic channels.
In step S3, the initial high-resolution model retains feature sizes of [56, 28, 14, 7]. Specifically, in this embodiment, the number of feature channels of the initial high-resolution model is four, and the initial high-resolution model is shown in fig. 2, where horizontal arrows represent a 3×3 convolution followed by a BN layer; the downward arrow represents downsampling, implemented as an average pooling operation with a kernel size of 2; the upward arrow represents upsampling, implemented by bilinear interpolation; the upsampled and downsampled features are brought to the same channel number through a 1×1 convolution layer; and where multiple arrows in fig. 2 point to the same feature, a channel-wise concatenation operation is indicated.
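For concreteness, the exchange operations just described can be sketched in PyTorch roughly as follows; the module names, channel widths and activation placement are illustrative assumptions, not the patent's exact implementation:

```python
# Minimal sketch of the fig. 2 operations; names and widths are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HorizontalConv(nn.Module):
    """Horizontal arrow: 3x3 convolution followed by a BN layer."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3,
                              padding=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        return F.relu(self.bn(self.conv(x)))

def down(x, proj):
    """Downward arrow: average pooling with kernel size 2, then a 1x1
    convolution (proj) to unify the channel number."""
    return proj(F.avg_pool2d(x, kernel_size=2))

def up(x, size, proj):
    """Upward arrow: bilinear upsampling to the target size, then a 1x1
    convolution (proj) to unify the channel number."""
    return proj(F.interpolate(x, size=size, mode="bilinear",
                              align_corners=False))
```

Where multiple arrows point at one feature, the corresponding operation would be a channel-wise concatenation, e.g. `torch.cat([a, b], dim=1)`.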
And S4, fusing the information characteristics of the characteristic channels to obtain at least four enhanced multi-scale characteristics.
The number of multi-scale features corresponds to the number of feature channels.
And S5, performing classification loss calculation and positioning loss calculation on the enhanced multi-scale features to respectively obtain the classification probability and the positioning loss of the multi-scale features.
Wherein the classification loss calculation and the localization loss calculation are performed by two convolution layers of 3 × 3, respectively.
And S6, calculating a function according to the classification probability, the positioning loss and a preset total loss to obtain a loss function of the multi-scale features.
And S7, performing model training according to the loss function to obtain a target detection model.
Preferably, after the target detection model is obtained, the target detection model is tested with image test samples to verify the accuracy of its prediction results. The image test samples are acquired at the same time and in the same manner as the initial image training samples, and the ratio of the number of initial image training samples to the number of image test samples is 9:1.
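A minimal sketch of such a 9:1 split, assuming the samples are image files on disk (the glob pattern is hypothetical):

```python
import glob
import random

def split_dataset(image_paths, train_ratio=0.9, seed=42):
    """Shuffle once with a fixed seed, then split 9:1 into train/test."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    cut = int(len(paths) * train_ratio)
    return paths[:cut], paths[cut:]

# Hypothetical usage on frames captured from plant monitoring video:
train_files, test_files = split_dataset(glob.glob("frames/*.jpg"))
```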
The model training refers to training a convolutional neural network; preferably, the convolutional neural network is a 2D convolutional neural network.
And S8, inputting the image to be detected into the target detection model to obtain a prediction output result of the target detection model.
Referring to fig. 3, in this embodiment the oil leakage detection method includes a training phase and a prediction phase: steps S2-S7 belong to the training phase and step S8 to the prediction phase. In the figure, the data set of the training phase represents the initial image training sample, the preprocessing corresponds to the execution content of step S2, and the model training corresponds to the execution content of steps S3-S7; the model of the prediction phase represents the target detection model, the prediction output result corresponds to step S8, and the decision report is a prediction result report generated to better feed the prediction result back to the user.
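A sketch of the prediction phase (step S8), assuming a trained detector that returns boxes with confidence scores; the input size, normalization values, and the model's output format are assumptions, not specified by the patent:

```python
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),          # assumed input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),  # ImageNet statistics
])

def predict(model, image_path, score_threshold=0.5):
    """Run the target detection model on one frame; keep confident boxes."""
    model.eval()
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        boxes, scores = model(x)  # assumed output: (N, 4) boxes, (N,) scores
    keep = scores > score_threshold
    return boxes[keep], scores[keep]
```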
Compared with the prior art, the oil leakage detection method provided by the invention has the advantages that the enhanced multi-scale features are obtained by fusing the information features of the feature channels, the loss function calculation is carried out on the enhanced multi-scale features to train the initial high-resolution model, so that the target detection model is obtained, the target detection model is used for predicting the oil leakage condition in the image to be detected, and the accuracy of the oil leakage abnormal state detection result can be improved.
Referring to fig. 4, in a possible embodiment, the step S2 of performing data augmentation on the initial image training samples to obtain target training samples includes:
and S21, acquiring random points in the range of the image target frame.
S22, generating a target circle by taking the random point as a circle center and the shortest distance from the random point to the frame of the image target frame as a radius; the target circle is used for shielding the oil penetration position.
Since oil is usually black or dark in color, the target circle is filled with the mean pixel value of the image to better achieve the effect of shielding the oil penetration position.
It should be noted that, since the radius of the target circle is the shortest distance from the random point to the border of the image target frame, the target circle cannot completely cover the image target frame, that is, the oil penetration position in the image target frame is not completely blocked by the target circle.
And S23, determining the initial image training sample before the target circle is generated and the initial image training sample after the target circle is generated as a target training sample.
And generating the target circle for shielding the oil penetration position according to the random points, and determining an initial image training sample after the target circle is generated as a target training sample, so that the number of the target training samples is increased, and the variability of the target training samples is improved.
Preferably, the step S21 of obtaining a random point in the range of the image target frame includes:
acquiring the abscissa of the random point, wherein the process is as follows:

$$x_r \sim \mathcal{U}\!\left(x_c - \frac{w}{2},\; x_c + \frac{w}{2}\right)$$

wherein $x_r$ is the abscissa of the random point, sampled uniformly within the frame, $x_c$ is the abscissa of the center point of the image target frame, and $w$ is the width of the image target frame;

acquiring the vertical coordinate of the random point, wherein the process is as follows:

$$y_r \sim \mathcal{U}\!\left(y_c - \frac{h}{2},\; y_c + \frac{h}{2}\right)$$

wherein $y_r$ is the ordinate of the random point, $y_c$ is the ordinate of the center point of the image target frame, and $h$ is the height of the image target frame.
And obtaining the random points through the formula, and improving the variability of the random points.
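Put together, steps S21-S22 can be sketched as follows, under one reading of the formulas above (uniform sampling inside the frame); the helper name is hypothetical and OpenCV is used only to draw the filled circle:

```python
import random
import cv2  # OpenCV, used here only to draw the filled circle

def occlude_with_circle(image, box):
    """S21-S22: pick a random point inside the target frame and draw a
    circle filled with the image's mean pixel value.

    image: HxWx3 uint8 array; box: (cx, cy, w, h) of the image target frame.
    The radius is the shortest distance from the random point to the frame
    border, so the circle can never cover the whole target frame.
    """
    cx, cy, w, h = box
    x = random.uniform(cx - w / 2, cx + w / 2)
    y = random.uniform(cy - h / 2, cy + h / 2)
    r = min(x - (cx - w / 2), (cx + w / 2) - x,
            y - (cy - h / 2), (cy + h / 2) - y)
    mean_color = image.reshape(-1, image.shape[-1]).mean(axis=0)
    out = image.copy()
    cv2.circle(out, (int(round(x)), int(round(y))), max(int(r), 1),
               tuple(float(c) for c in mean_color), thickness=-1)
    return out
```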
In a possible embodiment, the process of performing data augmentation on the initial image training sample to obtain a target training sample includes:
respectively rotating, horizontally flipping and adjusting the contrast of the initial image training sample to obtain a rotated initial image training sample, a horizontally flipped initial image training sample and a contrast-adjusted initial image training sample;
and determining the initial image training sample, the rotated initial image training sample, the horizontally flipped initial image training sample and the contrast-adjusted initial image training sample as target training samples.
Preferably, the contrast adjustment includes two processing modes of contrast enhancement and contrast reduction.
In this embodiment, the initial image training samples after being respectively rotated, horizontally flipped, and adjusted in contrast are also determined as target training samples, so that the number of the target training samples is increased.
Between the two embodiments above, 5 preprocessing methods are applied to the initial image training sample, so that the number of target training samples obtained is 6 times the number of initial image training samples.
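A minimal Pillow sketch of the rotation, flip and contrast transformations; the rotation angle and contrast factors are illustrative choices, and the corresponding transformation of the box annotations is omitted here:

```python
from PIL import Image, ImageEnhance

def augment_views(img):
    """Produce rotated, horizontally flipped, and contrast-adjusted views
    (both contrast enhancement and reduction) of one training image."""
    return [
        img.rotate(15, expand=True),              # rotation (angle assumed)
        img.transpose(Image.FLIP_LEFT_RIGHT),     # horizontal flip
        ImageEnhance.Contrast(img).enhance(1.5),  # contrast enhancement
        ImageEnhance.Contrast(img).enhance(0.7),  # contrast reduction
    ]
```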
In a possible embodiment, the process of fusing the information features of the feature channels to obtain at least four enhanced multi-scale features is as follows:
$$F'_a = F_a + \sum_{\substack{b=1 \\ b \neq a}}^{n} F_b$$

wherein $F'_a$ is the enhanced multi-scale feature, $F_a$ is the information feature of the $a$-th feature channel, $F_b$ is the information feature of the $b$-th feature channel, and $n$ is the total number of feature channels.
In this embodiment, each feature channel is fused by the above formula, so as to obtain an enhanced multi-scale feature. Referring to fig. 5, fig. 5 illustrates an enhanced multi-scale feature obtained by taking the F2 feature channel as an example.
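Assuming the branches have already been projected to a common channel count (the 1×1 convolutions described for fig. 2), the fusion formula can be sketched as:

```python
import torch
import torch.nn.functional as F

def enhance_features(feats):
    """Compute F'_a for every branch: the sum of all branches' information
    features, with each F_b resized to F_a's spatial resolution first.
    Assumes every feature map already has the same channel count."""
    enhanced = []
    for fa in feats:
        acc = torch.zeros_like(fa)
        for fb in feats:
            if fb.shape[-2:] != fa.shape[-2:]:
                fb = F.interpolate(fb, size=fa.shape[-2:], mode="bilinear",
                                   align_corners=False)
            acc = acc + fb
        enhanced.append(acc)
    return enhanced

# e.g. feats = [torch.randn(1, 64, s, s) for s in (56, 28, 14, 7)]
```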
In a possible embodiment, the performing classification loss calculation and positioning loss calculation on the enhanced multi-scale features to obtain a classification probability and a positioning loss of the multi-scale features respectively includes:
obtaining a classification prediction, wherein the classification prediction comprises two categories, and the process of performing classification loss calculation on the enhanced multi-scale features is as follows:
$$L_{conf}(x, c) = -\sum_{i \in Pos}^{N} x_{ij}^{p} \log\!\left(\hat{c}_i^{p}\right) - \sum_{i \in Neg} \log\!\left(\hat{c}_i^{0}\right), \qquad \hat{c}_i^{p} = \frac{\exp\!\left(c_i^{p}\right)}{\sum_{p} \exp\!\left(c_i^{p}\right)}$$

wherein $L_{conf}$ is the classification loss; $x$ is expressed as the matching coefficient of the predicted label and the real label frame; $c$ is the prediction category; $c_i^{p}$ is the class value of the $i$-th class prediction output; $\hat{c}_i^{p}$ is the probability of the $i$-th class prediction; $\hat{c}_i^{0}$ is the probability of a wrong prediction being the background; $Pos$ is the number of positive samples; $Neg$ is the number of negative samples; $N$ is the total number of samples, whose value is the sum of $Pos$ and $Neg$; and $x_{ij}^{p}$ indicates the matching coefficient between the $i$-th prediction label and the $j$-th real label frame of category $p$: if they are consistent, the prediction is correct and the value is 1, otherwise the prediction is wrong and the value is 0;
and performing positioning loss calculation on the enhanced multi-scale features, wherein the process is as follows:
$$L_{loc}(x, l, g) = \sum_{i \in Pos}^{N} \sum_{m \in \{cx,\, cy,\, w,\, h\}} x_{ij}^{k} \, \mathrm{smooth}_{L1}\!\left(l_i^{m} - \hat{g}_j^{m}\right)$$

$$\hat{g}_j^{cx} = \frac{g_j^{cx} - d_i^{cx}}{d_i^{w}}, \qquad \hat{g}_j^{cy} = \frac{g_j^{cy} - d_i^{cy}}{d_i^{h}}, \qquad \hat{g}_j^{w} = \log\frac{g_j^{w}}{d_i^{w}}, \qquad \hat{g}_j^{h} = \log\frac{g_j^{h}}{d_i^{h}}$$

wherein $x$ represents the matching coefficients, $l$ is expressed as the set of predicted rectangular frames, and $g$ is expressed as the set of real rectangular frames, each frame containing four elements $(cx, cy, w, h)$, where $cx$ is the abscissa of the center point of the rectangular frame, $cy$ is the ordinate of the center point of the rectangular frame, $w$ is the width of the rectangular frame, and $h$ is the height of the rectangular frame; $x_{ij}^{k}$ indicates the matching value between the $i$-th prediction rectangular frame and the $j$-th real rectangular frame of class $k$: if their classes are consistent, the value is 1, otherwise it is 0; $d_i^{w}$ is the width of the prediction rectangular frame, $d_i^{cx}$ is the abscissa of the prediction rectangular frame, and $g_j^{cx}$ is the abscissa of the real frame; $\hat{g}_j^{cx}$ is the expected predicted value of the abscissa of the real rectangular frame (and $\hat{g}_j^{cy}$, $\hat{g}_j^{w}$, $\hat{g}_j^{h}$ likewise for the ordinate, width and height); $d_i^{cy}$ is the ordinate of the center point of the prediction rectangular frame, $d_i^{h}$ is the height of the prediction rectangular frame, $g_j^{cy}$ is the ordinate of the center point of the real rectangular frame, $g_j^{w}$ is the width of the real rectangular frame, and $g_j^{h}$ is the height of the real rectangular frame.
In this embodiment, the classification prediction includes two classes, which are a background class and an oil-liquid class.
The selection ratio of positive to negative samples is 1:3. The positive samples and negative samples are pre-acquired samples: positive samples are samples whose class is predicted correctly, and negative samples are samples whose class is predicted incorrectly.
The prediction label is a prediction category of the enhanced multi-scale feature, and the real label frame is a real category corresponding to the enhanced multi-scale feature.
The prediction box is a rectangular box for positioning and predicting the enhanced multi-scale features; the real frame refers to an image target frame of which the oil penetration position is framed in the enhanced multi-scale features.
Preferably, the loss calculation between the prediction frame and the real frame adopts the L1 loss, regressing the frame parameters of the prediction frame against those of the real frame.
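The definitions above correspond to the multibox loss used in SSD-style detectors; a condensed PyTorch sketch under that reading, assuming matching between priors and ground truth has already produced per-prior class targets and encoded offsets (names are illustrative):

```python
import torch
import torch.nn.functional as F

def encode_boxes(gt, priors):
    """The hat-g encoding above: offsets of ground-truth boxes
    (cx, cy, w, h) relative to matched prediction priors d."""
    g_cx = (gt[:, 0] - priors[:, 0]) / priors[:, 2]
    g_cy = (gt[:, 1] - priors[:, 1]) / priors[:, 3]
    g_w = torch.log(gt[:, 2] / priors[:, 2])
    g_h = torch.log(gt[:, 3] / priors[:, 3])
    return torch.stack([g_cx, g_cy, g_w, g_h], dim=1)

def multibox_loss(conf_logits, loc_preds, conf_targets, loc_targets,
                  alpha=1.0):
    """Classification loss with 1:3 hard negative mining plus smooth-L1
    positioning loss, combined as (L_conf + alpha * L_loc) / N; N is taken
    here as the number of positives, as in standard SSD."""
    pos = conf_targets > 0                      # class 0 is background
    num_pos = int(pos.sum())

    # positioning loss over positive priors only
    loc_loss = F.smooth_l1_loss(loc_preds[pos], loc_targets[pos],
                                reduction="sum")

    # cross-entropy per prior; keep all positives, then the hardest
    # negatives at a 1:3 positive-to-negative ratio
    ce = F.cross_entropy(conf_logits, conf_targets, reduction="none")
    neg_ce = ce.clone()
    neg_ce[pos] = 0.0
    num_neg = min(3 * num_pos, int((~pos).sum()))
    hard_neg = neg_ce.topk(num_neg).indices
    conf_loss = ce[pos].sum() + ce[hard_neg].sum()

    return (conf_loss + alpha * loc_loss) / max(num_pos, 1)
```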
In a possible embodiment, the process of obtaining the loss function of the multi-scale feature according to the classification probability, the positioning loss and the preset total loss calculation function is as follows:
$$L(x, c, l, g) = \frac{1}{N}\left(L_{conf}(x, c) + \alpha\, L_{loc}(x, l, g)\right)$$

wherein $l$ is the predicted coordinate frame; $g$ is the real coordinate frame; $\alpha$ is a preset value; $x_{ij}^{p} \in \{0, 1\}$ indicates the matching value between the prediction label and the real label frame; $x_{ij}^{k} \in \{0, 1\}$ indicates the matching value between the predicted frame and the real frame; and $N$ is the total number of samples as defined above.
Referring to fig. 6, an embodiment of the present invention further discloses an oil leakage detection device based on deep learning, including: an initial training sample acquisition module 1, a target training sample acquisition module 2, a sampling module 3, a fusion module 4, a feature calculation module 5, a loss function calculation module 6, a training module 7 and an execution module 8;
the initial training sample acquisition module 1 is used for acquiring an initial image training sample; the initial image training sample comprises an oil penetration position framed by an image target frame;
the target training sample acquisition module 2 is used for performing data augmentation on the initial image training sample to obtain a target training sample;
the sampling module 3 is used for sampling the target training sample through an initial high-resolution model to obtain information characteristics of at least four characteristic channels;
the fusion module 4 is configured to fuse the information features of the feature channels to obtain at least four enhanced multi-scale features;
the feature calculation module 5 is configured to perform classification loss calculation and positioning loss calculation on the enhanced multi-scale features to obtain a classification probability and a positioning loss of the multi-scale features respectively;
the loss function calculation module 6 is configured to obtain a loss function of the multi-scale feature according to the classification probability, the positioning loss, and a preset total loss calculation function;
the training module 7 is used for performing model training according to the loss function to obtain a target detection model;
the execution module 8 is configured to input the image to be detected into the target detection model, and obtain a prediction output result of the target detection model.
Compared with the prior art, the oil leakage detection device based on deep learning obtains the enhanced multi-scale features by fusing the information features of the feature channels, performs loss function calculation on the enhanced multi-scale features to train the initial high-resolution model so as to obtain the target detection model, and uses the target detection model to predict the oil leakage condition in the image to be detected, so that the accuracy of the detection result of the abnormal oil leakage state can be improved.
In one possible embodiment, the target training sample acquiring module 2 includes: the device comprises a random point acquisition module, a target circle acquisition module and a target training sample determination module;
the random point acquisition module is used for acquiring random points in the range of the image target frame;
the target circle obtaining module is used for generating a target circle by taking the random point as a circle center and taking the shortest distance from the random point to the frame of the image target frame as a radius;
the target training sample determining module is used for determining the initial image training sample before the target circle is generated and the initial image training sample after the target circle is generated as a target training sample.
And generating the target circle for shielding the oil penetration position according to the random points, and determining an initial image training sample after the target circle is generated as a target training sample, so that the number of the target training samples is increased, and the variability of the target training samples is improved.
In one possible embodiment, the target training sample acquiring module 2 includes: the device comprises an adjusting module and a target training sample determining module;
the adjusting module is used for respectively rotating, horizontally flipping and adjusting the contrast of the initial image training sample to obtain a rotated initial image training sample, a horizontally flipped initial image training sample and a contrast-adjusted initial image training sample;
the target training sample determining module is used for determining the initial image training sample, the rotated initial image training sample, the horizontally flipped initial image training sample and the contrast-adjusted initial image training sample as the target training sample.
The initial image training samples respectively subjected to rotation, horizontal flipping and contrast adjustment are also determined as target training samples, so that the number of target training samples is increased.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart block or blocks and/or flowchart block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory, random access memory (RAM), and/or non-volatile memory in a computer-readable medium, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. An oil leakage detection method based on deep learning is characterized by comprising the following steps:
acquiring an initial image training sample; the initial image training sample comprises an oil penetration position framed by an image target frame;
performing data augmentation on the initial image training sample to obtain a target training sample;
sampling the target training sample through an initial high-resolution model to obtain information characteristics of at least four characteristic channels;
fusing the information characteristics of each characteristic channel to obtain at least four enhanced multi-scale characteristics;
carrying out classification loss calculation and positioning loss calculation on the enhanced multi-scale features to respectively obtain the classification probability and the positioning loss of the multi-scale features;
calculating a function according to the classification probability, the positioning loss and a preset total loss to obtain a loss function of the multi-scale feature;
performing model training according to the loss function to obtain a target detection model;
and inputting the image to be detected into the target detection model to obtain a prediction output result of the target detection model.
2. The method for detecting oil leakage based on deep learning of claim 1, wherein the step of performing data augmentation on the initial image training samples to obtain target training samples comprises:
acquiring random points in the range of the image target frame;
generating a target circle by taking the random point as a circle center and taking the shortest distance from the random point to the frame of the image target frame as a radius; the target circle is used for shielding the oil penetration position;
and determining the initial image training sample before the target circle is generated and the initial image training sample after the target circle is generated as a target training sample.
3. The method for detecting oil leakage based on deep learning of claim 1, wherein the process of acquiring random points within the range of the image target frame comprises:
acquiring the abscissa of the random point, wherein the process is as follows:
$$x_r \sim \mathcal{U}\!\left(x_c - \frac{w}{2},\; x_c + \frac{w}{2}\right)$$
wherein $x_r$ is the abscissa of the random point, $x_c$ is the abscissa of the center point of the image target frame, and $w$ is the width of the image target frame;
acquiring the vertical coordinate of the random point, wherein the process is as follows:
$$y_r \sim \mathcal{U}\!\left(y_c - \frac{h}{2},\; y_c + \frac{h}{2}\right)$$
wherein $y_r$ is the ordinate of the random point, $y_c$ is the ordinate of the center point of the image target frame, and $h$ is the height of the image target frame.
4. The method for detecting oil leakage based on deep learning of claim 1, wherein the step of performing data augmentation on the initial image training samples to obtain target training samples comprises:
respectively rotating, horizontally flipping and adjusting the contrast of the initial image training sample to obtain a rotated initial image training sample, a horizontally flipped initial image training sample and a contrast-adjusted initial image training sample;
and determining the initial image training sample, the rotated initial image training sample, the horizontally flipped initial image training sample and the contrast-adjusted initial image training sample as target training samples.
5. The method for detecting oil leakage based on deep learning of claim 1, wherein the process of fusing the information features of each feature channel to obtain at least four enhanced multi-scale features is as follows:
$$F'_a = F_a + \sum_{\substack{b=1 \\ b \neq a}}^{n} F_b$$
wherein $F'_a$ is the enhanced multi-scale feature, $F_a$ is the information feature of the $a$-th feature channel, $F_b$ is the information feature of the $b$-th feature channel, and $n$ is the total number of feature channels.
6. The method for detecting oil leakage based on deep learning of claim 1, wherein performing classification loss calculation and positioning loss calculation on the enhanced multi-scale features to obtain the classification probability and the positioning loss of the multi-scale features respectively comprises:
obtaining a classification prediction, wherein the classification prediction comprises two categories, and the process of performing classification loss calculation on the enhanced multi-scale features is as follows:

L_conf(x, c) = -Σ_{i∈Pos} x_ij^p · log(ĉ_i^p) - Σ_{i∈Neg} log(ĉ_i^0),  where ĉ_i^p = exp(c_i^p) / Σ_p exp(c_i^p)

wherein L_conf(x, c) is the classification loss; x is the matching coefficient of the prediction labels and the real label frames; c is the prediction category; c_i^p is the class value of the p-th class prediction output; ĉ_i^p is the probability of the p-th class prediction; ĉ_i^0 is the probability of wrongly predicting the background; N_pos is the number of positive samples; N_neg is the number of negative samples; N is the total number of samples, namely the sum of N_pos and N_neg; x_ij^p indicates the matching coefficient of the i-th prediction label and the j-th real label frame for category p: if the categories are consistent, the prediction is correct and the value is 1, otherwise the prediction is wrong and the value is 0;
and performing positioning loss calculation on the enhanced multi-scale features, wherein the process is as follows:

L_loc(b, l, g) = Σ_{i∈Pos} Σ_{m∈{cx, cy, w, h}} b_ij^k · smooth_L1(l_i^m - ĝ_j^m)

ĝ_j^cx = (g_j^cx - d_i^cx) / d_i^w
ĝ_j^cy = (g_j^cy - d_i^cy) / d_i^h
ĝ_j^w = log(g_j^w / d_i^w)
ĝ_j^h = log(g_j^h / d_i^h)

wherein b represents a matching coefficient, l represents the set of predicted rectangular frames, and g represents the set of real rectangular frames; each frame comprises four elements (cx, cy, w, h), wherein cx is the abscissa of the central point of the rectangular frame, cy is the ordinate of the central point of the rectangular frame, w is the width of the rectangular frame, and h is the height of the rectangular frame; b_ij^k indicates the matching value of the i-th prediction rectangular frame and the j-th real rectangular frame of class k: if the categories are consistent, the value is 1, otherwise the value is 0; d_i^w is the width of the prediction rectangular frame; d_i^cx is the abscissa of the central point of the prediction rectangular frame; g_j^cx is the abscissa of the central point of the real frame; ĝ_j^cx is the expected predicted value of the abscissa of the real rectangular frame (and ĝ_j^cy, ĝ_j^w, ĝ_j^h likewise for the ordinate, width and height); d_i^cy is the ordinate of the central point of the prediction rectangular frame; d_i^h is the height of the prediction rectangular frame; g_j^cy is the ordinate of the central point of the real rectangular frame; g_j^w is the width of the real rectangular frame; g_j^h is the height of the real rectangular frame.
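A hedged PyTorch sketch of the two losses as reconstructed above (an SSD-style formulation, consistent with the non-patent citations); the tensor layouts and the encoding of targets against the prediction/prior boxes are assumptions:

    import torch
    import torch.nn.functional as F

    def confidence_loss(cls_logits, labels, pos_mask, neg_mask):
        """Softmax classification loss over positives plus background negatives."""
        log_probs = F.log_softmax(cls_logits, dim=-1)                  # log(c_hat)
        pos = -log_probs[pos_mask].gather(1, labels[pos_mask].unsqueeze(1)).sum()
        neg = -log_probs[neg_mask][:, 0].sum()                         # class 0 = background
        return pos + neg

    def localization_loss(loc_preds, gt_boxes, priors, pos_mask):
        """Smooth-L1 loss between predicted offsets l and encoded targets g_hat."""
        g_cx = (gt_boxes[..., 0] - priors[..., 0]) / priors[..., 2]
        g_cy = (gt_boxes[..., 1] - priors[..., 1]) / priors[..., 3]
        g_w = torch.log(gt_boxes[..., 2] / priors[..., 2])
        g_h = torch.log(gt_boxes[..., 3] / priors[..., 3])
        targets = torch.stack([g_cx, g_cy, g_w, g_h], dim=-1)
        return F.smooth_l1_loss(loc_preds[pos_mask], targets[pos_mask], reduction="sum")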
7. The method for detecting oil leakage based on deep learning of claim 6, wherein the process of obtaining the loss function of the multi-scale feature according to the classification probability, the positioning loss and the preset total loss calculation function is as follows:
L(x, b, c, l, g) = (1/N) · (L_conf(x, c) + α · L_loc(b, l, g))

wherein l is the predicted coordinate frame; g is the real coordinate frame; α is a preset value; x indicates the matching value of the prediction frame and the real label frame and takes a value in {0, 1}; b indicates the matching value of the predicted frame and the real frame and takes a value in {0, 1}.
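Combining the two terms as in claim 7 is then a one-liner; alpha is the preset weighting value:

    def total_loss(conf_loss, loc_loss, num_matched, alpha=1.0):
        """L = (L_conf + alpha * L_loc) / N, guarding against N = 0."""
        return (conf_loss + alpha * loc_loss) / max(num_matched, 1)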
8. An oil leakage detection device based on deep learning, characterized by comprising: an initial training sample acquisition module, a target training sample acquisition module, a sampling module, a fusion module, a feature calculation module, a loss function calculation module, a training module and an execution module;
the initial training sample acquisition module is used for acquiring an initial image training sample; the initial image training sample comprises an oil penetration position framed by an image target frame;
the target training sample acquisition module is used for carrying out data augmentation on the initial image training sample to obtain a target training sample;
the sampling module is used for sampling the target training sample through an initial high-resolution model to obtain information characteristics of at least four characteristic channels;
the fusion module is used for fusing the information features of each feature channel to obtain at least four enhanced multi-scale features;
the feature calculation module is used for performing classification loss calculation and positioning loss calculation on the enhanced multi-scale features to obtain the classification probability and the positioning loss of the multi-scale features, respectively;
the loss function calculation module is used for obtaining a loss function of the multi-scale features according to the classification probability, the positioning loss and a preset total loss calculation function;
the training module is used for carrying out model training according to the loss function to obtain a target detection model;
and the execution module is used for inputting the image to be detected into the target detection model to obtain a prediction output result of the target detection model.
9. The deep learning-based oil leakage detection device according to claim 8, wherein the target training sample acquisition module comprises: the device comprises a random point acquisition module, a target circle acquisition module and a target training sample determination module;
the random point acquisition module is used for acquiring random points in the range of the image target frame;
the target circle obtaining module is used for generating a target circle by taking the random point as a circle center and taking the shortest distance from the random point to the frame of the image target frame as a radius;
the target training sample determining module is used for determining the initial image training sample before the target circle is generated and the initial image training sample after the target circle is generated as a target training sample.
10. The deep learning-based oil leakage detection device according to claim 8, wherein the target training sample acquisition module comprises: the device comprises an adjusting module and a target training sample determining module;
the adjusting module is used for rotating, horizontally flipping and contrast-adjusting the initial image training sample respectively to obtain a rotated initial image training sample, a horizontally flipped initial image training sample and a contrast-adjusted initial image training sample;
the target training sample determining module is used for determining the initial image training sample, the rotated initial image training sample, the horizontally flipped initial image training sample and the contrast-adjusted initial image training sample as the target training samples.
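A minimal sketch of how the claimed modules might be composed into a device object; every name here is illustrative rather than taken from the patent:

    class OilLeakDetector:
        """Wire the modules of claim 8 into one pipeline (hypothetical interface)."""
        def __init__(self, sampler, fuser, head):
            self.sampling_module = sampler   # initial high-resolution model
            self.fusion_module = fuser       # multi-scale feature fusion
            self.execution_module = head     # classification + localization head

        def predict(self, image):
            feats = self.sampling_module(image)
            fused = self.fusion_module(feats)
            return self.execution_module(fused)   # prediction output result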
CN202111472519.6A 2021-12-06 2021-12-06 Oil leakage detection method and device based on deep learning Pending CN114119594A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111472519.6A CN114119594A (en) 2021-12-06 2021-12-06 Oil leakage detection method and device based on deep learning

Publications (1)

Publication Number Publication Date
CN114119594A true CN114119594A (en) 2022-03-01

Family

ID=80367021

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111472519.6A Pending CN114119594A (en) 2021-12-06 2021-12-06 Oil leakage detection method and device based on deep learning

Country Status (1)

Country Link
CN (1) CN114119594A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705601A (en) * 2019-09-09 2020-01-17 安徽继远软件有限公司 Transformer substation equipment oil leakage image identification method based on single-stage target detection
CN111950517A (en) * 2020-08-26 2020-11-17 司马大大(北京)智能系统有限公司 Target detection method, model training method, electronic device and storage medium
CN112036463A (en) * 2020-08-26 2020-12-04 国家电网有限公司 Power equipment defect detection and identification method based on deep learning
CN113610087A (en) * 2021-06-30 2021-11-05 国网福建省电力有限公司信息通信分公司 Image small target detection method based on prior super-resolution and storage medium

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
KE SUN et al.: "Deep High-Resolution Representation Learning for Human Pose Estimation", arXiv:1902.09212v1 *
LI Fujin et al.: "Pedestrian Detection Algorithm Based on Feature Pyramid SSD", Journal of North China University of Science and Technology (Natural Science Edition) *
LI Hui et al.: "Abnormal Target Detection Method for Transmission Lines", Computer and Modernization *
WU Jianhua et al.: "Deep Learning Based Detection and Recognition of Oil Leakage in Substation Equipment", Guangdong Electric Power *
WANG Song et al.: "Research on and Improvement of the SSD (Single Shot MultiBox Detector) Object Detection Algorithm", Industrial Control Computer *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115457297A (en) * 2022-08-23 2022-12-09 中国航空油料集团有限公司 Method and device for detecting oil leakage of aviation oil depot and aviation oil safety operation and maintenance system
CN115457297B (en) * 2022-08-23 2023-09-26 中国航空油料集团有限公司 Oil leakage detection method and device for aviation oil depot and aviation oil safety operation and maintenance system

Similar Documents

Publication Publication Date Title
CN110689037B (en) Method and system for automatic object annotation using deep networks
CN105868758B (en) method and device for detecting text area in image and electronic equipment
CN109977191B (en) Problem map detection method, device, electronic equipment and medium
CN110826416A (en) Bathroom ceramic surface defect detection method and device based on deep learning
Toh et al. Automated fish counting using image processing
CN109977997B (en) Image target detection and segmentation method based on convolutional neural network rapid robustness
CN111091123A (en) Text region detection method and equipment
CN111310826B (en) Method and device for detecting labeling abnormality of sample set and electronic equipment
CA3136674C (en) Methods and systems for crack detection using a fully convolutional network
CN111311556B (en) Mobile phone defect position identification method and equipment
CN110288612B (en) Nameplate positioning and correcting method and device
CN111126393A (en) Vehicle appearance refitting judgment method and device, computer equipment and storage medium
CN111325717A (en) Mobile phone defect position identification method and equipment
CN116843999B (en) Gas cylinder detection method in fire operation based on deep learning
CN110689134A (en) Method, apparatus, device and storage medium for performing machine learning process
CN108664970A (en) A kind of fast target detection method, electronic equipment, storage medium and system
CN112102141B (en) Watermark detection method, watermark detection device, storage medium and electronic equipment
CN109919149A (en) Object mask method and relevant device based on object detection model
CN112906816A (en) Target detection method and device based on optical differential and two-channel neural network
CN110879972B (en) Face detection method and device
CN114119594A (en) Oil leakage detection method and device based on deep learning
CN113160176B (en) Defect detection method and device
CN113920434A (en) Image reproduction detection method, device and medium based on target
CN116433634A (en) Industrial image anomaly detection method based on domain self-adaption
CN114926675A (en) Method and device for detecting shell stain defect, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220301