CN116434100A - Liquid leakage detection method, device, system and machine-readable storage medium

Info

Publication number: CN116434100A
Authority: CN (China)
Prior art keywords: tensor, target video, liquid leakage, key frame, target
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202111682398.8A
Other languages: Chinese (zh)
Inventors: 谭昆, 田琨, 王顺义, 吴顺成, 丁胜夺, 刘洪太, 王辉, 阎红巧, 樊志强, 杜志虎
Current Assignee (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list): Petrochina Co Ltd; CNPC Research Institute of Safety and Environmental Technology Co Ltd
Original Assignee: Petrochina Co Ltd; CNPC Research Institute of Safety and Environmental Technology Co Ltd
Application filed by Petrochina Co Ltd and CNPC Research Institute of Safety and Environmental Technology Co Ltd
Priority to CN202111682398.8A
Publication of CN116434100A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a liquid leakage detection method, a device, a system and a machine-readable storage medium, wherein the method comprises the following steps: acquiring a difference image according to the acquired target video data; determining an input original tensor according to a first color channel corresponding to a first frame image and a second color channel corresponding to a difference image in target video data; inputting the input original tensor into a detection model to obtain a first tensor; acquiring a first weight related to the channel and a second weight related to the position according to the first tensor and the detection model; correcting the first tensor according to the first weight and the second weight to obtain a corrected target tensor; finally, determining a detection result according to the target tensor; the detection model is built based on historical video data. By the method, the accuracy of the detection result can be improved.

Description

Liquid leakage detection method, device, system and machine-readable storage medium
Technical Field
The present invention relates to the technical field of liquid leakage detection, and more particularly to a liquid leakage detection method, device, system and machine-readable storage medium.
Background
During the storage and transport of liquid substances, liquid may leak abnormally from the transport pipeline or the storage device. Such a leak can have serious consequences: leaking crude oil, for example, easily causes serious accidents such as fires and explosions, pollutes the environment, and endangers personal safety. Therefore, to reduce the probability of accidents, the equipment and pipelines that store and transport liquid must be inspected frequently to identify whether they are leaking.
In the prior art, a generic detection network model is used to identify the liquid area. However, because liquid is transparent and the liquid area changes continuously, the difference between the leaked-liquid area and the background is small, so the recognition effect is poor.
Disclosure of Invention
In view of the above, the present invention provides a liquid leakage detection method, device, system and machine-readable storage medium, so as to improve the accuracy of the detection result.
The liquid leakage detection method of the invention comprises the following steps:
according to the obtained target video data, comparing the difference between any two target video frames in the target video data to obtain a difference image;
Determining an input original tensor according to a first color channel corresponding to a first frame image in the target video data and a second color channel corresponding to the difference image;
inputting the input original tensor into a detection model, and outputting the detection result; the detection model is built according to historical video data, and the detection result comprises position information of liquid leakage areas and confidence degrees corresponding to each liquid leakage area.
Further, the step of comparing differences between any two target video frames in the target video data according to the obtained target video data to obtain a difference image includes:
acquiring target video data to be detected in a target time period;
according to a preset time interval, selecting at least two target video frames from the target video data as a first key frame and a second key frame, wherein the time point corresponding to the first key frame is earlier than the time point corresponding to the second key frame;
acquiring the difference image according to the first key frame and the second key frame;
wherein, when there are two target video frames, the two target video frames are determined as the first key frame and the second key frame;
and when there are more than two target video frames, comparing the differences between any two adjacent target video frames, and determining the two adjacent target video frames with the largest difference as the first key frame and the second key frame.
Further, when there are more than two target video frames, the step of comparing the differences between any two adjacent target video frames and determining the two adjacent target video frames with the largest difference as the first key frame and the second key frame includes:
for any two adjacent first target video frames and second target video frames, calculating a first difference value between a first pixel value of a first pixel point and a second pixel value of a second pixel point; the first pixel point is any pixel point on the first target video frame, and the second pixel point is a pixel point on the second target video frame, which is located at a position corresponding to the first pixel point;
calculating an absolute value of each first difference value;
calculating an average value of the absolute values according to each absolute value, and taking the average value as a difference value between the first target video frame and the second target video frame;
And determining the two adjacent target video frames with the largest difference value as the first key frame and the second key frame according to the difference value between any two adjacent target video frames.
Further, the step of acquiring the difference image according to the first key frame and the second key frame includes:
calculating a second difference between the fourth pixel value of the fourth pixel point and the third pixel value of the third pixel point; the fourth pixel point is any pixel point on the second key frame, and the third pixel point is a pixel point on the first key frame, which is located at a position corresponding to the fourth pixel point;
and determining the absolute value of each second difference value as a fifth pixel value of a pixel point at a corresponding position on the difference image to obtain the difference image.
Further, the step of inputting the input original tensor into a detection model and outputting the detection result comprises the following steps:
in the detection model: inputting the input original tensor into a convolutional neural network in the detection model to obtain a first tensor; wherein the input raw tensor comprises the first color channel and the second color channel;
Acquiring a first weight and a second weight according to the first tensor;
correcting the first tensor according to the first weight and the second weight to obtain a corrected target tensor;
and determining the detection result according to the target tensor.
Further, the step of obtaining the first weight and the second weight according to the first tensor includes:
performing global pooling on the first tensor to obtain a second tensor whose length and width are both 1; wherein the second tensor contains the same number of channels as the first tensor;
inputting the second tensor into an activation function, and outputting the first weight corresponding to each channel in the first tensor;
for each channel in the first tensor, performing product operation on the first tensor and the first weight corresponding to the channel to obtain a third tensor;
global average pooling is carried out on the third tensor, and a fourth tensor with the number of channels being 1 is obtained; inputting the fourth tensor into a convolutional neural network of the detection model to obtain a second weight corresponding to each position on the fourth tensor; wherein the convolutional neural network comprises an activation function.
Further, the step of correcting the first tensor according to the first weight and the second weight to obtain a corrected target tensor includes:
performing point multiplication operation on the third tensor and the second weight corresponding to each position aiming at the second weight corresponding to each position of the third tensor to obtain a fifth tensor;
and taking the obtained fifth tensor as a new first tensor and repeating the step of performing global pooling on the first tensor to obtain a second tensor, until a preset number of iterations is reached, and taking the fifth tensor generated in the last iteration as the corrected target tensor.
Further, the first color channel is a composite channel; the second color channel is the composite channel or the gray scale channel; the composite channels are a red channel, a green channel and a blue channel.
Further, after the step of outputting the detection result, the method further includes the steps of:
screening out a target confidence coefficient with the confidence coefficient larger than a first threshold according to the corresponding confidence coefficient of each liquid leakage region, and taking the liquid leakage region corresponding to the target confidence coefficient as a candidate liquid leakage region;
and calculating the intersection ratio between any two candidate liquid leakage areas, and when the intersection ratio is larger than a second threshold value, merging the candidate liquid leakage areas corresponding to that intersection ratio to obtain a target liquid leakage area.
Further, after the target liquid leakage area is obtained, the method further comprises the steps of:
and sending the position information of the target liquid leakage area to a monitoring terminal for liquid leakage early warning.
The present invention also provides a liquid leakage detecting device including:
the acquisition module is used for comparing the difference between any two target video frames in the target video data according to the acquired target video data to acquire a difference image;
the determining module is used for determining an input original tensor according to a first color channel corresponding to a first frame image and a second color channel corresponding to the difference image in the target video data;
the input/output module is used for inputting the input original tensor into a detection model and outputting the detection result; the detection model is built in a model building module according to historical video data, and the detection result comprises position information of liquid leakage areas and confidence degrees corresponding to each liquid leakage area.
The present invention also provides a machine-readable storage medium having stored thereon machine-readable instructions executable by a processor, which when executed by the processor, implement the liquid leakage detection method of the present invention as described above.
The present invention also provides a liquid leakage detection system comprising:
the video acquisition device is used for acquiring target video data and transmitting the target video data to the liquid leakage detection device;
the liquid leakage detection device is used for receiving the target video data and realizing the liquid leakage detection method;
and the monitoring terminal is used for receiving the position information of the liquid leakage area and carrying out early warning.
The invention also provides an electronic device, comprising: the liquid leakage detection device comprises a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor, when the electronic device runs, the processor and the memory are communicated through the bus, and the machine-readable instructions are executed by the processor to realize the steps of the liquid leakage detection method.
The invention provides a liquid leakage detection method, device, system and machine-readable storage medium. In the liquid leakage detection method, the first key frame and the second key frame with the largest difference are determined from the target video data, and the input original tensor is determined from the first frame image in the target video data and the difference image formed by the difference between the first key frame and the second key frame. Compared with the prior art, in which a tensor is determined for each frame image in the video, this enhances the difference between the liquid area and the background in the input original tensor and thereby strengthens the salient features of the liquid area.
Further, in the method, the input original tensor is input into a convolutional neural network to obtain a first tensor; a first weight corresponding to each channel and a second weight corresponding to each position are obtained from the first tensor; the first tensor is corrected according to the first weight and the second weight to obtain a corrected target tensor; and the detection result is determined according to the target tensor. Compared with the prior art, in which a generic detection network model extracts the liquid area from each frame image directly, obtaining the channel-related first weight and the position-related second weight from the first tensor and the detection model, correcting the first tensor with these weights, and running detection on the corrected target tensor allow the liquid area in the video to be detected accurately, improving the accuracy of the detection result.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 illustrates a flow chart of a liquid leak detection method provided in accordance with an embodiment of the present application;
FIG. 2 is a schematic diagram of each first pixel point on a first target video frame according to an embodiment of the present application;
FIG. 3 is a schematic diagram of each second pixel point on a second target video frame according to an embodiment of the present application;
FIG. 4 is a schematic diagram of each third pixel point on the first keyframe according to an embodiment of the present application;
FIG. 5 is a schematic diagram of fourth pixels on a second keyframe according to an embodiment of the present application;
FIG. 6 is a schematic diagram of each fifth pixel point on a difference image according to an embodiment of the present application;
FIG. 7 shows a schematic structural diagram of a first unit in a detection model according to an embodiment of the present application;
FIG. 8 is a schematic view showing a structure of a liquid leakage detecting device according to an embodiment of the present application;
fig. 9 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In view of the poor recognition effect when detecting a liquid leakage area in the prior art, embodiments of the present application provide a detection method, a device, an electronic apparatus, and a computer-readable storage medium to improve the accuracy of the detection result; these are described below by way of embodiments.
The liquid leakage detection method of the invention comprises the following steps:
acquiring target video data, and comparing differences between any two target video frames in the target video data to acquire a difference image;
Determining an input original tensor according to a first color channel corresponding to a first frame image in the target video data and a second color channel corresponding to the difference image;
inputting the input original tensor into a detection model, and outputting the detection result; the detection model is built according to historical video data, and the detection result comprises position information of liquid leakage areas and confidence degrees corresponding to each liquid leakage area.
According to this liquid leakage detection method, the first key frame and the second key frame with the largest difference are determined from the target video data, and the input original tensor is determined from the first frame image in the target video data and the difference image formed by the difference between the first key frame and the second key frame. The detection model is built from historical video data: a large amount of historical video data supplies many different historical frame images for extensive training, which greatly improves the generalization capability of the model, so that the detection effect is greatly improved when the model is applied to liquid leakage detection.
Embodiment one:
for the convenience of understanding the present embodiment, a liquid leakage detection method disclosed in the embodiment of the present application will be described in detail first. Fig. 1 shows a flow chart of a liquid leakage detection method according to an embodiment of the present application, as shown in fig. 1, including the following steps:
s101: according to the obtained target video data, determining the difference between any two target video frames in the target video data, and taking the two target video frames with the largest difference as a first key frame and a second key frame.
In the embodiment of the present application, the target video data may be video data of an object to be detected recorded in real time by the image pickup apparatus, that is, the target video data is data generated in real time. The object to be detected may be a liquid transporting apparatus or a liquid storing apparatus, and in particular may be an apparatus facility for storing or transporting such as a pipe filled with a liquid, a storage tank, or the like. The liquid may be crude oil, water, etc. The target video frame may be any frame of image in the target video data.
In the embodiment of the present application, since the relative position of the image capturing apparatus and the object to be detected does not change, and the color, form, and so on of the object to be detected itself do not change, the difference to be detected in the present application generally refers to the difference produced by the region of leaked liquid. There is a certain time interval between the first key frame and the second key frame. For example, the first key frame may be an image before the liquid has leaked and the second key frame an image after the liquid has leaked; the difference between them is then the leaked liquid.
S102: determining an input original tensor according to a first color channel corresponding to a first frame image and a second color channel corresponding to a difference image in target video data; wherein the difference image is an image formed by a difference between the first key frame and the second key frame; the first color channel is a composite channel; the second color channel is a composite channel or a gray scale channel; the composite channels are a red channel, a green channel and a blue channel.
In the embodiment of the present application, the first frame image refers to the first frame image in the target video data. In the present application, each frame image in the target video data is in RGB mode, that is, the first frame image is also in RGB mode, so the first frame image includes three first color channels: a red channel, a green channel and a blue channel.
The difference image is an image formed by the difference between a first key frame and a second key frame, wherein the first key frame and the second key frame are both images in the target video data, and therefore, the first key frame and the second key frame also comprise three color channels.
When a difference image is generated according to a first key frame and a second key frame, a first method is to generate a difference image directly through the first key frame and the second key frame which contain three color channels, and the generated difference image contains three second color channels; the second method is that the first key frame and the second key frame are firstly gray-scaled, and then a difference map is generated according to the gray-scaled first key frame and second key frame, and the generated difference map only comprises one gray channel.
When the input original tensor is determined according to the first color channel corresponding to the first frame image and the second color channel corresponding to the difference image in the target video data, specifically, when the difference image contains composite channels (i.e., three second color channels: red channel, green channel and blue channel), the three first color channels and the three second color channels are overlapped (i.e., combined and synthesized) to generate the target image, where the target image includes 6 color channels, and then the target image is converted into the input original tensor, i.e., the input original tensor includes 6 color channels.
Or when the difference image contains one gray scale channel, three first color channels and one gray scale channel are overlapped to generate a target image, wherein the target image contains 4 color channels, and then the target image is converted into an input original tensor. Note that tensors (including input original tensors, etc.) referred to in this application refer to high-dimensional vectors.
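As a non-authoritative illustration of the channel-stacking step above, the following is a minimal NumPy sketch; it assumes 8-bit RGB frames of shape (H, W, 3), and the function name is illustrative rather than taken from the patent.

```python
import numpy as np

def build_input_tensor(first_frame: np.ndarray, diff_image: np.ndarray) -> np.ndarray:
    """Stack the first frame's three colour channels with the difference
    image's channels: (H, W, 3) + (H, W, 3) -> 6 channels, or
    (H, W, 3) + (H, W) grey-scale -> 4 channels."""
    if diff_image.ndim == 2:                      # grey-scale difference image
        diff_image = diff_image[..., np.newaxis]  # (H, W) -> (H, W, 1)
    return np.concatenate([first_frame, diff_image], axis=-1)
```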
S103: inputting the input original tensor into a detection model, and outputting a detection result; wherein, in the detection model: inputting the input original tensor into a convolutional neural network in the detection model to obtain a first tensor;
Acquiring a first weight and a second weight according to the first tensor; the color channels of the input original tensor comprise a first color channel and a second color channel; correcting the first tensor according to the first weight and the second weight to obtain a corrected target tensor; determining a detection result according to the target tensor; the detection result includes positional information of the liquid leakage areas and a confidence level corresponding to each liquid leakage area.
Specifically, the detection result is determined from the target tensor by feeding the obtained target tensor back into the convolutional neural network of the detection model and determining the detection result through classification and regression. In practice, a target video frame in the target video data may produce a difference image because of an incidental event (a person or animal passing through, a weather change, etc.); that is, a difference image does not necessarily indicate liquid leakage. Classification and regression provide this further confirmation, and a detection result concerning liquid leakage is output only when liquid leakage actually occurs.
In the embodiment of the present application, when the first tensor includes N channels, obtaining the first weight corresponding to each channel in the first tensor yields N first weights.
In an embodiment of the present application, the first weight represents the importance of each channel in the first tensor, and the second weight represents the importance of each location of the third tensor relative to other locations.
When the first tensor is corrected according to the first weight and the second weight, the first tensor may be corrected by the first weight, and then the corrected first tensor is corrected again by the second weight, so as to obtain the target tensor.
In this application, the positional information of the liquid leakage area may be coordinates of liquid leakage, and the confidence is used to represent the probability that the detected liquid leakage area is a true leakage area.
In the method, the difference image formed by the first key frame and the second key frame with larger difference is input into the detection model, so that the detection model extracts effective characteristics of liquid in the time dimension.
In one possible implementation manner, when S101 determines, according to the acquired target video data, a difference between any two target video frames in the target video data, and the two target video frames with the largest difference are taken as the first key frame and the second key frame, the following steps may be specifically performed:
S1011: target video data within a target time period is acquired.
The target period of time may be 10 minutes, half an hour, etc., which is not limited in this application.
S1012: and selecting at least two target video frames from the target video data according to the preset time interval.
The preset time interval is not greater than the target time period. For example, when the target period is 10 minutes and the preset time interval is 1 minute, 11 target video frames are selected from the target video data.
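As a small illustration of this sampling step, a sketch follows; the frames-per-second parameter and function name are illustrative assumptions, not from the patent.

```python
def sample_target_frames(video_frames: list, fps: float, interval_seconds: float) -> list:
    """Take one frame every interval_seconds; a 10-minute period with a
    1-minute interval then yields 11 target video frames, because both
    endpoints of the period are kept."""
    step = int(fps * interval_seconds)
    sampled = list(video_frames[::step])
    if (len(video_frames) - 1) % step != 0:    # keep the final frame too
        sampled.append(video_frames[-1])
    return sampled
```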
S1013: when there are two target video frames, determining the two target video frames as the first key frame and the second key frame.
In a specific embodiment, when the preset time interval is equal to the target time period, the two selected target video frames are respectively the first frame image and the last frame image in the target video data. At this time, the first frame image and the last frame image in the target video data are directly determined as the first key frame and the second key frame.
S1014: when there are more than two target video frames, determining the two adjacent target video frames with the largest difference as the first key frame and the second key frame according to the difference between any two adjacent target video frames.
For example, when there are three target video frames, target video frame A, target video frame B and target video frame C, a difference a between target video frame A and target video frame B and a difference b between target video frame B and target video frame C are determined; when difference a is smaller than difference b, target video frame B and target video frame C are determined as the first key frame and the second key frame.
In another possible implementation, when determining the first key frame and the second key frame from the target video frame, the determination may also be performed by a frame difference method.
In one possible implementation manner, when S1014 is performed and the target video frames are greater than two, determining two adjacent target video frames with the largest difference as the first key frame and the second key frame according to the difference between any two adjacent target video frames may specifically be performed according to the following steps:
s10141: for any two adjacent first target video frames and second target video frames, calculating a first difference value between a first pixel value of a first pixel point and a second pixel value of a second pixel point; the first pixel point is any pixel point on the first target video frame, and the second pixel point is a pixel point on the second target video frame corresponding to the first pixel point.
When calculating the difference between the first target video frame and the second target video frame, the number of the first pixel points contained in the first target video frame is the same as the number of the second pixel points contained in the second target video frame, so that the first pixel value of each first pixel point on the first target video frame is subtracted from the second pixel value of the second pixel point at the corresponding position on the second target video frame, and the first difference value of each first pixel point and the second pixel point at the corresponding position is obtained. The number of the first pixel points, the number of the second pixel points and the number of the first difference values are the same.
Fig. 2 is a schematic diagram of each first pixel point on the first target video frame according to the embodiment of the present application, where, as shown in fig. 2, each box represents one first pixel point, and the numerical value in each box represents the first pixel value of the first pixel point. Wherein, the first pixel value of the first pixel point M1 is 152; the first pixel value of the first pixel point M2 is 240; the first pixel value of the first pixel point M3 is 56; the first pixel value of the first pixel point M4 is 88.
Fig. 3 is a schematic diagram of each second pixel point on the second target video frame according to the embodiment of the present application, as shown in fig. 3, each box represents one second pixel point, and the numerical value in each box represents the second pixel value of the second pixel point. Wherein the second pixel value of the second pixel point N1 is 254; the second pixel value of the second pixel point N2 is 38; the second pixel value of the second pixel point N3 is 56; the second pixel value of the second pixel point N4 is 66. The first pixel point M1 and the second pixel point N1 are located at corresponding positions; the first pixel point M2 and the second pixel point N2 are located at corresponding positions.
In calculating the first difference C1, it may specifically be 152 (the first pixel value of the first pixel point M1) -254 (the second pixel value of the second pixel point N1) = -102 (the first difference C1). Similarly, when calculating the first difference C2, it may specifically be 240 (the first pixel value of the first pixel point M2) -38 (the second pixel value of the second pixel point N2) =202 (the first difference C2). When calculating the first difference C3, it may specifically be 56 (the first pixel value of the first pixel point M3) -56 (the second pixel value of the second pixel point N3) =0 (the first difference C3). When calculating the first difference C4, it may specifically be 88 (the first pixel value of the first pixel point M4) -66 (the second pixel value of the second pixel point N4) =22 (the first difference C4).
S10142: for each first difference, an absolute value of the first difference is calculated.
Continuing the example from step S10141: the absolute value of the first difference C1 (i.e., -102) is 102; the absolute value of the first difference C2 (i.e., 202) is 202; the absolute value of the first difference C3 (i.e., 0) is 0; the absolute value of the first difference C4 (i.e., 22) is 22.
S10143: and calculating an average value of the absolute values according to each absolute value, and taking the average value as a difference value between the first target video frame and the second target video frame.
Continuing the example from step S10142, the average value = (102 + 202 + 0 + 22) / 4 = 81.5. The difference value between the first target video frame and the second target video frame is therefore 81.5.
S10144: and determining two adjacent target video frames with the largest difference value as a first key frame and a second key frame according to the difference value between any two adjacent target video frames.
The two adjacent target video frames with the largest difference value are determined as the first key frame and the second key frame. Here, "adjacent" in this application refers to two target video frames that are next to each other after the target video frames have been ordered in time sequence.
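The following minimal NumPy sketch illustrates steps S10141 to S10144 as described above; the names and the uint8-to-int32 cast (to avoid underflow when subtracting pixel values) are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def select_key_frames(frames: list) -> tuple:
    """Return the pair of time-adjacent frames with the largest mean
    absolute pixel difference as (first_key_frame, second_key_frame)."""
    if len(frames) == 2:
        return frames[0], frames[1]
    # difference value per adjacent pair: mean of |first - second| (S10141-S10143)
    diffs = [np.mean(np.abs(frames[i].astype(np.int32) - frames[i + 1].astype(np.int32)))
             for i in range(len(frames) - 1)]
    i = int(np.argmax(diffs))                  # adjacent pair with largest difference (S10144)
    return frames[i], frames[i + 1]
```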
In one possible implementation, the time point corresponding to the first key frame is earlier than the time point corresponding to the second key frame. Before step S102 (determining the input original tensor according to the first color channel corresponding to the first frame image and the second color channel corresponding to the difference image in the target video data) is executed, the following steps may be performed to obtain the difference image:
s1021: calculating a second difference between the fourth pixel value of the fourth pixel point and the third pixel value of the third pixel point; the fourth pixel point is any one pixel point on the second key frame, and the third pixel point is a pixel point on the first key frame, which is located at a position corresponding to the fourth pixel point.
Fig. 4 is a schematic diagram illustrating each third pixel point on the first keyframe provided by the embodiment of the present application, and fig. 5 is a schematic diagram illustrating each fourth pixel point on the second keyframe provided by the embodiment of the present application, as shown in fig. 4 and fig. 5, in calculating the second difference value, specifically, subtracting the fourth pixel value of each fourth pixel point on the second keyframe from the third pixel value of the third pixel point at the corresponding position on the first keyframe, to obtain the second difference value of each fourth pixel point and the third pixel point at the corresponding position. The number of the third pixel points, the number of the fourth pixel points and the number of the second difference values are the same.
In calculating the second differences for this example: 86 - 65 = 21; 95 - 95 = 0; 156 - 156 = 0; 226 - 83 = 143; 62 - 26 = 36; 6 - 16 = -10; 33 - 44 = -11; 96 - 89 = 7; 17 - 17 = 0. For the underlying calculation principle, refer to the calculation of the first difference in step S10141.
S1022: and determining the absolute value of each second difference value as a fifth pixel value of the pixel point at the corresponding position on the difference image to obtain the difference image.
Fig. 6 shows a schematic diagram of each fifth pixel point on the difference image provided in the embodiment of the present application, as shown in fig. 6, in the present application, for each second difference value, an absolute value of the second difference value is calculated, and then the absolute value of the second difference value is determined as a fifth pixel value of the pixel point at a corresponding position on the difference image, so as to obtain the difference image.
In the application, by taking the difference between the pixel values of the pixel points at corresponding positions of the first key frame and the second key frame, the background area that is identical in both key frames is removed, and only the areas that differ between the two key frames remain.
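A one-function sketch of steps S1021 and S1022, under the same illustrative assumptions as the key-frame sketch above:

```python
import numpy as np

def difference_image(first_key_frame: np.ndarray, second_key_frame: np.ndarray) -> np.ndarray:
    """Per-pixel |second - first|: the shared background cancels to 0,
    while changed (leaked-liquid) regions keep large pixel values."""
    second = second_key_frame.astype(np.int32)
    first = first_key_frame.astype(np.int32)
    return np.abs(second - first).astype(np.uint8)
```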
In a possible implementation manner, when step S103 is performed to obtain the first weight and the second weight according to the first tensor, the following steps may be specifically performed:
s1031: global pooling is carried out on the first tensor to obtain a second tensor; wherein the number of channels contained in the first tensor is the same as the number of channels contained in the second tensor;
In the embodiment of the present application, the detection model includes X units. FIG. 7 shows a schematic structural diagram of the first of these units; as shown in FIG. 7, each unit includes a channel-layer attention mechanism module, a first correction module, a non-local spatial attention mechanism module, and a second correction module.
In the application, the first tensor is fed through the input module into the channel-layer attention mechanism module, where global pooling is performed on the first tensor to obtain the second tensor. The image corresponding to the first tensor has dimensions H (image length) × W (image width) × T (channel number); after global pooling, the image corresponding to the second tensor has dimensions 1 × 1 × T.
S1032: and inputting the second tensor into the activation function, and outputting a first weight corresponding to each channel in the first tensor.
In the channel-layer attention mechanism module, the second tensor is input into an activation function (a Sigmoid function), and the module outputs the first weight corresponding to each channel in the first tensor.
S1033: and carrying out product operation on the first tensor and the first weight corresponding to each channel in the first tensor to obtain a third tensor.
The first correction module receives the first tensor output by the input module and the first weights output by the channel-layer attention mechanism module. In the first correction module, for each channel in the first tensor, a product operation is performed between the first tensor and the first weight corresponding to that channel, so that feature correction is carried out on each channel of the first tensor and a third tensor is obtained. The image corresponding to the third tensor has dimensions H (image length) × W (image width) × T (channel number).
S1034: carrying out global average pooling on the third tensor to obtain a fourth tensor; the number of channels in the fourth tensor is 1.
The third tensor output by the first correction module is input into the non-local spatial attention mechanism module, where global average pooling is performed on the third tensor: the T channels contained in the third tensor are normalized to obtain a fourth tensor. The image corresponding to the fourth tensor has dimensions H (image length) × W (image width) × 1 (channel number).
S1035: inputting the fourth tensor into a convolutional neural network to obtain a second weight corresponding to each position (on the image) of the third tensor; wherein the convolutional neural network comprises an activation function.
In the non-local spatial attention mechanism module, the fourth tensor is input into a convolutional neural network, which comprises a sigmoid activation function; the module outputs the second weight corresponding to each position on the third tensor.
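Steps S1031 to S1035 can be summarised in a single module. The following PyTorch sketch is a hedged reading of FIG. 7, not the patent's exact implementation: in particular, the convolution kernel size of the spatial branch is an assumption, and the final line anticipates the second correction described in S1036 below.

```python
import torch
import torch.nn as nn

class AttentionUnit(nn.Module):
    """One unit of the detection model: channel-layer attention (first
    weights), first correction, non-local spatial attention (second
    weights), second correction."""

    def __init__(self, channels: int, kernel_size: int = 7):  # kernel size assumed
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),   # S1031: global pooling, (B, T, H, W) -> (B, T, 1, 1)
            nn.Sigmoid(),              # S1032: first weight per channel
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(1, 1, kernel_size, padding=kernel_size // 2),
            nn.Sigmoid(),              # S1035: second weight per position
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: first tensor, (B, T, H, W)
        w1 = self.channel_gate(x)                 # first weights
        third = x * w1                            # S1033: first correction -> third tensor
        pooled = third.mean(dim=1, keepdim=True)  # S1034: channel-wise average -> (B, 1, H, W)
        w2 = self.spatial_gate(pooled)            # second weights
        return third * w2                         # S1036: second correction -> fifth tensor
```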
In one possible implementation manner, when performing step S103 to correct the first tensor according to the first weight and the second weight to obtain the corrected target tensor, the following steps may be specifically performed:
s1036: and performing dot multiplication operation on the third tensor and the second weight corresponding to each position aiming at the second weight corresponding to the third tensor to each position to obtain a fifth tensor.
As shown in fig. 7, the second correction module receives the second weights corresponding to the positions on the target image output by the non-local spatial attention mechanism module and the third tensor output by the first correction module, and in the second correction module, the dot multiplication operation is performed on the third tensor and the second weights corresponding to the positions on the target image for the second weights corresponding to the positions, so as to obtain a fifth tensor.
S1037: and taking the obtained fifth tensor as a new first tensor, continuing to execute the step to carry out global pooling on the first tensor to obtain a second tensor, and taking the fifth tensor generated in the last time as a corrected target tensor until the preset times are met.
Since the detection model in the application includes X units, the fifth tensor output by the second correction module of the first unit is taken as the output of the first unit and emitted through the first unit's output module. The output of one unit serves as the input of the next unit in the detection model: the fifth tensor output by one unit's second correction module is taken as the new first tensor and fed into the input module of the next unit, and step S1031 (performing global pooling on the first tensor to obtain the second tensor) is repeated, until the fifth tensor output by the X-th unit is taken as the corrected target tensor.
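Under the same assumptions, chaining X such units so that each unit's fifth tensor becomes the next unit's first tensor might look like the following; X = 3 and the channel count are purely illustrative.

```python
import torch
import torch.nn as nn   # AttentionUnit as sketched above

X = 3                                             # preset number of units (illustrative)
units = nn.Sequential(*[AttentionUnit(channels=256) for _ in range(X)])
first_tensor = torch.randn(1, 256, 32, 32)        # dummy first tensor (B, T, H, W)
target_tensor = units(first_tensor)               # the X-th unit's fifth tensor = target tensor
```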
In one possible implementation, when step S103 determines the detection result according to the target tensor, the following may specifically be performed: the detection model further comprises a fully connected layer; the target tensor is input into the fully connected layer, classification is carried out through the fully connected layer, and the position information of the liquid leakage areas and the confidence corresponding to each liquid leakage area are output.
In the application, the channel-layer attention mechanism module is used to extract channel-correlation weights; each channel acts as a detector, so the module can judge which channel features contribute positively to the network learning of the detection model.
The non-local spatial attention mechanism module compensates for the limited local receptive field of convolution: it attends to the importance of each position relative to other positions in every feature map and extracts weights over the H (image length) and W (image width) dimensions. Specifically, for a given image feature (the third tensor in this application), the T channels are normalized by global average pooling to obtain a feature of H (image length) × W (image width) × 1; an importance parameter for each position in the image corresponding to the third tensor, namely a spatial feature weight map, is then obtained through convolution and a sigmoid activation function; finally, a dot-multiplication operation is performed between this weight map and the third tensor to obtain the final output (the fifth tensor, or target tensor).
In the application, the channel layer attention mechanism module and the non-local space attention mechanism module are combined in series, so that the weight information of the multidimensional characteristic spectrum can be effectively extracted, and a more interesting area, namely a liquid leakage area, is focused in the detection model learning.
In a specific embodiment of the detection method of the present application, the detection model uses the classical ResNet50 (Residual Network) as the backbone network and RetinaNet as the overall detection framework; a separable attention mechanism is added at Stage 2 of the ResNet50 backbone to perform weight distribution, and the processed semantic and spatio-temporal feature maps are fed into the subsequent deep network layers for inference to obtain the final result (i.e., the detection result described in the present application). It should be understood that those skilled in the art may choose different network models according to the actual situation.
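One hedged way to realise this in code, reusing the AttentionUnit sketch above, is to wrap a ResNet50 stage with the attention unit; mapping the patent's "Stage 2" to torchvision's layer2 (512 output channels) is our assumption, not the patent's statement.

```python
import torch.nn as nn
import torchvision

backbone = torchvision.models.resnet50(weights=None)  # classical ResNet50 backbone
# Insert the separable attention mechanism after Stage 2 (assumed to be layer2 here).
backbone.layer2 = nn.Sequential(backbone.layer2, AttentionUnit(channels=512))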
In a possible implementation manner, after step S103 (inputting the input original tensor into the detection model and outputting the detection result) is performed, the following steps may specifically be performed:
s1041: and screening out a target confidence coefficient with the confidence coefficient larger than a first threshold according to the corresponding confidence coefficient of each liquid leakage region, and taking the liquid leakage region corresponding to the target confidence coefficient as a candidate liquid leakage region.
In the present application, there may be a plurality of liquid leakage areas output by the detection model, and each liquid leakage area corresponds to one confidence level.
S1042: and according to the calculated intersection ratio between any two candidate liquid leakage areas, merging the candidate liquid leakage areas corresponding to the intersection ratio when the intersection ratio is larger than a second threshold value to obtain a target liquid leakage area.
In the present application, the liquid leakage areas may overlap one another, and the intersection ratio measures the overlap between two candidate liquid leakage areas. When the intersection ratio between two candidate liquid leakage areas is larger than the second threshold, the two candidate liquid leakage areas are merged to form a target liquid leakage area.
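A plain-Python sketch of S1041 and S1042 follows; the threshold values and the bounding-union merge rule are illustrative assumptions.

```python
def iou(a, b):
    """Intersection ratio of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def merge_leak_regions(boxes, scores, first_threshold=0.5, second_threshold=0.5):
    """S1041: keep candidates whose confidence exceeds the first threshold.
    S1042: repeatedly merge any pair whose intersection ratio exceeds the
    second threshold into their bounding union."""
    cand = [b for b, s in zip(boxes, scores) if s > first_threshold]
    merged = True
    while merged:
        merged = False
        for i in range(len(cand)):
            for j in range(i + 1, len(cand)):
                if iou(cand[i], cand[j]) > second_threshold:
                    a, b = cand[i], cand[j]
                    cand[i] = (min(a[0], b[0]), min(a[1], b[1]),
                               max(a[2], b[2]), max(a[3], b[3]))
                    del cand[j]
                    merged = True
                    break
            if merged:
                break
    return cand                                    # target liquid leakage areas
```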
S1043: and sending the position information of the target liquid leakage area to a monitoring terminal for liquid leakage early warning.
The monitoring terminal is generally a handheld mobile terminal of the responsible person, so that the relevant responsible person can be warned. The position information of the target liquid leakage area (i.e., the early-warning information) is sent to the responsible person's terminal so that the responsible person can handle the target liquid leakage area promptly according to the received early-warning information. The position information (i.e., early-warning information) can be sent to the responsible person's terminal in any one or more of the forms of images, text, and video. The terminal may be the responsible person's mobile phone, computer, or other device, which is not limited in this application.
In one possible implementation, before step S103 (inputting the input original tensor into the detection model and outputting the detection result) is performed, the method further includes a step of building the detection model, which is built through learning and training on historical video data, specifically according to the following steps:
s1001: according to the acquired historical video data, determining the difference between any two historical video frames in the historical video data, and taking the two historical video frames with the largest difference as a first historical key frame and a second historical key frame;
s1002: determining a first historical tensor according to a third color channel corresponding to a first historical frame image and a fourth color channel corresponding to a historical difference image in the historical video data; wherein the historical difference image is an image formed by differences between the first historical key frame and the second historical key frame; the third color channel is a composite channel; the fourth color channel is a composite channel or a gray scale channel; the composite channel is a red channel, a green channel and a blue channel;
s1003: inputting the first historical tensor and the correct leakage position information into a detection model to be trained, and outputting the predicted detection result of the detection model to be trained;
S1004: comparing the predicted detection result with the corresponding actual situation to obtain a loss value;
s1005: performing this round of training on the detection model to be trained using the loss value;
s1006: when the number of training rounds reaches a preset number, stopping training and taking the trained model as the detection model.
The training samples (historical video data) provided to the detection model cover a variety of scene conditions, including both scenes in which a leak exists and scenes in which none exists. Training the model with a large number of positive and negative samples improves the generalization capability of the detection model, and hence the reliability of its output.
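A schematic training loop for steps S1001 to S1006 might look as follows; build_detection_model, detection_loss, train_loader and preset_rounds are hypothetical placeholders, and the optimiser choice is our assumption rather than the patent's.

```python
import torch

model = build_detection_model()                    # hypothetical: backbone + attention units
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)  # optimiser choice assumed

for rounds, (history_tensor, true_positions) in enumerate(train_loader, start=1):
    predicted = model(history_tensor)              # S1003: predicted detection result
    loss = detection_loss(predicted, true_positions)   # S1004: compare with ground truth
    optimiser.zero_grad()
    loss.backward()                                # S1005: train with the loss value
    optimiser.step()
    if rounds == preset_rounds:                    # S1006: stop at the preset number of rounds
        break
```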
Embodiment two:
based on the same technical concept, the embodiment of the present application further provides a liquid leakage detection device, and fig. 8 shows a schematic structural diagram of the detection device provided in the embodiment of the present application, as shown in fig. 8, where the device includes:
an obtaining module 801, configured to compare differences between any two target video frames in the target video data according to the obtained target video data, and obtain a difference image;
a determining module 802, configured to determine an input original tensor according to a first color channel corresponding to a first frame image in the target video data and a second color channel corresponding to the difference image;
The input-output module 803 is configured to input the input original tensor into a detection model, and output a detection result; the detection model is built in a model building module according to historical video data, and the detection result comprises position information of liquid leakage areas and confidence degrees corresponding to each liquid leakage area.
Wherein the first color channel is a composite channel; the second color channel is the composite channel or the gray scale channel; the composite channel is a red channel, a green channel and a blue channel;
in the detection model: inputting the input original tensor into a convolutional neural network in the detection model to obtain a first tensor;
acquiring a first weight and a second weight according to the first tensor; the input original tensor comprises the first color channel and the second color channel; correcting the first tensor according to the first weight and the second weight to obtain a corrected target tensor; and determining the detection result according to the target tensor.
Optionally, when the obtaining module 801 determines, according to the obtained target video data, a difference between any two target video frames in the target video data, and uses two target video frames with the largest difference as a first key frame and a second key frame, the method is specifically used for:
Acquiring target video data in a target time period;
selecting at least two target video frames from the target video data according to a preset time interval;
when the target video frames are two, determining the two target video frames as the first key frame and the second key frame;
and when the target video frames are larger than two, determining the two adjacent target video frames with the largest difference as the first key frame and the second key frame according to the difference between any two adjacent target video frames.
Optionally, when the target video frames are greater than two, the obtaining module 801 is specifically configured to, when determining, according to the difference between any two adjacent target video frames, the two adjacent target video frames with the largest difference as the first key frame and the second key frame:
for any two adjacent first target video frames and second target video frames, calculating a first difference value between a first pixel value of a first pixel point and a second pixel value of a second pixel point; the first pixel point is any pixel point on the first target video frame, and the second pixel point is a pixel point on the second target video frame, which is located at a position corresponding to the first pixel point;
Calculating an absolute value of each first difference value;
calculating an average value of the absolute values according to each absolute value, and taking the average value as a difference value between the first target video frame and the second target video frame;
and determining the two adjacent target video frames with the largest difference value as the first key frame and the second key frame according to the difference value between any two adjacent target video frames.
Optionally, the time point corresponding to the first key frame is earlier than the time point corresponding to the second key frame;
the acquisition module is specifically used for:
before the determining module 802 determines an input original tensor according to a first color channel corresponding to a first frame image and a second color channel corresponding to a difference image in the target video data, calculating a second difference between a fourth pixel value of a fourth pixel point and a third pixel value of a third pixel point; the fourth pixel point is any pixel point on the second key frame, and the third pixel point is a pixel point on the first key frame, which is located at a position corresponding to the fourth pixel point;
And determining, for each of the second differences, an absolute value of the second difference as a fifth pixel value of a pixel point at a corresponding position on the difference image, so as to obtain the difference image.
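A minimal sketch of the difference image construction described above, assuming the two key frames are equally sized single-channel arrays:

```python
import numpy as np

def difference_image(first_key: np.ndarray, second_key: np.ndarray) -> np.ndarray:
    """Per-pixel absolute difference between the second and first key frames;
    each absolute value becomes the pixel value at the corresponding position."""
    d = second_key.astype(np.int16) - first_key.astype(np.int16)
    return np.abs(d).astype(np.uint8)
```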
Optionally, the input/output module 803 is specifically configured to, when acquiring the first weight and the second weight according to the first tensor:
performing global pooling on the first tensor to obtain a second tensor whose length and width are both 1; wherein the second tensor has the same number of channels as the first tensor;
inputting the second tensor into an activation function, and outputting the first weight corresponding to each channel of the first tensor;
for each channel in the first tensor, performing product operation on the first tensor and a first weight corresponding to the channel to obtain a third tensor;
carrying out global average pooling on the third tensor to obtain a fourth tensor; the number of channels in the fourth tensor is 1;
inputting the fourth tensor into a convolutional neural network of the detection model to obtain a second weight corresponding to each position of the third tensor on the image; wherein the convolutional neural network comprises an activation function.
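For illustration, the following PyTorch sketch mirrors the two weighting stages described above: a per-channel first weight obtained from global pooling followed by an activation function, and a per-position second weight obtained by convolving the single-channel fourth tensor. The sigmoid activation, the linear layer and the 7x7 kernel size are assumptions of this sketch, not details disclosed by the embodiment.

```python
import torch
import torch.nn as nn

class LeakAttention(nn.Module):
    """Sketch of the channel and spatial weighting stages (sizes assumed)."""

    def __init__(self, channels: int):
        super().__init__()
        self.channel_fc = nn.Linear(channels, channels)          # hypothetical
        self.spatial_conv = nn.Conv2d(1, 1, kernel_size=7, padding=3)

    def forward(self, t1: torch.Tensor):
        n, c, h, w = t1.shape
        # Second tensor: global pooling, length and width 1, same channel count
        t2 = t1.mean(dim=(2, 3))                    # (N, C)
        w1 = torch.sigmoid(self.channel_fc(t2))     # first weight per channel
        t3 = t1 * w1.view(n, c, 1, 1)               # third tensor
        # Fourth tensor: averaging across channels leaves 1 channel
        t4 = t3.mean(dim=1, keepdim=True)           # (N, 1, H, W)
        w2 = torch.sigmoid(self.spatial_conv(t4))   # second weight per position
        return t3, w2
```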
Optionally, when the input/output module 803 corrects the first tensor according to the first weight and the second weight to obtain a corrected target tensor, the input/output module 803 is specifically configured to:
after the third tensor is obtained from the first tensor and the first weight, performing, for the second weight corresponding to each position of the third tensor, a dot multiplication operation between the third tensor and the second weight corresponding to that position, so as to obtain a fifth tensor;
and taking the obtained fifth tensor as a new first tensor and returning to the step of performing global pooling on the first tensor to obtain a second tensor, until a preset number of iterations is reached; the fifth tensor generated in the last iteration is taken as the corrected target tensor.
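The iterative correction can then be sketched as the loop below, which reuses the hypothetical LeakAttention module from the previous sketch; the preset number of iterations is an assumed parameter.

```python
def correct_tensor(t1, attention, num_iters: int = 2):
    """Iteratively refine the first tensor.

    attention: an instance of the hypothetical LeakAttention module above;
    num_iters: an assumed preset iteration count.
    """
    for _ in range(num_iters):
        t3, w2 = attention(t1)   # third tensor and per-position second weight
        t1 = t3 * w2             # fifth tensor: element-wise (dot) product
    return t1                    # last fifth tensor = corrected target tensor
```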
Optionally, the device further comprises:
the screening module is configured to, after the input/output module 803 inputs the input original tensor into the detection model and outputs the detection result, screen out target confidence degrees greater than a first threshold according to the confidence degree corresponding to each liquid leakage region, and take the liquid leakage regions corresponding to the target confidence degrees as candidate liquid leakage regions;
the merging module is used for calculating the intersection-over-union ratio between any two candidate liquid leakage areas and, when the ratio is greater than a second threshold, merging the corresponding candidate liquid leakage areas to obtain a target liquid leakage area;
and the sending module is used for sending the position information of the target liquid leakage area to a monitoring terminal for early warning. In general, the monitoring terminal is a handheld mobile terminal carried by the person in charge, so that the relevant person in charge can be warned in time.
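As an illustrative sketch of the screening and merging modules, the following code filters regions by confidence and merges region pairs whose intersection-over-union ratio exceeds a second threshold into one bounding box; representing a merged area by the union bounding box, and the threshold values used, are assumptions of this sketch.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def merge_regions(regions, conf_thresh=0.5, iou_thresh=0.5):
    """regions: list of (box, confidence). Filter by confidence (first
    threshold), then merge any pair whose IoU exceeds the second threshold
    into the union bounding box. Thresholds are illustrative assumptions."""
    cands = [r for r in regions if r[1] > conf_thresh]
    merged = True
    while merged:
        merged = False
        for i in range(len(cands)):
            for j in range(i + 1, len(cands)):
                (ba, ca), (bb, cb) = cands[i], cands[j]
                if iou(ba, bb) > iou_thresh:
                    union = (min(ba[0], bb[0]), min(ba[1], bb[1]),
                             max(ba[2], bb[2]), max(ba[3], bb[3]))
                    cands[i] = (union, max(ca, cb))
                    del cands[j]
                    merged = True
                    break
            if merged:
                break
    return cands
```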
Reference is made to the description of the first embodiment for specific implementation of method steps and principles, and detailed descriptions thereof are omitted.
Embodiment three:
Based on the same technical concept, the embodiment of the present application further provides an electronic device. Fig. 9 shows a schematic structural diagram of the electronic device provided in the embodiment of the present application. As shown in Fig. 9, the electronic device 900 includes a processor 901, a memory 902 and a bus 903; the memory stores machine-readable instructions executable by the processor. When the electronic device is running, the processor 901 communicates with the memory 902 through the bus 903, and the processor 901 executes the machine-readable instructions to perform the steps of the liquid leakage detection method described in the first embodiment.
Reference is made to the description of the first embodiment for specific implementation of method steps and principles, and detailed descriptions thereof are omitted.
Embodiment four:
Based on the same technical concept, a machine-readable storage medium is further provided in the fourth embodiment of the present application, where machine-readable instructions executable by a processor are stored; when the machine-readable instructions are executed by the processor, the method steps described in the first embodiment are implemented.
In particular, the machine-readable storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk. When the machine-readable instructions contained in the machine-readable storage medium are executed by a computer device (which may be a personal computer, a server, or a network device), all or part of the steps of the liquid leakage detection method according to the first embodiment of the present invention are implemented.
Reference is made to the description of the first embodiment for specific implementation of method steps and principles, and detailed descriptions thereof are omitted.
Embodiment five:
Based on the same technical concept, the embodiments of the present application further provide a liquid leakage detection system, including a video acquisition device, a liquid leakage detection device and a monitoring terminal. The video acquisition device acquires target video data and transmits the target video data to the liquid leakage detection device; the liquid leakage detection device receives the target video data and implements the liquid leakage detection method described in the first embodiment; and the monitoring terminal receives the position information of the liquid leakage area and performs early warning.
The video acquisition device can transmit the acquired video data to the liquid leakage detection device in real time in a wired or wireless manner. It should be understood that, in an application scenario with multiple monitoring points, a video acquisition device and a monitoring terminal can be provided correspondingly for each monitoring point.
Reference is made to the description of the first embodiment for specific implementation of method steps and principles, and detailed descriptions thereof are omitted.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative. For example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection shown or discussed between components may be indirect coupling or communication connection through some communication interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (14)

1. A liquid leakage detection method, comprising the steps of:
according to the obtained target video data, comparing the difference between any two target video frames in the target video data to obtain a difference image;
determining an input original tensor according to a first color channel corresponding to a first frame image in the target video data and a second color channel corresponding to the difference image;
inputting the input original tensor into a detection model, and outputting the detection result; wherein the detection model is built according to historical video data, and the detection result comprises position information of liquid leakage areas and a confidence degree corresponding to each liquid leakage area.
2. The liquid leakage detection method according to claim 1, wherein the step of comparing differences between any two target video frames in the target video data based on the acquired target video data, and acquiring a difference image, comprises:
acquiring target video data to be detected in a target time period;
selecting, according to a preset time interval, at least two target video frames from the target video data, and determining therefrom a first key frame and a second key frame, wherein the time point corresponding to the first key frame is earlier than the time point corresponding to the second key frame;
Acquiring the difference image according to the first key frame and the second key frame;
wherein when there are two target video frames, the two target video frames are determined as the first key frame and the second key frame;
and when there are more than two target video frames, differences between any two adjacent target video frames are compared, and the two adjacent target video frames with the largest difference are determined as the first key frame and the second key frame.
3. The liquid leakage detection method according to claim 2, wherein the step of comparing differences between any two adjacent target video frames when there are more than two target video frames, and determining the two adjacent target video frames with the largest difference as the first key frame and the second key frame, comprises:
for any two adjacent target video frames, namely a first target video frame and a second target video frame, calculating a first difference value between a first pixel value of a first pixel point and a second pixel value of a second pixel point; the first pixel point is any pixel point on the first target video frame, and the second pixel point is the pixel point on the second target video frame located at the position corresponding to the first pixel point;
Calculating an absolute value of each first difference value;
calculating an average value of the absolute values according to each absolute value, and taking the average value as a difference value between the first target video frame and the second target video frame;
and determining the two adjacent target video frames with the largest difference value as the first key frame and the second key frame according to the difference value between any two adjacent target video frames.
4. The liquid leakage detection method according to claim 2, wherein the step of acquiring the difference image from the first key frame and the second key frame comprises:
calculating a second difference between the fourth pixel value of the fourth pixel point and the third pixel value of the third pixel point; the fourth pixel point is any pixel point on the second key frame, and the third pixel point is a pixel point on the first key frame, which is located at a position corresponding to the fourth pixel point;
and determining the absolute value of each second difference value as a fifth pixel value of a pixel point at a corresponding position on the difference image to obtain the difference image.
5. The liquid leakage detection method according to claim 1, wherein,
the step of inputting the input original tensor into a detection model and outputting the detection result comprises the following steps:
in the detection model: inputting the input original tensor into a convolutional neural network in the detection model to obtain a first tensor; wherein the input raw tensor comprises the first color channel and the second color channel;
acquiring a first weight and a second weight according to the first tensor;
correcting the first tensor according to the first weight and the second weight to obtain a corrected target tensor;
and determining the detection result according to the target tensor.
6. The liquid leakage detection method according to claim 5, wherein the step of acquiring the first weight and the second weight from the first tensor comprises:
performing global pooling on the first tensor to obtain a second tensor whose length and width are both 1; wherein the second tensor has the same number of channels as the first tensor;
inputting the second tensor into an activation function, and outputting the first weight corresponding to each channel in the first tensor;
For each channel in the first tensor, performing product operation on the first tensor and the first weight corresponding to the channel to obtain a third tensor;
performing global average pooling on the third tensor to obtain a fourth tensor with the number of channels being 1; inputting the fourth tensor into a convolutional neural network of the detection model to obtain a second weight corresponding to each position on the fourth tensor; wherein the convolutional neural network comprises an activation function.
7. The liquid leakage detection method according to claim 6, wherein the step of correcting the first tensor according to the first weight and the second weight to obtain the corrected target tensor comprises:
performing, for the second weight corresponding to each position of the third tensor, a point multiplication operation between the third tensor and the second weight corresponding to that position, to obtain a fifth tensor;
and taking the obtained fifth tensor as a new first tensor and returning to the step of performing global pooling on the first tensor to obtain a second tensor, until a preset number of iterations is reached; the fifth tensor generated in the last iteration is taken as the corrected target tensor.
8. The liquid leakage detection method according to claim 1, wherein the first color channel is a composite channel; the second color channel is the composite channel or a gray-scale channel; and the composite channel consists of a red channel, a green channel and a blue channel.
9. The liquid leakage detection method according to any one of claims 1 to 8, characterized by further comprising, after the step of outputting the detection result, the following steps:
screening out target confidence degrees greater than a first threshold according to the confidence degree corresponding to each liquid leakage region, and taking the liquid leakage regions corresponding to the target confidence degrees as candidate liquid leakage regions;
and calculating the intersection-over-union ratio between any two candidate liquid leakage areas, and, when the ratio is greater than a second threshold, merging the corresponding candidate liquid leakage areas to obtain a target liquid leakage area.
10. The liquid leakage detection method according to claim 9, further comprising, after said obtaining the target liquid leakage area, the steps of:
and sending the position information of the target liquid leakage area to a monitoring terminal for liquid leakage early warning.
11. A liquid leakage detection apparatus, comprising:
the acquisition module is used for comparing, according to the acquired target video data, differences between any two target video frames in the target video data to obtain a difference image;
the determining module is used for determining an input original tensor according to a first color channel corresponding to a first frame image and a second color channel corresponding to the difference image in the target video data;
the input/output module is used for inputting the input original tensor into a detection model and outputting the detection result; the detection model is built in a model building module according to historical video data, and the detection result comprises position information of liquid leakage areas and confidence degrees corresponding to each liquid leakage area.
12. A machine-readable storage medium having stored thereon machine-readable instructions executable by a processor, which, when executed by the processor, implement the liquid leakage detection method of any one of claims 1 to 10.
13. A liquid leakage detection system, comprising:
The video acquisition device is used for acquiring target video data and transmitting the target video data to the liquid leakage detection device;
the liquid leakage detection device is used for receiving the target video data and implementing the liquid leakage detection method according to any one of claims 1 to 10;
and the monitoring terminal is used for receiving the position information of the liquid leakage area and carrying out early warning.
14. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor; when the electronic device is running, the processor and the memory communicate over the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the liquid leakage detection method according to any one of claims 1 to 10.
CN202111682398.8A 2021-12-30 2021-12-30 Liquid leakage detection method, device, system and machine-readable storage medium Pending CN116434100A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111682398.8A CN116434100A (en) 2021-12-30 2021-12-30 Liquid leakage detection method, device, system and machine-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111682398.8A CN116434100A (en) 2021-12-30 2021-12-30 Liquid leakage detection method, device, system and machine-readable storage medium

Publications (1)

Publication Number Publication Date
CN116434100A true CN116434100A (en) 2023-07-14

Family

ID=87084171

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111682398.8A Pending CN116434100A (en) 2021-12-30 2021-12-30 Liquid leakage detection method, device, system and machine-readable storage medium

Country Status (1)

Country Link
CN (1) CN116434100A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117231524A (en) * 2023-11-14 2023-12-15 浙江嘉源和达水务有限公司 Pump cavitation state monitoring and diagnosing method and system
CN117231524B (en) * 2023-11-14 2024-01-26 浙江嘉源和达水务有限公司 Pump cavitation state monitoring and diagnosing method and system

Similar Documents

Publication Publication Date Title
CN109977921B (en) Method for detecting hidden danger of power transmission line
CN111709408B (en) Image authenticity detection method and device
CN112669316B (en) Power production abnormality monitoring method, device, computer equipment and storage medium
CN110852222A (en) Campus corridor scene intelligent monitoring method based on target detection
CN113052876A (en) Video relay tracking method and system based on deep learning
CN111507138A (en) Image recognition method and device, computer equipment and storage medium
CN107730530A (en) A kind of remote emergency management control method based on smart city
CN116030074A (en) Identification method, re-identification method and related equipment for road diseases
CN116434100A (en) Liquid leakage detection method, device, system and machine-readable storage medium
CN115346278A (en) Image detection method, device, readable medium and electronic equipment
CN110942456A (en) Tampered image detection method, device, equipment and storage medium
CN116226435B (en) Cross-modal retrieval-based association matching method for remote sensing image and AIS information
CN110909578A (en) Low-resolution image recognition method and device and storage medium
CN117315537A (en) Video sensitive information detection method and system based on pre-training strategy
CN112580778A (en) Job worker mobile phone use detection method based on YOLOv5 and Pose-animation
CN116824352A (en) Water surface floater identification method based on semantic segmentation and image anomaly detection
CN111369557A (en) Image processing method, image processing device, computing equipment and storage medium
CN115457015A (en) Image no-reference quality evaluation method and device based on visual interactive perception double-flow network
CN113298102B (en) Training method and device for target classification model
CN114820755A (en) Depth map estimation method and system
CN114049319A (en) Text security type detection method and device, equipment, medium and product thereof
Ding et al. A novel deep learning framework for detecting seafarer’s unsafe behavior
CN112183359A (en) Violent content detection method, device and equipment in video
Chang et al. On the predictability in reversible steganography
He et al. Enhanced features in image manipulation detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination