CN115240020A - MaskRCNN water seepage detection method and system based on weak light compensation - Google Patents
MaskRCNN water seepage detection method and system based on weak light compensation Download PDFInfo
- Publication number
- CN115240020A (application number CN202210464625.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- water seepage
- maskrcnn
- layer
- weak light
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06T7/11 — Region-based segmentation
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/75 — Unsharp masking
- G06V10/22 — Image preprocessing by selection of a specific region containing or referencing a pattern
- G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/26 — Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region
- G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners
- G06V10/761 — Proximity, similarity or dissimilarity measures
- G06V10/764 — Recognition using classification, e.g. of video objects
- G06V10/7715 — Feature extraction, e.g. by transforming the feature space
- G06V10/80 — Fusion of data from various sources at the sensor, preprocessing, feature extraction or classification level
- G06V10/803 — Fusion of input or preprocessed data
- G06V10/82 — Recognition using neural networks
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/20221 — Image fusion; image merging
- G06V2201/07 — Target detection
- Y02E10/20 — Hydro energy
Abstract
The invention relates to a MaskRCNN water seepage detection method and system based on weak light compensation, wherein the method comprises the following steps: S1, enhancing and expanding the sample data set by fusing sample data; S2, annotating the expanded data set with Labelme to generate annotation files for the water accumulation areas; S3, performing enhancement operations on the annotated data set; S4, training a MaskRCNN model with the enhanced data set; S5, importing the image to be detected into an MBLLEN model to obtain the final weak-light-enhanced water seepage area image; and S6, importing the weak-light-enhanced water seepage image into the MaskRCNN model, obtaining a feature map of the water seepage image by convolution, obtaining candidate regions through the RPN, and obtaining the final water seepage region position through the RoI Align layer. According to the invention, the inspection image to be detected is first imported into the MBLLEN model for weak light enhancement, and the enhanced image is then imported into the MaskRCNN model for water accumulation region detection, so that effective target detection can be performed and the boundary of the target region can be accurately segmented.
Description
Technical Field
The invention relates to the technical field of recognition-algorithm optimization for inspection robots in complex indoor environments, and in particular to a MaskRCNN water seepage detection method and system based on weak light compensation.
Background
During operation of a hydraulic turbine set, water leakage at the main shaft seal occurs frequently and seriously affects the stable running of the set. On a turbine floor criss-crossed with cables, such leakage can easily cause accidents such as short circuits, and when the leakage is large there is a danger of serious water accumulation on the turbine floor. Equipment faults caused by dripping and leaking water on turbine-floor equipment should also be repaired in time to keep production running stably. Regularly and comprehensively inspecting the turbine-floor equipment for dripping and leakage, analyzing the overall water seepage condition, and carrying out timely repair and maintenance according to the form and degree of the seepage therefore effectively improves the safety factor and reduces the economic losses and safety hazards caused by water seepage. However, light conditions on the turbine floor are poor, and even with supplementary lighting the border of a seepage or leakage area is difficult to distinguish in the images shot by the inspection robot.
Disclosure of Invention
In order to solve the technical problems in the prior art, the invention provides a MaskRCNN water seepage detection method and system based on weak light compensation.
The method of the invention is realized by the following technical scheme: the MaskRCNN water seepage detection method based on weak light compensation comprises the following steps:
S1, capturing water accumulation images, collecting water accumulation and seepage images from the network, and enhancing and expanding the sample data set by fusing sample data;
S2, annotating the expanded data set with Labelme to generate annotation files for the water accumulation areas;
S3, performing enhancement operations on the annotated data set: flipping, scaling and color-gamut adjustment are applied to each picture, which is restored to its original pixel size after the operations are completed;
S4, training a MaskRCNN model with the enhanced data set;
S5, importing the image to be detected into an MBLLEN model, obtaining feature maps at all levels through the feature extraction module (FEM) layers, obtaining a weak-light-enhanced picture for each feature map through the enhancement module (EM) layers, and inputting the enhanced feature maps into the fusion module (FM) to obtain the final weak-light-enhanced water seepage area image;
and S6, importing the weak-light-enhanced water seepage image into the MaskRCNN model, obtaining a feature map of the water seepage image by convolution, obtaining candidate regions through the RPN, and obtaining the final water seepage region position through the RoI Align layer.
The system of the invention is realized by the following technical scheme: the MaskRCNN water seepage detection system based on weak light compensation comprises:
a fused-sample data enhancement module: used for enhancing and expanding the sample data set by fusing sample data, based on captured water accumulation images and water accumulation and seepage images collected from the network;
a data annotation module: uses Labelme to annotate the expanded data set and generate annotation files for the water accumulation areas;
an enhancement operation module: used for performing enhancement operations on the annotated data set; flipping, scaling and color-gamut adjustment are applied to each picture, which is restored to its original pixel size after the operations are finished;
a MaskRCNN model training module: trains a MaskRCNN model with the enhanced data set;
a weak-light-enhanced water seepage area image acquisition module: imports the image to be detected into an MBLLEN model, obtains feature maps at all levels through the feature extraction module (FEM) layers, obtains a weak-light-enhanced picture for each feature map through the enhancement module (EM) layers, and inputs the enhanced feature maps into the fusion module (FM) to obtain the final weak-light-enhanced water seepage area image;
a water seepage region position acquisition module: imports the weak-light-enhanced water seepage image into the MaskRCNN model, obtains a feature map of the water seepage image by convolution, obtains candidate regions through the RPN, and obtains the final water seepage region position through the RoI Align layer.
Compared with the prior art, the invention has the following advantages and beneficial effects:
according to the invention, the inspection image to be detected is led into the MBLLEN model for weak light enhancement, and then the weak light enhanced image is led into the MaskRCNN model for ponding region detection, so that not only can effective target detection be carried out, but also the boundary of a target region can be accurately segmented.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 (a) is a schematic of a water image capture of the present invention;
FIG. 2 (b) is a schematic diagram of an image of water seepage collected on a net;
FIG. 2 (c) is a schematic diagram of fusion sample data enhancement according to the present invention;
FIG. 3 is a schematic diagram of an image of a ponding region marked by Labelme software;
FIG. 4 is a schematic structural diagram of the MaskRCNN model;
FIG. 5 is a schematic diagram of the MBLLEN model structure;
FIG. 6 (a) is a low light image;
fig. 6 (b) is an enhanced image.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
Examples
As shown in fig. 1, the MaskRCNN water seepage detection method based on weak light compensation in this embodiment comprises the following steps:
S1, capturing water accumulation images, collecting similar water accumulation and seepage images from the network, and enhancing and expanding the sample data set by fusing sample data;
S2, annotating the expanded data set with Labelme to generate annotation files for the water accumulation areas;
S3, performing enhancement operations on the annotated data set: flipping, scaling, color-gamut adjustment and similar operations are applied to each picture, which is restored to its original pixel size after the operations are completed;
S4, training a MaskRCNN model with the enhanced data set;
S5, importing the image to be detected into an MBLLEN model, obtaining feature maps at all levels through the feature extraction module (FEM) layers, obtaining a weak-light-enhanced picture for each feature map through the enhancement module (EM) layers, and inputting the enhanced feature maps into the fusion module (FM) to obtain the final weak-light-enhanced water seepage area image;
and S6, importing the weak-light-enhanced water seepage image into the MaskRCNN model, obtaining a feature map of the water seepage image by convolution, obtaining candidate regions through the RPN, and obtaining the final water seepage region position through the RoI Align layer.
As shown in fig. 2 (a), fig. 2 (b) and fig. 2 (c), the specific process of fused-sample data enhancement in step S1 in this embodiment is as follows:
two pictures are randomly selected from the training set and enhanced with data augmentation methods including flipping, adding noise and cropping, and the two water accumulation images are then fused together with a random weight to increase the diversity of the samples. Specifically, the formula for fusing the two images is:
Image(R,G,B) = η × Image1(R,G,B) + (1 − η) × Image2(R,G,B)
η = rand(0.3, 0.7)    (1)
where Image(R,G,B) is the fused image, Image1(R,G,B) and Image2(R,G,B) are the two original images, η = rand(0.3, 0.7) is a random fusion weight drawn between 0.3 and 0.7, and R, G, B are the three channels of the image.
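The fusion step above can be sketched as follows; the function name and the use of NumPy are illustrative assumptions, not part of the patent:

```python
import numpy as np

def fuse_images(img1, img2, low=0.3, high=0.7, rng=None):
    """Blend two same-sized RGB images with a random weight eta ~ U(low, high),
    following Image = eta * Image1 + (1 - eta) * Image2 (equation (1))."""
    rng = np.random.default_rng() if rng is None else rng
    eta = rng.uniform(low, high)
    fused = eta * img1.astype(np.float64) + (1.0 - eta) * img2.astype(np.float64)
    # Clip back to valid 8-bit pixel range before converting.
    return np.clip(fused, 0, 255).astype(np.uint8), eta
```

Since eta never reaches 0 or 1, both source images always contribute to the fused sample.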
As shown in fig. 3, the specific process of data annotation in step S2 in this embodiment is as follows:
the water seepage area is marked with multiple line segments and points; the polygonal outline of the water seepage area is labeled with the Labelme labeling tool and the label name of the seepage outline is set. Each labeled sample generates a corresponding json file, which stores the outline and image information of the target area in the sample.
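As an illustration of what the generated json file encodes, the following sketch rasterizes a Labelme-style polygon into a binary mask; the minimal JSON schema (`shapes`, `label`, `points`) and all function names are simplifying assumptions for demonstration:

```python
import json
import numpy as np

def polygon_to_mask(points, height, width):
    """Rasterize one polygon (list of [x, y] vertices) into a binary mask
    by even-odd ray casting, evaluated at each pixel centre."""
    poly = np.asarray(points, dtype=np.float64)
    ys, xs = np.mgrid[0:height, 0:width]
    inside = np.zeros((height, width), dtype=bool)
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        crosses = (y1 <= ys) != (y2 <= ys)          # edge spans this row
        with np.errstate(divide="ignore", invalid="ignore"):
            x_int = x1 + (ys - y1) * (x2 - x1) / (y2 - y1)
        inside ^= crosses & (xs < x_int)            # toggle parity left of edge
    return inside.astype(np.uint8)

def masks_from_labelme(json_text, height, width, label="seepage"):
    """Collect masks for all shapes carrying the given label from a
    Labelme-style JSON document (hypothetical minimal schema)."""
    doc = json.loads(json_text)
    return [polygon_to_mask(s["points"], height, width)
            for s in doc.get("shapes", []) if s.get("label") == label]
```

In practice Labelme files also store the image path and base64 image data; only the polygon outline is needed to build training masks.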
As shown in fig. 4, MaskRCNN in this embodiment is an instance segmentation framework used to perform effective target detection and accurately segment the boundary of the target region. The MaskRCNN model mainly consists of the feature extraction backbone ResNet and an RPN module: ResNet extracts features of the image to be detected through a multilayer convolution structure, and the RPN generates multiple ROI regions. MaskRCNN replaces RoI Pooling with a RoI Align layer, using bilinear interpolation to map the ROI feature regions generated by the RPN to a uniform 7 × 7 size. Finally, the ROI regions generated by the RPN layer are classified, bounding-box regression is performed, and a fully convolutional network (FCN) generates the Mask corresponding to the water seepage area.
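A minimal single-channel sketch of the RoI Align idea described above, taking one bilinear sample per output bin rather than the multi-sample averaging of the full operator; all names are illustrative:

```python
import numpy as np

def bilinear(feat, y, x):
    """Bilinearly interpolate a 2-D feature map at the continuous point (y, x)."""
    h, w = feat.shape
    y0 = min(max(int(np.floor(y)), 0), h - 2)
    x0 = min(max(int(np.floor(x)), 0), w - 2)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * feat[y0, x0] + (1 - dy) * dx * feat[y0, x0 + 1]
            + dy * (1 - dx) * feat[y0 + 1, x0] + dy * dx * feat[y0 + 1, x0 + 1])

def roi_align(feat, box, out_size=7):
    """Map one ROI (y1, x1, y2, x2, in feature-map coordinates) onto a fixed
    out_size x out_size grid by sampling each bin centre bilinearly --
    a one-sample-per-bin simplification of RoI Align."""
    y1, x1, y2, x2 = box
    bin_h = (y2 - y1) / out_size
    bin_w = (x2 - x1) / out_size
    out = np.empty((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            out[i, j] = bilinear(feat, y1 + (i + 0.5) * bin_h,
                                 x1 + (j + 0.5) * bin_w)
    return out
```

Unlike RoI Pooling, no coordinate is rounded to an integer, which is what preserves the sub-pixel alignment needed for accurate mask boundaries.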
In this embodiment, the overall loss function of MaskRCNN is defined as:
Loss = L_cls + L_box + L_mask    (2)
where L_cls is the classification error, L_box the error of the bounding-box regression, and L_mask the error of the Mask.
The classification error L_cls is constructed with the log-likelihood loss:
L_cls = -(1/N) Σ_{i=1}^{N} Σ_{j=1}^{M} y_ij · log(p_ij)    (3)
where X and Y are the predicted and true classifications respectively, N is the number of input samples, M is the number of possible classes, p_ij is the model's predicted probability that sample x_i belongs to class j, and y_ij indicates whether the true class of sample x_i is class j. To increase the robustness of the loss function, the bounding-box error L_box uses the L1 loss; the pixels in each ROI are passed through a sigmoid function and the average relative entropy (binary cross-entropy) gives the Mask error L_mask.
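The log-likelihood classification loss can be checked numerically with a short sketch (the function name and NumPy usage are assumptions):

```python
import numpy as np

def log_likelihood_loss(probs, labels):
    """Multi-class log-likelihood loss L_cls = -(1/N) * sum_i sum_j y_ij * log(p_ij),
    where probs is an (N, M) matrix of predicted class probabilities and
    labels an (N,) vector of true class indices."""
    n = probs.shape[0]
    y = np.zeros_like(probs)                 # one-hot encoding of y_ij
    y[np.arange(n), labels] = 1.0
    return -np.sum(y * np.log(probs + 1e-12)) / n   # epsilon avoids log(0)
```

For a uniform two-class prediction the loss is log 2, and it approaches zero as the predicted probability of the true class approaches 1.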
To achieve better generalization on a small labeled data set, this embodiment fine-tunes MaskRCNN from weights pre-trained on the COCO data set (mask_rcnn_coco.h5). Classification is then performed with the MBLLEN and MaskRCNN water accumulation region detection models: the inspection image to be detected is imported into the MBLLEN model for weak light enhancement, the enhanced image is imported into the MaskRCNN model for water accumulation region detection, and the marked water seepage region is output.
As shown in fig. 5, the MBLLEN model in this embodiment is a multi-branch low-light enhancement deep learning network that extracts image features at different levels by convolution and feeds the feature maps of the different levels into multiple sub-networks for enhancement. The MBLLEN model mainly comprises a feature extraction module (FEM), an enhancement module (EM) and a fusion module (FM). The FEM is a unidirectional 10-layer network with 32 3×3 convolution kernels per layer, stride 1 and ReLU activation, and uses no pooling layers; the output of each layer is simultaneously the input of the next FEM convolutional layer and the input of the corresponding EM convolutional layer. Since the FEM contains 10 feature extraction layers, the EM contains 10 structurally identical sub-networks, each consisting of one convolutional layer, three further convolutional layers and three deconvolutional layers. The FM fuses all images output by the EM sub-networks, and the final enhancement result is obtained with a 3-channel 1×1 convolution.
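A toy single-channel sketch of the MBLLEN data flow described above, using a 3×3 mean filter as a stand-in for the learned convolutions; it only illustrates the FEM-to-EM-to-FM wiring, not the trained network:

```python
import numpy as np

def conv3x3_mean(x):
    """Stand-in for a 3x3 convolution: a same-padded 3x3 mean filter."""
    p = np.pad(x, 1, mode="edge")
    h, w = x.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def mbllen_forward(img, num_fem_layers=10):
    """Structural sketch of MBLLEN: each FEM layer feeds both the next FEM
    layer and its own EM sub-network; the FM fuses all EM outputs (here
    approximated by their mean) into the final enhanced image."""
    fem_out, x = [], img
    for _ in range(num_fem_layers):
        x = np.maximum(conv3x3_mean(x), 0.0)   # conv + ReLU, no pooling
        fem_out.append(x)
    # One EM sub-network per FEM layer, applied to that layer's output.
    em_out = [np.maximum(conv3x3_mean(f), 0.0) for f in fem_out]
    return np.mean(em_out, axis=0)             # FM: 1x1-conv-style fusion
```

The key design point visible here is that shallow and deep features are enhanced independently and only merged at the very end.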
To train the MBLLEN model so that it can compensate for weak light, a structure loss (L_Str), a pre-trained VGG content loss (L_VGG) and a region loss (L_Region) are defined. Specifically, the overall loss function is:
Loss = L_Str + L_VGG/i,j + L_Region    (4)
the structural loss is mainly used for reducing structural distortion and distortion of an enhanced image and a real image, and a specific formula is as follows:
L Str =L SSIM +L MS-SSIM (5)
wherein L is SSIM To enhance the structural similarity of the image and the real image, L MS-SSIM The degree of similarity of a multi-level structure;
The pre-trained VGG content loss minimizes the absolute difference between the enhanced image and the real image as represented inside a pre-trained VGG-19 network:
L_VGG/i,j = (1 / (W_i,j · H_i,j · C_i,j)) Σ_{x=1}^{W_i,j} Σ_{y=1}^{H_i,j} Σ_{z=1}^{C_i,j} |φ_i,j(E)_{x,y,z} − φ_i,j(G)_{x,y,z}|    (6)
where E and G are the enhanced image and the real image respectively; W_i,j, H_i,j and C_i,j are the dimensions of the pre-trained VGG feature map; φ_i,j denotes the i-th feature map of the j-th convolutional layer of the VGG-19 network; and x, y, z are the width, height and channel indices of the feature map.
The region loss separately weights the dark area of the image, approximated as the darkest 40% of pixel values, giving the loss function:
L_Region = w_L · (1/(m_L · n_L)) Σ_{i=1}^{m_L} Σ_{j=1}^{n_L} |E_L(i,j) − G_L(i,j)| + w_H · (1/(m_H · n_H)) Σ_{i=1}^{m_H} Σ_{j=1}^{n_H} |E_H(i,j) − G_H(i,j)|    (7)
where E_L and G_L are the low-light areas of the enhanced image and the real image respectively, E_H and G_H the non-low-light areas, and w_L and w_H are weights set to 4 and 1 respectively; m_L and n_L are the width and height of G_L, and m_H and n_H the width and height of G_H.
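The region loss can be sketched as follows, with the darkest 40% of the real image's pixels defining the low-light region; selecting that region via a quantile threshold is an illustrative assumption:

```python
import numpy as np

def region_loss(enhanced, real, dark_frac=0.4, w_low=4.0, w_high=1.0):
    """Region loss sketch: the darkest `dark_frac` of the real image's pixels
    form the low-light region, weighted w_low = 4 against w_high = 1 for the
    rest, using the mean absolute error within each region."""
    thresh = np.quantile(real, dark_frac)
    low = real <= thresh
    l_low = np.abs(enhanced[low] - real[low]).mean() if low.any() else 0.0
    l_high = np.abs(enhanced[~low] - real[~low]).mean() if (~low).any() else 0.0
    return w_low * l_low + w_high * l_high
```

The 4:1 weighting makes errors in dark regions count four times as much, pushing the network to reconstruct exactly the areas that weak-light capture degrades most.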
A low-light illumination data set is synthesized on the basis of the PASCAL VOC data set: gamma correction and Poisson noise with a peak value of 200 are added to produce the low-light input images, while the original images serve as the real (ground-truth) images. The enhancement results for a low-light water seepage image are shown in fig. 6 (a) and fig. 6 (b).
Based on the same inventive concept, the invention provides a MaskRCNN water seepage detection system based on weak light compensation, which comprises:
a fusion sample data enhancement module: the system is used for enhancing and expanding a sample data set by adopting fusion sample data according to a shot water accumulation image and a water accumulation seepage image collected on the network;
a data labeling module: the Lableme is used for carrying out data annotation on the amplified data set to generate a marking file of the ponding area;
an enhancement operation module: the system is used for performing enhancement operation on the marked data set, performing turning, scaling and color gamut changing operation on the picture, and restoring the picture to the original pixel size after the operation is finished;
MaskRCNN model training module: training a MaskRCNN model by using the enhanced data set;
the weak light enhanced water seepage area image acquisition module: importing an image to be detected into an MBLLEN model, obtaining feature maps of all levels through a feature extraction module FEM layer, obtaining a picture of each layer of feature map after weak light enhancement through an enhancement module EM layer, inputting the enhanced feature maps into a fusion module FM layer, and obtaining a final weak light enhanced water seepage area image;
infiltration regional position acquisition module: and (3) importing the finally output weak light enhanced water seepage image into a MaskRCNN model, obtaining a characteristic diagram of the water seepage image through convolution calculation, obtaining a candidate region through RPN, and obtaining the final water seepage region position through an ROI (region of interest) layer.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to them; any other change, modification, substitution, combination or simplification that does not depart from the spirit and principle of the present invention should be construed as an equivalent replacement and is included within the scope of protection of the present invention.
Claims (9)
1. The MaskRCNN water seepage detection method based on weak light compensation, characterized by comprising the following steps:
S1, capturing water accumulation images, collecting water accumulation and seepage images from the network, and enhancing and expanding the sample data set by fusing sample data;
S2, annotating the expanded data set with Labelme to generate annotation files for the water accumulation areas;
S3, performing enhancement operations on the annotated data set: flipping, scaling and color-gamut adjustment are applied to each picture, which is restored to its original pixel size after the operations are completed;
S4, training a MaskRCNN model with the enhanced data set;
S5, importing the image to be detected into an MBLLEN model, obtaining feature maps at all levels through the feature extraction module (FEM) layers, obtaining a weak-light-enhanced picture for each feature map through the enhancement module (EM) layers, and inputting the enhanced feature maps into the fusion module (FM) to obtain the final weak-light-enhanced water seepage area image;
and S6, importing the weak-light-enhanced water seepage image into the MaskRCNN model, obtaining a feature map of the water seepage image by convolution, obtaining candidate regions through the RPN (Region Proposal Network), and obtaining the final water seepage region position through the RoI Align layer.
2. The MaskRCNN water seepage detection method based on weak light compensation according to claim 1, characterized in that the specific process of fused-sample data enhancement in step S1 is as follows:
randomly selecting two images from the training set, enhancing them with data enhancement methods including flipping, noise injection and cropping, and fusing the two water accumulation images with a random weight to increase sample diversity; the formula for fusing the two images is:
Image(R,G,B) = η × Image1(R,G,B) + (1 − η) × Image2(R,G,B), η = rand(0.3, 0.7)   (1)
where Image(R,G,B) is the fused image, Image1(R,G,B) and Image2(R,G,B) are the two original images, η = rand(0.3, 0.7) is a random fusion weight drawn from [0.3, 0.7], and R, G, B are the three channels of the image.
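Formula (1) above amounts to a mixup-style blend of two images. A minimal sketch, assuming 8-bit RGB arrays; the function name `fuse_images` is illustrative, not from the patent:

```python
import numpy as np

# Sketch of formula (1): fuse two images with a random weight eta
# drawn from [0.3, 0.7].
def fuse_images(img1: np.ndarray, img2: np.ndarray, rng=None) -> np.ndarray:
    rng = np.random.default_rng() if rng is None else rng
    eta = rng.uniform(0.3, 0.7)  # eta = rand(0.3, 0.7)
    fused = eta * img1.astype(np.float64) + (1.0 - eta) * img2.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)

img1 = np.full((4, 4, 3), 200, dtype=np.uint8)  # stand-ins for two water images
img2 = np.full((4, 4, 3), 100, dtype=np.uint8)
fused = fuse_images(img1, img2)
```

Because η stays in [0.3, 0.7], neither source image ever dominates the blend, which is what keeps the fused samples plausible.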
3. The MaskRCNN water seepage detection method based on weak light compensation according to claim 1, characterized in that the specific process of data labeling in step S2 is as follows:
marking the water seepage area with multiple line segments and points; annotating the polygonal outline of the water seepage area with the Labelme annotation tool, setting the label name of the seepage outline, and generating for each annotated sample a corresponding JSON file that stores the outline and image information of the target area.
4. The MaskRCNN water seepage detection method based on weak light compensation according to claim 1, characterized in that the MaskRCNN model in step S4 is composed of a ResNet feature extraction backbone and an RPN module; the ResNet extracts features of the image to be detected through a multi-layer convolution structure, and the RPN generates multiple ROI regions; MaskRCNN replaces RoIPooling with a RoIAlign layer, which uses bilinear interpolation to map the ROI feature regions generated by the RPN to a uniform 7 × 7 size; finally, the ROI regions generated by the RPN layer are classified and their positioning boxes regressed, and a fully convolutional network (FCN) generates the Mask corresponding to the water seepage area.
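The bilinear-interpolation mapping RoIAlign uses to pool an ROI to a uniform 7 × 7 size can be sketched as below. This is a simplified single-channel illustration with one sample point per bin centre (sampling_ratio = 1); the function names are made up for the example, and production code would typically call e.g. torchvision's `roi_align` instead.

```python
import numpy as np

# Bilinearly interpolate a 2-D feature map at continuous coords (y, x).
def bilinear(feat: np.ndarray, y: float, x: float) -> float:
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, feat.shape[0] - 1), min(x0 + 1, feat.shape[1] - 1)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * feat[y0, x0] + (1 - dy) * dx * feat[y0, x1]
            + dy * (1 - dx) * feat[y1, x0] + dy * dx * feat[y1, x1])

# Pool an ROI (continuous feature-map coords) onto a fixed 7 x 7 grid,
# one bilinear sample at each bin centre — no coordinate quantization,
# which is the key difference from RoIPooling.
def roi_align_7x7(feat, top, left, bottom, right):
    out = np.empty((7, 7))
    bh, bw = (bottom - top) / 7.0, (right - left) / 7.0
    for i in range(7):
        for j in range(7):
            out[i, j] = bilinear(feat, top + (i + 0.5) * bh, left + (j + 0.5) * bw)
    return out

feat = np.arange(100, dtype=np.float64).reshape(10, 10)
pooled = roi_align_7x7(feat, 1.2, 2.3, 8.7, 9.1)
```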
5. The MaskRCNN water seepage detection method based on weak light compensation according to claim 4, characterized in that the loss function Loss of MaskRCNN is defined as:
Loss = L_cls + L_box + L_mask   (2)
where L_cls is the classification error, L_box is the positioning-box error, and L_mask is the error produced by the Mask;
the classification error L_cls is constructed by introducing the log-likelihood loss, and its calculation formula is:
L_cls = −(1/N) Σ_{i=1}^{N} Σ_{j=1}^{M} y_ij · log(p_ij)   (3)
where X and Y are the predicted classification and the true classification respectively, N is the number of input samples, M is the number of possible classes, p_ij is the model's predicted probability that sample x_i belongs to class j, and y_ij indicates whether the true class of sample x_i is class j;
the positioning-box error L_box uses the L1 loss; the pixels in the ROI are passed through a sigmoid function and the relative entropy is computed, giving the average relative-entropy error L_mask.
6. The MaskRCNN water seepage detection method based on weak light compensation according to claim 1, characterized in that training the MaskRCNN model in step S4 comprises introducing weights pre-trained on the COCO dataset for fine-tuning.
7. The MaskRCNN water seepage detection method based on weak light compensation according to claim 1, characterized in that the specific implementation of the MBLLEN model in step S5 is as follows:
S51, the MBLLEN model is divided into a feature extraction module (FEM), an enhancement module (EM) and a fusion module (FM);
S52, the feature extraction module FEM consists of a single-path 10-layer network structure, each layer using 32 convolution kernels of size 3 × 3 with stride 1 and a ReLU activation function; the output of each layer serves simultaneously as the input of the next FEM convolution layer and of the corresponding EM convolution layer;
S53, the enhancement module EM comprises 10 identically structured sub-networks, each consisting of 1 convolution layer, 3 convolution layers and 3 deconvolution layers;
S54, the fusion module FM fuses all images output by the EM sub-networks, obtaining the final enhancement result through convolution with 3-channel 1 × 1 convolution kernels.
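The FM step in S54 (fusing the ten EM outputs with 3-channel 1 × 1 convolution kernels) reduces, at each pixel, to a linear map over the concatenated channels. A sketch of that view, with random placeholder weights standing in for the learned kernel:

```python
import numpy as np

# Per-pixel view of the FM fusion: concatenate the ten EM outputs
# (each H x W x 3) along channels, then apply a 1 x 1 convolution,
# which is simply a 30 -> 3 linear map at every pixel.
rng = np.random.default_rng(0)
em_outputs = [rng.random((8, 8, 3)) for _ in range(10)]  # ten EM sub-net outputs
stacked = np.concatenate(em_outputs, axis=-1)            # (8, 8, 30)
kernel = rng.random((30, 3))                             # placeholder 1x1 conv weights
fused = stacked @ kernel                                 # (8, 8, 3) final enhanced image
```

This is why a 1 × 1 convolution suffices for fusion: it mixes information across the ten enhancement branches without touching spatial structure.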
8. The MaskRCNN water seepage detection method based on weak light compensation according to claim 7, characterized in that the training process of the MBLLEN model is as follows:
defining a structure loss, a pre-trained VGG content loss and a region loss;
the overall loss function is:
Loss = L_Str + L_VGG/i,j + L_Region   (4)
the structure loss is expressed as:
L_Str = L_SSIM + L_MS-SSIM   (5)
where L_SSIM is the structural similarity between the enhanced image and the real image, and L_MS-SSIM is the multi-scale structural similarity;
the pre-trained VGG content loss is calculated as:
L_VGG/i,j = (1 / (W_i,j · H_i,j · C_i,j)) Σ_{x=1}^{W_i,j} Σ_{y=1}^{H_i,j} Σ_{z=1}^{C_i,j} (φ_i,j(E)_{x,y,z} − φ_i,j(G)_{x,y,z})²   (6)
where E and G are the enhanced image and the real image respectively; W_i,j, H_i,j and C_i,j are the dimensions of the pre-trained VGG feature map; φ_i,j denotes the i-th feature map of the j-th convolution layer of the VGG-19 network; and x, y and z are the width, height and channel indices of the feature map;
for the region loss, the low-light region of the image is taken as the darkest 40% of its pixel values, giving the loss function:
L_Region = w_L · (1/(m_L · n_L)) Σ |E_L − G_L| + w_H · (1/(m_H · n_H)) Σ |E_H − G_H|   (7)
where E_L and G_L are the low-light regions of the enhanced image and the real image respectively, E_H and G_H are the non-low-light regions of the enhanced image and the real image respectively, w_L and w_H are 4 and 1 respectively; m_L and n_L are the width and height of G_L; and m_H and n_H are the width and height of G_H.
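The region loss in claim 8 can be sketched as follows, assuming the darkest 40% of pixels are selected by a percentile threshold on the real image and using the claim's weights w_L = 4 and w_H = 1; the function name is illustrative:

```python
import numpy as np

# Region loss sketch: the darkest 40% of the real image's pixels form the
# low-light region (weight w_L = 4), the rest the non-low-light region
# (weight w_H = 1); each region contributes its mean absolute error.
def region_loss(enhanced, target, w_low=4.0, w_high=1.0):
    thresh = np.percentile(target, 40)   # darkest 40% of pixel values
    low = target <= thresh
    l_low = np.abs(enhanced[low] - target[low]).mean()
    l_high = np.abs(enhanced[~low] - target[~low]).mean()
    return w_low * l_low + w_high * l_high

target = np.linspace(0.0, 1.0, 100).reshape(10, 10)
enhanced = target + 0.1   # uniform error of 0.1 everywhere
loss = region_loss(enhanced, target)  # 4*0.1 + 1*0.1 = 0.5
```

Weighting the dark region 4× forces the enhancer to spend most of its capacity where weak-light degradation is worst.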
9. A MaskRCNN water seepage detection system based on weak light compensation, characterized by comprising:
a fused-sample data enhancement module: used for enhancing and expanding the sample data set through fused-sample data enhancement, based on shot water accumulation images and the water accumulation and seepage images collected from the Internet;
a data labeling module: used for annotating the expanded data set with Labelme to generate annotation files of the water accumulation areas;
an enhancement operation module: used for performing enhancement operations on the annotated data set, applying flipping, scaling and color-gamut changes to the images, and restoring each image to its original pixel size after the operations are completed;
a MaskRCNN model training module: used for training the MaskRCNN model with the enhanced data set;
a weak-light-enhanced water seepage area image acquisition module: used for importing the image to be detected into the MBLLEN model, obtaining feature maps at all levels through the feature extraction module (FEM) layers, enhancing each level's feature map under weak light through the enhancement module (EM) layers, and inputting the enhanced feature maps into the fusion module (FM) layer to obtain the final weak-light-enhanced water seepage area image;
a water seepage region position acquisition module: used for importing the final weak-light-enhanced water seepage image into the MaskRCNN model, obtaining a feature map through convolution over the seepage image, generating candidate regions through the RPN, and obtaining the final water seepage region position through the RoIAlign layer.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210464625.8A CN115240020A (en) | 2022-04-29 | 2022-04-29 | MaskRCNN water seepage detection method and system based on weak light compensation |
PCT/CN2022/134451 WO2023207064A1 (en) | 2022-04-29 | 2022-11-25 | Maskrcnn water seepage detection method and system based on weak light compensation |
LU505937A LU505937B1 (en) | 2022-04-29 | 2022-11-25 | MASKRCNN WATER SEEPAGE DETECTION METHOD AND SYSTEM BASED ON LOW-LIGHT COMPENSATION |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115240020A true CN115240020A (en) | 2022-10-25 |
Country Status (3)
Country | Link |
---|---|
CN (1) | CN115240020A (en) |
LU (1) | LU505937B1 (en) |
WO (1) | WO2023207064A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023207064A1 (en) * | 2022-04-29 | 2023-11-02 | 清远蓄能发电有限公司 | Maskrcnn water seepage detection method and system based on weak light compensation |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117315446B (en) * | 2023-11-29 | 2024-02-09 | 江西省水利科学院(江西省大坝安全管理中心、江西省水资源管理中心) | Reservoir spillway abnormity intelligent identification method oriented to complex environment |
CN118015525B (en) * | 2024-04-07 | 2024-06-28 | 深圳市锐明像素科技有限公司 | Method, device, terminal and storage medium for identifying road ponding in image |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110675415A (en) * | 2019-12-05 | 2020-01-10 | 北京同方软件有限公司 | Road ponding area detection method based on deep learning enhanced example segmentation |
CN113469177A (en) * | 2021-06-30 | 2021-10-01 | 河海大学 | Drainage pipeline defect detection method and system based on deep learning |
CN114298145A (en) * | 2021-11-22 | 2022-04-08 | 三峡大学 | Permeable concrete pore intelligent identification and segmentation method based on deep learning |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110770752A (en) * | 2018-09-04 | 2020-02-07 | 安徽中科智能感知产业技术研究院有限责任公司 | Automatic pest counting method combining multi-scale feature fusion network with positioning model |
CN115240020A (en) * | 2022-04-29 | 2022-10-25 | 清远蓄能发电有限公司 | MaskRCNN water seepage detection method and system based on weak light compensation |
- 2022-04-29: CN application CN202210464625.8A filed (status: Pending)
- 2022-11-25: WO application PCT/CN2022/134451 filed (status: unknown)
- 2022-11-25: LU application LU505937 filed (status: active, IP Right Grant)
Non-Patent Citations (3)
Title |
---|
CLEMINTS: "Low-light enhancement paper explained: MBLLEN", 《HTTPS://BLOG.CSDN.NET/M0_37833142/ARTICLE/DETAILS/103888633》, pages 1 - 4 *
SHUANG KAI: "Computer Vision", Beijing University of Posts and Telecommunications Press, pages 130 - 133 *
LIANQIONGYIN: "A summary of the loss functions of Fast(er) R-CNN", 《HTTPS://BLOG.CSDN.NET/QQ_17846375/ARTICLE/DETAILS/100687504》, pages 1 - 3 *
Also Published As
Publication number | Publication date |
---|---|
WO2023207064A1 (en) | 2023-11-02 |
LU505937B1 (en) | 2024-04-29 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20221025 |