LU505937B1 - MASKRCNN WATER SEEPAGE DETECTION METHOD AND SYSTEM BASED ON LOW-LIGHT COMPENSATION - Google Patents

MASKRCNN WATER SEEPAGE DETECTION METHOD AND SYSTEM BASED ON LOW-LIGHT COMPENSATION

Info

Publication number
LU505937B1
LU505937B1
Authority
LU
Luxembourg
Prior art keywords
image
maskrcnn
region
enhanced
water
Prior art date
Application number
LU505937A
Other languages
German (de)
Inventor
Jin Wang
Shaohua Zhao
Jiesheng Lin
Xiaomeng Xu
Jifan OuYang
Wanjun Zhou
Xin Liu
Xichang Cai
Zuliang Huang
Zheng Weng
Yuquan Zhou
Wenyu Feng
Original Assignee
Csg Power Generation Guangdong Energy Storage Tech Co Ltd
Csges Operation Man Branch Company
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Csg Power Generation Guangdong Energy Storage Tech Co Ltd, Csges Operation Man Branch Company
Application granted
Publication of LU505937B1

Classifications

    • G06T 7/11 Region-based segmentation
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/75 Unsharp masking
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20221 Image fusion; image merging
    • G06V 10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide the detection or recognition
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/26 Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/761 Proximity, similarity or dissimilarity measures
    • G06V 10/764 Recognition or understanding using classification, e.g. of video objects
    • G06V 10/7715 Feature extraction, e.g. by transforming the feature space; mappings, e.g. subspace methods
    • G06V 10/774 Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/803 Fusion of input or preprocessed data
    • G06V 10/82 Recognition or understanding using neural networks
    • G06V 2201/07 Target detection
    • Y02E 10/20 Hydro energy

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Disclosed is a MaskRCNN water seepage detection method and system based on low-light compensation, which includes: S1, extending a sample dataset using fused sample data enhancement; S2, performing data labeling on the extended dataset using Labelme and generating a labeled file of a water-accumulated region; S3, performing an enhancement operation on the labeled dataset; S4, training a MaskRCNN model using the enhanced dataset; S5, importing the image to be detected into an MBLLEN model and obtaining a final low-light-enhanced image of the water seepage region; and S6, importing the final output low-light-enhanced water seepage image into the MaskRCNN model, obtaining a feature map by convolution calculation on the water seepage image, obtaining a region proposal through the RPN, and obtaining the final position of the water seepage region through an RoI Align layer. Low-light enhancement is performed by importing the inspection image to be detected into an MBLLEN model, and the enhanced image is then imported into a MaskRCNN model for water-accumulated-region detection, thus not only performing effective object detection but also accurately segmenting the boundaries of the object region.

Description

Ref: PCT/CN2022/134451

MASKRCNN WATER SEEPAGE DETECTION METHOD AND SYSTEM

BASED ON LOW-LIGHT COMPENSATION

Field of the Invention

The present disclosure relates to the technical field of identification algorithm optimization of inspection robots in complex indoor environments, and in particular, to a MaskRCNN water seepage detection method and system based on low-light compensation.

Background of the Invention

During operation, hydraulic turbine units often suffer frequent leakage at the spindle seal, which seriously affects the stable operation of the units. Accidents such as short circuits are more likely in hydraulic turbine layers where cables are distributed, and when the leakage is large there is a serious hidden danger of water accumulation in those layers. At the same time, equipment failures caused by dripping and leaking in the hydraulic turbine layers should be overhauled in a timely manner to maintain stable production. Regular and comprehensive dripping-and-leaking inspection of the equipment in the hydraulic turbine layers, analysis of the overall water seepage situation, and timely repair and maintenance according to the form and degree of the seepage therefore effectively improve the safety factor and reduce the economic loss and potential safety hazards caused by water seepage and leakage. However, due to poor light conditions in the hydraulic turbine layers, the boundaries of the water seepage and leakage regions are difficult to distinguish in the images taken by the inspection robots, even with supplementary lighting.

Summary of the Invention

In order to solve the technical problems in the prior art, the present disclosure provides a MaskRCNN water seepage detection method and system based on low-light compensation. Low-light enhancement is performed by importing an inspection image to be detected into an MBLLEN model, and the enhanced image is then imported into a MaskRCNN model for water-accumulated-region detection, thus not only performing effective object detection but also accurately segmenting the boundaries of the object region.

The present disclosure is achieved by the following technical solutions: a MaskRCNN water seepage detection method based on low-light compensation, including:

S1, capturing an accumulated water image, collecting a seepage image for accumulated water on the network, and extending a sample dataset using fused sample data enhancement;

S2, performing data labeling on the extended dataset using Labelme and generating a labeled file of a water-accumulated region;

S3, performing an enhancement operation on the labeled dataset, performing operations of flipping, scaling, and gamut changing on the image, and restoring the image to the pixel size of the original image upon completion of the operations;

S4, training a MaskRCNN model using the enhanced dataset;

S5, importing the image to be detected into an MBLLEN model, obtaining a feature map of each layer through the layers of a feature extraction module (FEM), obtaining a low-light-enhanced image of each feature map through the layers of an enhancement module (EM), inputting the enhanced feature maps into the layers of a fusion module (FM), and obtaining the final low-light-enhanced water seepage region image; and

S6, importing the final output low-light-enhanced water seepage image into the MaskRCNN model, obtaining a feature map by convolution calculation on the water seepage image, obtaining a region proposal through the RPN, and obtaining the final water seepage region position through an RoI Align layer.
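Steps S5 and S6 form a two-stage pipeline: low-light enhancement first, then detection. The sketch below only illustrates that order of operations; `mbllen_enhance` and `maskrcnn_detect` are hypothetical placeholders standing in for the trained MBLLEN and MaskRCNN models, which are not shown here.

```python
import numpy as np

def mbllen_enhance(img):
    # Placeholder for the MBLLEN low-light enhancement of step S5:
    # a fixed brightness gain stands in for the FEM/EM/FM network.
    return np.clip(img.astype(np.float32) * 1.8, 0, 255).astype(np.uint8)

def maskrcnn_detect(img):
    # Placeholder for MaskRCNN inference of step S6: a real model would
    # return per-instance boxes and segmentation masks for seepage regions.
    h, w = img.shape[:2]
    return [{"box": (0, 0, w, h), "mask": np.zeros((h, w), dtype=bool)}]

frame = np.random.default_rng(1).integers(0, 60, (32, 32, 3), dtype=np.uint8)
detections = maskrcnn_detect(mbllen_enhance(frame))  # S5, then S6
```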

The system of the present disclosure is achieved by the following technical solutions: a MaskRCNN water seepage detection system based on low-light compensation, including: a fused sample data enhancement module configured to extend a sample dataset, using fused sample data enhancement, with a captured accumulated water image and a water seepage image for accumulated water collected on the network; a data labeling module configured to perform data labeling on the extended dataset using Labelme and generate a labeled file of a water-accumulated region; an enhancement operation module configured to perform an enhancement operation on the labeled dataset, perform operations of flipping, scaling, and gamut changing on the image, and restore the image to the pixel size of the original image upon completion of the operations; a MaskRCNN model training module configured to train a MaskRCNN model using the enhanced dataset; a low-light-enhanced water seepage region image acquisition module configured to import the image to be detected into an MBLLEN model, obtain a feature map of each layer through the layers of a feature extraction module (FEM), obtain a low-light-enhanced image of each feature map through the layers of an enhancement module (EM), input the enhanced feature maps into the layers of a fusion module (FM), and obtain the final low-light-enhanced water seepage region image; and a water seepage region position acquisition module configured to import the final output low-light-enhanced water seepage image into the MaskRCNN model, obtain a feature map by convolution calculation on the water seepage image, obtain a region proposal through the RPN, and obtain the final water seepage region position through an RoI Align layer.

The present disclosure has the following advantages and beneficial effects compared to the prior art.

Low-light enhancement is performed by importing an inspection image to be detected into an MBLLEN model, and the enhanced image is then imported into a MaskRCNN model for water-accumulated-region detection, thus not only performing effective object detection but also accurately segmenting the boundaries of the object region.

Brief Description of the Drawings

Fig. 1 is a flowchart of a method of the present disclosure;

Fig. 2 (a) is a schematic diagram for capturing an accumulated water image according to the present disclosure;

Fig. 2 (b) is a schematic diagram of a water seepage image for accumulated water collected on the network;

Fig. 2 (c) is a schematic diagram of fused sample data enhancement according to the present disclosure;

Fig. 3 is a schematic diagram of an image of a water-accumulated region labeled by the Labelme software;

Fig. 4 is a structural schematic diagram of a MaskRCNN model;

Fig. 5 is a structural schematic diagram of an MBLLEN model;

Fig. 6 (a) is a low-light image; and

Fig. 6 (b) is an enhanced image.

Detailed Description of the Embodiments

Hereinafter, the present disclosure will be explained in detail with reference to the embodiments and the accompanying drawings, but the embodiments of the present disclosure are not limited thereto.

Embodiment

As shown in Fig. 1, the MaskRCNN water seepage detection method based on low-light compensation in the present embodiment includes the following steps.

At S1, an accumulated water image is captured, a similar water seepage image for accumulated water is collected on the network, and a sample dataset is expanded using fused sample data enhancement.

At S2, data labeling is performed on the extended dataset using Labelme and a labeled file of a water-accumulated region is generated.

At S3, an enhancement operation is performed on the labeled dataset, operations of flipping, scaling, gamut changing, and the like are performed on the image, and the image is restored to the pixel size of the original image upon completion of the operations.
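As a rough illustration of step S3, the sketch below flips, rescales, and colour-jitters an image and then restores the original pixel size. The scale and jitter ranges are assumptions, not values from the patent, and the nearest-neighbour resize is a stand-in for a proper library resize.

```python
import numpy as np

def resize_nn(img, h, w):
    # Nearest-neighbour resize; a stand-in for a proper library resize.
    ys = (np.arange(h) * img.shape[0] / h).astype(int)
    xs = (np.arange(w) * img.shape[1] / w).astype(int)
    return img[ys][:, xs]

def augment(img, rng=None):
    rng = rng or np.random.default_rng()
    out = img[:, ::-1]                            # horizontal flip
    s = rng.uniform(0.8, 1.2)                     # random scale (assumed range)
    out = resize_nn(out, max(1, int(out.shape[0] * s)),
                    max(1, int(out.shape[1] * s)))
    gains = rng.uniform(0.9, 1.1, size=3)         # per-channel gamut jitter
    out = np.clip(out * gains, 0, 255).astype(np.uint8)
    return resize_nn(out, img.shape[0], img.shape[1])  # restore original size

img = np.random.default_rng(0).integers(0, 256, (64, 48, 3), dtype=np.uint8)
aug = augment(img)
```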

At S4, a MaskRCNN model is trained using the enhanced dataset.

At S5, the image to be detected is imported into an MBLLEN model, a feature map of each layer is obtained through the layers of a feature extraction module (FEM), a low-light-enhanced image of each feature map is obtained through the layers of an enhancement module (EM), the enhanced feature maps are input into the layers of a fusion module (FM), and the final low-light-enhanced water seepage region image is obtained.

At S6, the final output low-light-enhanced water seepage image is imported into the MaskRCNN model, a feature map is obtained by convolution calculation on the water seepage image, a region proposal is obtained through the RPN, and the final water seepage region position is obtained through an RoI Align layer.

As shown in Fig. 2 (a), Fig. 2 (b), and Fig. 2 (c), in this embodiment, a specific process of the fused sample data enhancement in S1 is as follows.

Two images are randomly selected from a training set and are enhanced using a data enhancement method including flipping, adding noise, cropping, and the like, and the two accumulated water images are fused according to random weights to increase sample diversity; specifically, a formula for fusing the two images is as follows:

Image(R,G,B) = η × Image1(R,G,B) + (1 − η) × Image2(R,G,B), η = rand(0.3, 0.7)   (1)

where Image(R,G,B) is the fused image, Image1(R,G,B) and Image2(R,G,B) are the two original images, η = rand(0.3, 0.7) is a random number with a fusion weight of 0.3 to 0.7, and R, G, B are the three channels of the images.
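Formula (1) can be sketched directly in numpy; `fuse_samples` is a name chosen for illustration, and every fused pixel necessarily lies between the two source values because the weight stays in [0.3, 0.7].

```python
import numpy as np

def fuse_samples(img1, img2, rng=None):
    # Fuse two RGB images with a random weight eta in [0.3, 0.7] (formula (1)).
    rng = rng or np.random.default_rng()
    eta = rng.uniform(0.3, 0.7)
    fused = eta * img1.astype(np.float32) + (1.0 - eta) * img2.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)

# toy 2x2 images: every fused pixel must lie between the two source values
a = np.full((2, 2, 3), 200, dtype=np.uint8)
b = np.full((2, 2, 3), 100, dtype=np.uint8)
f = fuse_samples(a, b)
```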

As shown in Fig. 3, in this embodiment, a specific process of the data labeling in S2 is as follows. The water seepage region is labeled by a multi-line and multi-point method; a polygonal contour of the water seepage region is labeled using the labeling tool Labelme, a label name is set for the water seepage contour, a json file corresponding to the labeled sample is generated, and the contour and image information of the object region in the sample is stored in the json file.
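A minimal sketch of consuming such a json record follows: it reads a Labelme-style polygon and rasterises it into a binary mask. The field names follow the usual Labelme layout, and the rasteriser is a simple even-odd point-in-polygon test, not Labelme's own utility.

```python
import json
import numpy as np

def polygon_mask(points, h, w):
    # Rasterise a polygon (list of [x, y] vertices) into a boolean mask
    # using the even-odd (ray casting) rule, tested at pixel centres.
    pts = np.asarray(points, dtype=float)
    xs, ys = np.meshgrid(np.arange(w) + 0.5, np.arange(h) + 0.5)
    inside = np.zeros((h, w), dtype=bool)
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        cross = ((y1 > ys) != (y2 > ys)) & \
                (xs < (x2 - x1) * (ys - y1) / (y2 - y1 + 1e-12) + x1)
        inside ^= cross
    return inside

# minimal Labelme-style record (a 7x7-pixel square region in a 12x12 image)
record = json.loads('{"shapes":[{"label":"seepage",'
                    '"points":[[2,2],[9,2],[9,9],[2,9]]}],'
                    '"imageHeight":12,"imageWidth":12}')
shape = record["shapes"][0]
mask = polygon_mask(shape["points"], record["imageHeight"], record["imageWidth"])
```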

As shown in Fig. 4, in this embodiment, the MaskRCNN is an example segmentation framework for performing effective object detection and accurately segmenting boundaries of the object region. The MaskRCNN model mainly includes a feature extraction framework ResNet and an RPN module; the ResNet extracts features of the image to be detected using a multi-layer convolution structure, and the 6As shown in Fig. 4, in this embodiment, the MaskRCNN is an example segmentation framework for performing effective object detection and accurately segmenting boundaries of the object region. The MaskRCNN model mainly includes a feature extraction framework ResNet and an RPN module; the ResNet extracts features of the image to be detected using a multi-layer convolution structure, and the 6

RPN is configured to generate a plurality of ROI regions; the MaskRCNN replaces

RoI Pooling with an RoI Align layer, and maps the plurality of ROI feature regions generated by the RPN to a uniform size of 7*7 using bilinear interpolation; finally, the plurality of ROI regions generated by the RPN layer are classified and a regression operation of a positioning box is performed on them, and a Mask corresponding to the water seepage region is generated using a fully convolutional network FCN.
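The RoI Align mapping can be sketched as follows. This is a simplified single-channel sketch that takes one bilinear sample at the center of each output bin; the full Mask R-CNN operator averages several samples per bin, so this is an illustration of the interpolation, not the exact operator:

```python
import numpy as np

def bilinear(feat, y, x):
    """Bilinearly interpolate a 2-D feature map at a continuous point (y, x)."""
    h, w = feat.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    y0, x0 = max(y0, 0), max(x0, 0)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * feat[y0, x0] + (1 - dy) * dx * feat[y0, x1]
            + dy * (1 - dx) * feat[y1, x0] + dy * dx * feat[y1, x1])

def roi_align(feat, box, out_size=7):
    """Map one RoI (y1, x1, y2, x2 in feature-map coordinates, floats allowed)
    to a fixed out_size x out_size grid without quantizing the box edges."""
    y1, x1, y2, x2 = box
    bh, bw = (y2 - y1) / out_size, (x2 - x1) / out_size
    out = np.empty((out_size, out_size), np.float32)
    for i in range(out_size):
        for j in range(out_size):
            out[i, j] = bilinear(feat, y1 + (i + 0.5) * bh, x1 + (j + 0.5) * bw)
    return out
```

Because the bin centers are sampled at continuous coordinates, the 7*7 output avoids the quantization error that RoI Pooling introduces.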

In this embodiment, a loss function Loss of the MaskRCNN is defined as:

Loss = L_cls + L_box + L_mask    (2)

where L_cls is a classification error, L_box is an error generated by the positioning box, and L_mask is an error caused by the Mask; the classification error L_cls is constructed by introducing a log-likelihood loss, and a calculation formula of L_cls is as follows:

L_cls = -log P(Y|X) = -(1/N) Σ_{i=1}^{N} Σ_{j=1}^{M} y_ij · log(p_ij)    (3)

where X and Y are a testing classification and an actual classification, respectively, N is the number of input samples, M is the number of possible classes, p_ij represents the probability, predicted and output by the model for sample x_i, that it belongs to class j, and y_ij indicates whether the true class of sample x_i is class j; in order to increase the robustness of the loss function, the error L_box caused by the positioning box adopts L1 loss; and the relative entropy of pixels in the ROI region is calculated using a sigmoid function and the average relative entropy error L_mask is obtained.
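The log-likelihood classification error of formula (3) can be computed as in the following NumPy sketch; the small epsilon guarding against log(0) is an implementation assumption:

```python
import numpy as np

def classification_loss(probs, labels):
    """L_cls = -(1/N) * sum_i sum_j y_ij * log(p_ij): the average negative
    log-likelihood over N samples, with y_ij the one-hot indicator that
    sample i belongs to class j and p_ij the predicted class probability."""
    n, m = probs.shape
    eps = 1e-12                      # guard against log(0)
    one_hot = np.eye(m)[labels]      # y_ij
    return -np.sum(one_hot * np.log(probs + eps)) / n
```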

In order to achieve better generalization performance of the MaskRCNN on a small labeled dataset, this embodiment introduces pre-trained weights (mask_rcnn_coco.h5) on a COCO dataset for fine tuning. The MBLLEN and MaskRCNN detection models of water-accumulated regions are used in combination: the inspection image to be detected is imported into the MBLLEN model for low-light enhancement, then the low-light enhanced image is imported into the MaskRCNN model for water-accumulated region detection, and the labeled water seepage region is output.

As shown in Fig. 5, in this embodiment, the MBLLEN model is a multi-branch low-light enhancement deep learning network model. Image features of different layers are extracted by convolution calculation, and feature maps of different layers are input into a plurality of sub-networks for enhancement. The MBLLEN model mainly includes a feature extraction module (FEM), an enhancement module (EM), and a fusion module (FM). The feature extraction module FEM includes 10 layers of unidirectional network structure, where each layer adopts 32 convolution kernels of a size of 3x3, has a stride of 1, and an activation function thereof is ReLU. The feature extraction module does not adopt a pooling layer. The output of each layer is simultaneously the input of a convolution layer of the next feature extraction module

FEM and the input of a corresponding convolution layer of the enhancement module

EM. Since the feature extraction module FEM includes 10 feature extraction layers, the enhancement module EM includes 10 sub-network structures with the same structure. The sub-network structures of the EM include a convolution layer, 3 convolution layers, and 3 deconvolution layers. The fusion module FM fuses all images output from the sub-networks of the EM and obtains the final enhancement result by a 3-channel convolution kernel of a size of 1x1.
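The connectivity of the three modules can be sketched as follows. This is a dataflow-only sketch: the stand-in functions replace the real 3x3 convolutions, the conv/deconv enhancement sub-networks, and the 1x1 fusion convolution, so only the FEM-to-EM-to-FM wiring described above is shown:

```python
import numpy as np

def mbllen_dataflow(image, num_layers=10):
    """Each FEM layer's output feeds BOTH the next FEM layer and its own
    EM sub-network; the FM fuses all EM outputs into one enhanced image."""
    def fem_layer(x):    # stand-in for Conv2D(32 kernels, 3x3, stride 1) + ReLU
        return np.maximum(x * 0.9, 0)
    def em_subnet(x):    # stand-in for the conv/deconv enhancement sub-network
        return x + 0.1
    def fm_fuse(outs):   # stand-in for concatenation + 1x1 3-channel convolution
        return np.mean(outs, axis=0)

    feat = image
    em_outputs = []
    for _ in range(num_layers):
        feat = fem_layer(feat)              # input to the next FEM layer...
        em_outputs.append(em_subnet(feat))  # ...and to this layer's EM sub-net
    return fm_fuse(em_outputs)
```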

In order to train the MBLLEN model for image low-light compensation, a structure loss (Str), a pre-training VGG content loss (VGG), and a region loss are defined respectively.

Specifically, the formula of the loss function is as follows:

Loss = L_Str + L_VGG + L_Region    (4)

where the structure loss is mainly configured to reduce the structural distortion between an enhanced image and a real image, and the specific formula is as follows:

L_Str = L_SSIM + L_MS-SSIM    (5)

where L_SSIM is the structural similarity of the enhanced image and the real image and L_MS-SSIM is a multi-scale structural similarity; the pre-training VGG content loss minimizes the absolute difference between the enhanced image and the real image as output by the pre-trained VGG-19 network, and the formula of the loss function is as follows:

L_VGG = 1/(W_ij · H_ij · C_ij) · Σ_{x=1}^{W_ij} Σ_{y=1}^{H_ij} Σ_{z=1}^{C_ij} |φ_ij(E)_{x,y,z} - φ_ij(G)_{x,y,z}|    (6)

where E and G are the enhanced image and the real image, respectively; W_ij, H_ij, and C_ij represent the dimensions of the feature map of the pre-trained VGG; φ_ij represents the j-th convolution layer and the i-th feature map of the VGG-19 network; and x, y, and z represent the width, height, and channel number of the feature map, respectively. The region loss approximates the dark region of the whole image by segmenting the darkest 40% of the pixel values, giving the following loss function:

L_Region = w_L · 1/(m_L · n_L) · Σ_{i=1}^{m_L} Σ_{j=1}^{n_L} |E_L(i,j) - G_L(i,j)| + w_H · 1/(m_H · n_H) · Σ_{i=1}^{m_H} Σ_{j=1}^{n_H} |E_H(i,j) - G_H(i,j)|    (7)

where E_L and G_L are the low-light regions of the enhanced image and the real image, respectively; E_H and G_H are the non-low-light regions of the enhanced image and the real image, respectively; w_L and w_H are 4 and 1, respectively; m_L is the width of the image G_L, n_L is the height of the image G_L, m_H is the width of the image G_H, and n_H is the height of the image G_H.
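The region loss of formula (7) can be sketched as follows (a NumPy sketch; selecting the low-light region by a per-image 40th-percentile brightness threshold is an implementation assumption consistent with "the darkest 40% of pixel values"):

```python
import numpy as np

def region_loss(enhanced, real, dark_frac=0.4, w_low=4.0, w_high=1.0):
    """Region loss: the darkest 40% of the real image's pixels define the
    low-light region; mean absolute errors over the low-light and remaining
    regions are weighted by w_L = 4 and w_H = 1, respectively."""
    thresh = np.quantile(real, dark_frac)  # 40th-percentile brightness
    low = real <= thresh                   # low-light region mask
    err = np.abs(enhanced - real)
    return w_low * err[low].mean() + w_high * err[~low].mean()
```

Weighting the dark region four times more heavily pushes the network to prioritize enhancement quality exactly where low-light degradation is worst.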

A low-light dataset is synthesized based on the PASCAL VOC dataset: Gamma correction and Poisson noise with a peak value of 200 are applied to produce the low-light input images, and the original images serve as the real images. The enhanced results for low-light water seepage images are shown in Fig. 6 (a) and Fig. 6 (b).
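The synthesis of low-light training inputs can be sketched as follows (a NumPy sketch; the Poisson peak of 200 follows the text, while the gamma exponent of 3.0 is an assumed darkening value not stated in the disclosure):

```python
import numpy as np

def synthesize_low_light(image, gamma=3.0, peak=200, rng=None):
    """Create a synthetic low-light input from a normal image: darken by
    gamma correction, then add Poisson (shot) noise with peak value 200."""
    rng = np.random.default_rng() if rng is None else rng
    norm = image.astype(np.float32) / 255.0
    dark = norm ** gamma                     # gamma correction (darkening)
    noisy = rng.poisson(dark * peak) / peak  # Poisson noise at the given peak
    return np.clip(noisy * 255.0, 0, 255).astype(np.uint8)
```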

Based on the same inventive concept, the present disclosure proposes a

MaskRCNN water seepage detection system based on low-light compensation, including: a fused sample data enhancement module configured to extend a sample dataset with a captured accumulated water image and a water seepage image for accumulated water collected on the network using fused sample data enhancement; a data labeling module configured to perform data labeling on the extended dataset using Lableme and generate a labeled file of a water-accumulated region; an enhancement operation module configured to perform an enhancement operation on the labeled dataset, perform operations of flipping, scaling, and gamut changing on the image, and restore the image to a pixel size of an original image upon completion of the operations; a MaskRCNN model training module configured to train a MaskRCNN model using the enhanced dataset; a low-light enhanced water seepage region image acquisition module configured to import the image to be detected into an MBLLEN model, obtain a feature map of each layer through layers of a feature extraction module FEM, obtain an image of each feature map enhanced by a low light through layers of an enhancement module EM, input the enhanced feature maps into layers of a fusion module FM, and obtain a final water seepage region image enhanced by the low light; and a water seepage region position acquisition module configured to import the final output water seepage image enhanced by the low light into the MaskRCNN model, obtain a feature map by convolution calculation on the water seepage image, obtain a region proposal through RPN, and obtain a final water seepage region position through an ROI Align layer.

While the above embodiments are preferred embodiments of the present disclosure, the present disclosure is not limited thereto. Other changes, modifications, substitutions, combinations, and simplifications are all equivalent and can be made without departing from the spirit and principles of the present disclosure.

Claims (9)

1. A MaskRCNN water seepage detection method based on low-light compensation, comprising:
S1, capturing an accumulated water image, collecting a water seepage image for accumulated water on the network, and extending a sample dataset using fused sample data enhancement;
S2, performing data labeling on the extended dataset using Lableme and generating a labeled file of a water-accumulated region;
S3, performing an enhancement operation on the labeled dataset, performing operations of flipping, scaling, and gamut changing on the image, and restoring the image to a pixel size of an original image upon completion of the operations;
S4, training a MaskRCNN model using the enhanced dataset;
S5, importing the image to be detected into an MBLLEN model, obtaining a feature map of each layer through layers of a feature extraction module FEM, obtaining an image of each feature map enhanced by a low light through layers of an enhancement module EM, inputting the enhanced feature maps into layers of a fusion module FM, and obtaining a final water seepage region image enhanced by the low light; and
S6, importing the final output water seepage image enhanced by the low light into the MaskRCNN model, obtaining a feature map by convolution calculation on the water seepage image, obtaining a region proposal through RPN, and obtaining a final water seepage region position through an ROI Align layer.

2. The MaskRCNN water seepage detection method based on low-light compensation according to claim 1, wherein the fused sample data enhancement in S1 specifically comprises: randomly selecting two images from a training set, enhancing them using a data enhancement method comprising flipping, adding noise, and cropping, and fusing the two accumulated water images according to random weights to increase sample diversity; a formula for fusing the two images is as follows:
Image(R,G,B) = λ · Image1(R,G,B) + (1 - λ) · Image2(R,G,B)    (1)
wherein Image(R,G,B) is the fused image, Image1(R,G,B) and Image2(R,G,B) are the original two images, λ = rand(0.3, 0.7) represents a random number used as the fusion weight, taking values between 0.3 and 0.7, and R, G, B are the three channels of the image.

3. The MaskRCNN water seepage detection method based on low-light compensation according to claim 1, wherein the data labeling in S2 specifically comprises: labeling the water seepage region by a multi-line and multi-point method; labeling a polygonal contour of the water seepage region using a labeling tool Lableme, setting a label name of the water seepage contour, generating a json file corresponding to the labeled sample, and storing contour and image information of the object region in the sample by the json file.

4. The MaskRCNN water seepage detection method based on low-light compensation according to claim 1, wherein in S4, the MaskRCNN model comprises a feature extraction framework ResNet and an RPN module; the ResNet extracts features of the image to be detected using a multi-layer convolution structure, and the RPN is configured to generate a plurality of ROI regions; the MaskRCNN replaces RoI Pooling with an RoI Align layer, and maps the plurality of ROI feature regions generated by the RPN to a uniform size of 7*7 using bilinear interpolation; finally, the plurality of ROI regions generated by the RPN layer are classified and a regression operation of a positioning box is performed on them, and a Mask corresponding to the water seepage region is generated using a fully convolutional network FCN.

5. The MaskRCNN water seepage detection method based on low-light compensation according to claim 4, wherein a loss function Loss of the MaskRCNN is defined as:
Loss = L_cls + L_box + L_mask    (2)
wherein L_cls is a classification error, L_box is an error generated by the positioning box, and L_mask is an error caused by the Mask; the classification error L_cls is constructed by introducing a log-likelihood loss, and a calculation formula of L_cls is as follows:
L_cls = -log P(Y|X) = -(1/N) Σ_{i=1}^{N} Σ_{j=1}^{M} y_ij · log(p_ij)    (3)
wherein X and Y are a testing classification and a real classification, respectively, N is the number of input samples, M is the number of possible classes, p_ij represents the probability, predicted and output by the model for sample x_i, that it belongs to class j, and y_ij indicates whether the true class of sample x_i is class j; the error L_box caused by the positioning box adopts L1 loss; the relative entropy of the pixels in the ROI region is calculated using a sigmoid function and the average relative entropy error L_mask is obtained.

6. The MaskRCNN water seepage detection method based on low-light compensation according to claim 1, wherein the training the MaskRCNN model in S4 comprises performing fine tuning by introducing pre-trained weights on a COCO dataset.

7. The MaskRCNN water seepage detection method based on low-light compensation according to claim 1, wherein implementing the MBLLEN model in S5 specifically comprises:
S51, dividing the MBLLEN model into a feature extraction module FEM, an enhancement module EM, and a fusion module FM;
S52, the feature extraction module FEM comprising 10 layers of unidirectional network structure, wherein each layer adopts 32 convolution kernels of a size of 3x3, has a stride of 1, and an activation function thereof is ReLU; the output of each layer being simultaneously the input of a convolution layer of the next feature extraction module FEM and the input of a corresponding convolution layer of the enhancement module EM;
S53, the enhancement module EM comprising 10 sub-network structures with the same structure, each comprising a convolution layer, 3 convolution layers, and 3 deconvolution layers; and
S54, the fusion module FM fusing all images output from the sub-networks of the enhancement module EM, and obtaining a final enhancement result by a 3-channel convolution kernel of a size of 1x1.

8. The MaskRCNN water seepage detection method based on low-light compensation according to claim 7, wherein the training the MBLLEN model comprises: defining a structure loss, a pre-training VGG content loss, and a region loss; the formula of the loss function is as follows:
Loss = L_Str + L_VGG + L_Region    (4)
wherein a specific formula of the structure loss is as follows:
L_Str = L_SSIM + L_MS-SSIM    (5)
wherein L_SSIM is a structural similarity between an enhanced image and a real image and L_MS-SSIM is a multi-scale structural similarity; a formula for the pre-training VGG content loss is as follows:
L_VGG = 1/(W_ij · H_ij · C_ij) · Σ_{x=1}^{W_ij} Σ_{y=1}^{H_ij} Σ_{z=1}^{C_ij} |φ_ij(E)_{x,y,z} - φ_ij(G)_{x,y,z}|    (6)
wherein E and G are the enhanced image and the real image, respectively; W_ij, H_ij, and C_ij represent the dimensions of the feature map of the pre-trained VGG; φ_ij represents the j-th convolution layer and the i-th feature map of the VGG-19 network; x, y, and z represent the width, height, and channel number of the feature map, respectively; the region loss obtains a dark region of the whole image by segmenting the darkest 40% of pixel values, and obtains the following loss function:
L_Region = w_L · 1/(m_L · n_L) · Σ_{i=1}^{m_L} Σ_{j=1}^{n_L} |E_L(i,j) - G_L(i,j)| + w_H · 1/(m_H · n_H) · Σ_{i=1}^{m_H} Σ_{j=1}^{n_H} |E_H(i,j) - G_H(i,j)|    (7)
wherein E_L and G_L are low-light regions of the enhanced image and the real image, respectively; E_H and G_H are non-low-light regions of the enhanced image and the real image, respectively; w_L and w_H are 4 and 1, respectively; m_L is the width of the image G_L, n_L is the height of the image G_L, m_H is the width of the image G_H, and n_H is the height of the image G_H.

9. A MaskRCNN water seepage detection system based on low-light compensation, comprising: a fused sample data enhancement module configured to extend a sample dataset with a captured accumulated water image and a water seepage image for accumulated water collected on the network using fused sample data enhancement; a data labeling module configured to perform data labeling on the extended dataset using Lableme and generate a labeled file of a water-accumulated region; an enhancement operation module configured to perform an enhancement operation on the labeled dataset, perform operations of flipping, scaling, and gamut changing on the image, and restore the image to a pixel size of an original image upon completion of the operations; a MaskRCNN model training module configured to train a MaskRCNN model using the enhanced dataset; a low-light enhanced water seepage region image acquisition module configured to import the image to be detected into an MBLLEN model, obtain a feature map of each layer through layers of a feature extraction module FEM, obtain an image of each feature map enhanced by a low light through layers of an enhancement module EM, input the enhanced feature maps into layers of a fusion module FM, and obtain a final water seepage region image enhanced by the low light; and a water seepage region position acquisition module configured to import the final output water seepage image enhanced by the low light into the MaskRCNN model, obtain a feature map by convolution calculation on the water seepage image, obtain a region proposal through RPN, and obtain a final water seepage region position through an ROI Align layer.
1. A MaskRCNN water seepage detection method based on low-light compensation, characterized by comprising: S1, capturing an accumulated water image, collecting a water seepage image for accumulated water on the network, and extending a sample dataset using fused sample data enhancement; S2, performing data labeling on the extended dataset using Labelme and generating a labeled file of the water-accumulated region; S3, performing an enhancement operation on the labeled dataset, performing flipping, scaling, and gamut-changing operations on the image, and restoring the image to the pixel size of the original image upon completion of the operations; S4, training a MaskRCNN model using the enhanced dataset; S5, importing the image to be detected into an MBLLEN model, obtaining a feature map of each layer through the layers of a feature extraction module FEM, obtaining a low-light enhanced image of each feature map through the layers of an enhancement module EM, inputting the enhanced feature maps into the layers of a fusion module FM, and obtaining a final low-light enhanced water seepage region image; and S6, importing the final low-light enhanced water seepage image into the MaskRCNN model, obtaining a feature map by convolution calculation on the water seepage image, obtaining a region proposal through the RPN, and obtaining a final water seepage region position through an RoI Align layer.
2. The MaskRCNN water seepage detection method based on low-light compensation according to claim 1, characterized in that the fused sample data enhancement in S1 specifically comprises: randomly selecting two images from a training set and enhancing them using a data enhancement method, wherein the data enhancement method comprises flipping, adding noise, and cropping; and fusing the two accumulated water images according to random weights to increase sample diversity; wherein the formula for fusing the two images is as follows:

Image(R,G,B) = η × Image1(R,G,B) + (1 − η) × Image2(R,G,B), η = rand(0.3–0.7)   (1)

where Image(R,G,B) is the fused image, Image1(R,G,B) and Image2(R,G,B) are the two original images, η = rand(0.3–0.7) is a random fusion weight between 0.3 and 0.7, and R, G, B are the three channels of the image.
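The fusion step of claim 2 is a mixup-style blend of two samples. A minimal sketch of Eq. (1) in NumPy (the function name and the uint8 handling are illustrative assumptions, not from the patent):

```python
import numpy as np

def fuse_images(img1, img2, low=0.3, high=0.7, rng=None):
    """Blend two RGB images with a random weight, per Eq. (1):
    Image = eta * Image1 + (1 - eta) * Image2, eta = rand(0.3-0.7)."""
    rng = rng or np.random.default_rng()
    eta = rng.uniform(low, high)  # random fusion weight in [0.3, 0.7)
    fused = eta * img1.astype(np.float64) + (1.0 - eta) * img2.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8), eta
```

Applied to two randomly chosen training images that have already been flipped, noised, or cropped, this increases sample diversity without capturing new images.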
3. The MaskRCNN water seepage detection method based on low-light compensation according to claim 1, characterized in that the data labeling in S2 specifically comprises: labeling the water seepage region by a multi-line and multi-point method; labeling a polygonal contour of the water seepage region using the labeling tool Labelme; setting a label name for the water seepage contour; generating a corresponding JSON file for the labeled sample; and storing the contour and image information of the object region in the sample in the JSON file.
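The Labelme output of claim 3 stores each polygon as a list of (x, y) vertices in a JSON `shapes` array. A dependency-free sketch of turning such an annotation into a binary region mask (the even-odd ray-crossing rasterizer is a generic stand-in, and the field names follow the common Labelme layout rather than anything the patent specifies):

```python
import json

def polygon_mask(points, width, height):
    """Rasterize a polygon (list of (x, y) vertices) into a binary mask
    using an even-odd ray-crossing test per pixel."""
    mask = [[0] * width for _ in range(height)]
    n = len(points)
    for y in range(height):
        for x in range(width):
            inside = False
            j = n - 1
            for i in range(n):
                xi, yi = points[i]
                xj, yj = points[j]
                # toggle on each edge the horizontal ray from (x, y) crosses
                if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
                    inside = not inside
                j = i
            mask[y][x] = 1 if inside else 0
    return mask

def masks_from_labelme(json_text, width, height):
    """Parse a Labelme-style annotation (assumed 'shapes'/'points'/'label'
    fields) and return one binary mask per labeled contour."""
    ann = json.loads(json_text)
    return {s["label"]: polygon_mask(s["points"], width, height)
            for s in ann.get("shapes", [])}
```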
4. The MaskRCNN water seepage detection method based on low-light compensation according to claim 1, characterized in that in S4 the MaskRCNN model comprises a feature extraction framework ResNet and an RPN module; wherein the ResNet extracts features of the image to be detected using a multi-layer convolution structure, and the RPN is configured to generate a plurality of ROI regions; wherein MaskRCNN replaces RoI Pooling with an RoI Align layer and maps the plurality of ROI feature regions generated by the RPN layer to a uniform size of 7 × 7 using bilinear interpolation; wherein finally the plurality of ROI regions generated by the RPN layer are classified, a positioning-box regression operation is performed on them, and a mask corresponding to the water seepage region is generated using a fully convolutional network FCN.
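The core of the RoI Align step in claim 4 is bilinear sampling at fractional coordinates instead of the coordinate rounding done by RoI Pooling. A simplified single-channel sketch (one sample per output bin; real implementations average several sample points per bin):

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Sample feature map `feat` (H, W) at fractional coordinates
    via bilinear interpolation of the four neighbouring cells."""
    h, w = feat.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    dy, dx = y - y0, x - x0
    return (feat[y0, x0] * (1 - dy) * (1 - dx) + feat[y0, x1] * (1 - dy) * dx
            + feat[y1, x0] * dy * (1 - dx) + feat[y1, x1] * dy * dx)

def roi_align(feat, box, out_size=7):
    """Map one RoI (y1, x1, y2, x2 in feature-map coordinates) to a fixed
    out_size x out_size grid, sampling each output bin at its centre."""
    y1, x1, y2, x2 = box
    bh, bw = (y2 - y1) / out_size, (x2 - x1) / out_size
    out = np.empty((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            out[i, j] = bilinear_sample(feat, y1 + (i + 0.5) * bh, x1 + (j + 0.5) * bw)
    return out
```

Because no coordinate is rounded, region boundaries stay aligned with the input pixels, which matters for the per-pixel mask branch.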
5. The MaskRCNN water seepage detection method based on low-light compensation according to claim 1, characterized in that the loss function Loss of the MaskRCNN is defined as:

Loss = L_cls + L_box + L_mask   (2)

where L_cls is the classification error, L_box is the error generated by the positioning box, and L_mask is the error caused by the mask; wherein the classification error L_cls is constructed by introducing a log-likelihood loss, and the calculation formula of L_cls is as follows:

L_cls = −log P(Y|X) = −(1/N) Σ_{i=1}^{N} Σ_{j=1}^{M} y_{ij} log(p_{ij})   (3)

where X and Y are the predicted classification and the real classification, respectively; N is the number of input samples; M is the number of possible classes; p_{ij} represents the probability of class j predicted and output by the model for sample x_i; and y_{ij} represents whether the true class of sample x_i is class j; wherein the error L_box caused by the positioning box adopts the L1 loss; and the relative entropy of the pixels in the ROI region is calculated using a sigmoid function, giving the average relative entropy error L_mask.

6. The MaskRCNN water seepage detection method based on low-light compensation according to claim 1, characterized in that training the MaskRCNN model in S4 comprises performing fine-tuning by introducing weights pre-trained on a COCO dataset.
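Eq. (3) is the average negative log-likelihood over N samples and M classes. A small pure-Python sketch (one-hot `y_true` and model probabilities `p_pred` are assumed to be given):

```python
import math

def cls_loss(y_true, p_pred):
    """L_cls of Eq. (3): -(1/N) * sum_i sum_j y_ij * log(p_ij),
    with y_ij = 1 iff sample i truly belongs to class j."""
    n = len(y_true)
    total = 0.0
    for yi, pi in zip(y_true, p_pred):
        # only the true class contributes, so log(0) for other classes is never taken
        total += sum(y * math.log(p) for y, p in zip(yi, pi) if y)
    return -total / n
```

For two samples classified with probability 0.5 each, the loss is log 2; a perfectly confident correct prediction gives 0.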
7. The MaskRCNN water seepage detection method based on low-light compensation according to claim 1, characterized in that implementing the MBLLEN model in S5 specifically comprises: S51, dividing the MBLLEN model into a feature extraction module FEM, an enhancement module EM, and a fusion module FM; S52, the feature extraction module FEM comprising 10 layers of a unidirectional network structure, each layer adopting 32 convolution kernels of size 3 × 3 with a convolution stride of 1 and ReLU as the activation function, wherein the output of each layer is simultaneously the input of the next convolution layer of the feature extraction module FEM and the input of the corresponding convolution layer of the enhancement module EM; S53, the enhancement module EM comprising 10 sub-networks of identical structure, each comprising a 1-layer convolution, a 3-layer convolution, and a 3-layer deconvolution; and S54, the fusion module FM fusing all images output by the sub-networks of the enhancement module EM, a final enhancement result being obtained by a 3-channel convolution kernel of size 1 × 1.

8. The MaskRCNN water seepage detection method based on low-light compensation according to claim 7, characterized in that training the MBLLEN model comprises: defining a structure loss, a pre-trained-VGG content loss, and a region loss; wherein the formula of the loss function is as follows:

Loss = L_Str + L_VGG/i,j + L_Region   (4)
wherein a specific formula for the structure loss is as follows:

L_Str = L_SSIM + L_MS-SSIM   (5)

where L_SSIM is the structural similarity between the enhanced image and the real image and L_MS-SSIM is the multi-scale structural similarity; wherein the formula for the pre-trained-VGG content loss is as follows:

L_VGG/i,j = (1 / (W_{i,j} H_{i,j} C_{i,j})) Σ_{x=1}^{W_{i,j}} Σ_{y=1}^{H_{i,j}} Σ_{z=1}^{C_{i,j}} |φ_{i,j}(E)_{x,y,z} − φ_{i,j}(G)_{x,y,z}|   (6)

where E and G are the enhanced image and the real image, respectively; W_{i,j}, H_{i,j} and C_{i,j} represent the dimensions of the pre-trained VGG feature map; φ_{i,j} represents the j-th convolutional layer and the i-th feature map of the VGG-19 network; and x, y and z represent the width, height and channel number of the feature map, respectively; wherein the region loss obtains the dark region of the whole image by segmenting the darkest 40% of the image's pixel values, giving the following loss function:

L_Region = w_L (1/(m_L n_L)) Σ_{i=1}^{m_L} Σ_{j=1}^{n_L} |E_L(i,j) − G_L(i,j)| + w_H (1/(m_H n_H)) Σ_{i=1}^{m_H} Σ_{j=1}^{n_H} |E_H(i,j) − G_H(i,j)|   (7)

where E_L and G_L are the low-light regions of the enhanced image and of the real image, respectively; E_H and G_H are the non-low-light regions of the enhanced image and of the real image, respectively; w_L and w_H are 4 and 1, respectively; m_L is the width and n_L the height of image G_L; and m_H is the width and n_H the height of image G_H.
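The region loss of claim 8 splits the image at the 40th percentile of pixel intensity and weights reconstruction error in the dark region four times more heavily, per Eq. (7). A NumPy sketch (grayscale images; the quantile-based split is an assumption about how "the darkest 40% of pixel values" is segmented):

```python
import numpy as np

def dark_region_mask(img_gray, frac=0.4):
    """Mask of the darkest `frac` of pixels: the low-light region of Eq. (7)."""
    thresh = np.quantile(img_gray, frac)
    return img_gray <= thresh

def region_loss(enhanced, real, w_low=4.0, w_high=1.0):
    """L1 region loss of Eq. (7), with the low-light region weighted w_L = 4
    and the rest w_H = 1. Both inputs are same-shape grayscale arrays."""
    low = dark_region_mask(real)
    high = ~low
    l_low = np.abs(enhanced[low] - real[low]).mean()
    l_high = np.abs(enhanced[high] - real[high]).mean()
    return w_low * l_low + w_high * l_high
```

The asymmetric weights push the enhancement network to reproduce the real image most faithfully exactly where low-light degradation is worst.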
9. A MaskRCNN water seepage detection system based on low-light compensation, characterized by comprising: a fused sample data enhancement module configured to extend a sample dataset with a captured accumulated water image and a water seepage image for accumulated water collected on the network using fused sample data enhancement; a data labeling module configured to perform data labeling on the extended dataset using Labelme and generate a labeled file of the water-accumulated region; an enhancement operation module configured to perform an enhancement operation on the labeled dataset, perform flipping, scaling, and gamut-changing operations on the image, and restore the image to the pixel size of the original image upon completion of the operations; a MaskRCNN model training module configured to train a MaskRCNN model using the enhanced dataset; a low-light enhanced water seepage region image acquisition module configured to import the image to be detected into an MBLLEN model, obtain a feature map of each layer through the layers of a feature extraction module FEM, obtain a low-light enhanced image of each feature map through the layers of an enhancement module EM, input the enhanced feature maps into the layers of a fusion module FM, and obtain a final low-light enhanced water seepage region image; and a water seepage region position acquisition module configured to import the final output low-light enhanced water seepage image into the MaskRCNN model, obtain a feature map by convolution calculation on the water seepage image, obtain a region proposal through the RPN, and obtain a final water seepage region position through an RoI Align layer.
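End to end, the system of claim 9 is a two-stage pipeline: low-light compensation first, instance segmentation second. A shape-level sketch with stand-in callables for the two trained models (the names and the toy stand-ins are illustrative assumptions, not the patent's API):

```python
import numpy as np

def detect_seepage(image, mbllen_model, maskrcnn_model):
    """Claim 9 pipeline: MBLLEN enhancement, then MaskRCNN detection
    (feature map -> RPN proposals -> RoI Align -> region positions/masks)."""
    enhanced = mbllen_model(image)    # FEM -> EM -> FM low-light compensation
    return maskrcnn_model(enhanced)   # seepage region positions and masks

# toy stand-ins so the wiring can be exercised without trained weights
fake_mbllen = lambda img: np.clip(img * 1.5, 0.0, 1.0)          # brighten
fake_maskrcnn = lambda img: {"boxes": [(2, 2, 8, 8)], "masks": [img > 0.5]}
result = detect_seepage(np.full((16, 16, 3), 0.4), fake_mbllen, fake_maskrcnn)
```

The design point is that the detector only ever sees compensated images, so its training data (claims 1 to 6) and its inference input share the same brightness statistics.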
LU505937A 2022-04-29 2022-11-25 MASKRCNN WATER SEEPAGE DETECTION METHOD AND SYSTEM BASED ON LOW-LIGHT COMPENSATION LU505937B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210464625.8A CN115240020A (en) 2022-04-29 2022-04-29 MaskRCNN water seepage detection method and system based on weak light compensation

Publications (1)

Publication Number Publication Date
LU505937B1 true LU505937B1 (en) 2024-04-29

Family

ID=83667997

Family Applications (1)

Application Number Title Priority Date Filing Date
LU505937A LU505937B1 (en) 2022-04-29 2022-11-25 MASKRCNN WATER SEEPAGE DETECTION METHOD AND SYSTEM BASED ON LOW-LIGHT COMPENSATION

Country Status (3)

Country Link
CN (1) CN115240020A (en)
LU (1) LU505937B1 (en)
WO (1) WO2023207064A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115240020A (en) * 2022-04-29 2022-10-25 清远蓄能发电有限公司 MaskRCNN water seepage detection method and system based on weak light compensation
CN117315446B (en) * 2023-11-29 2024-02-09 江西省水利科学院(江西省大坝安全管理中心、江西省水资源管理中心) Reservoir spillway abnormity intelligent identification method oriented to complex environment
CN118015525B (en) * 2024-04-07 2024-06-28 深圳市锐明像素科技有限公司 Method, device, terminal and storage medium for identifying road ponding in image

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
WO2020047738A1 (en) * 2018-09-04 2020-03-12 安徽中科智能感知大数据产业技术研究院有限责任公司 Automatic pest counting method based on combination of multi-scale feature fusion network and positioning model
CN110675415B (en) * 2019-12-05 2020-05-15 北京同方软件有限公司 Road ponding area detection method based on deep learning enhanced example segmentation
CN113469177B (en) * 2021-06-30 2024-04-26 河海大学 Deep learning-based drainage pipeline defect detection method and system
CN114298145B (en) * 2021-11-22 2024-07-09 三峡大学 Deep learning-based permeable concrete pore intelligent recognition and segmentation method
CN115240020A (en) * 2022-04-29 2022-10-25 清远蓄能发电有限公司 MaskRCNN water seepage detection method and system based on weak light compensation

Also Published As

Publication number Publication date
CN115240020A (en) 2022-10-25
WO2023207064A1 (en) 2023-11-02


Legal Events

Date Code Title Description
FG Patent granted

Effective date: 20240429