LU505937B1 - MASKRCNN WATER SEEPAGE DETECTION METHOD AND SYSTEM BASED ON LOW-LIGHT COMPENSATION - Google Patents
- Publication number: LU505937B1
- Authority: LU (Luxembourg)
Classifications
- G06T7/11—Region-based segmentation
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/75—Unsharp masking
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/761—Proximity, similarity or dissimilarity measures
- G06V10/764—Pattern recognition or machine learning using classification, e.g. of video objects
- G06V10/7715—Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/82—Pattern recognition or machine learning using neural networks
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20221—Image fusion; Image merging
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/803—Fusion of input or preprocessed data at the sensor, preprocessing, feature extraction or classification level
- G06V2201/07—Target detection
- Y02E10/20—Hydro energy
Abstract
Disclosed is a MaskRCNN water seepage detection method and system based on low-light compensation, which includes: S1, extending a sample dataset using fused sample data enhancement; S2, performing data labeling on the extended dataset using Labelme and generating a labeled file of a water-accumulated region; S3, performing an enhancement operation on the labeled dataset; S4, training a MaskRCNN model using the enhanced dataset; S5, importing the image to be detected into an MBLLEN model and obtaining a final low-light-enhanced image of the water seepage region; and S6, importing the final low-light-enhanced water seepage image into the MaskRCNN model, obtaining a feature map by convolution calculation on the water seepage image, obtaining a region proposal through an RPN, and obtaining the final position of the water seepage region through an RoI Align layer. Low-light enhancement is performed by importing the inspection image to be detected into an MBLLEN model, and the enhanced image is then imported into a MaskRCNN model for water-accumulated-region detection, thus not only performing effective object detection but also accurately segmenting the boundaries of the object region.
Description
Ref: PCT/CN2022/134451
MASKRCNN WATER SEEPAGE DETECTION METHOD AND SYSTEM BASED ON LOW-LIGHT COMPENSATION
The present disclosure relates to the technical field of identification algorithm optimization for inspection robots in complex indoor environments, and in particular to a MaskRCNN water seepage detection method and system based on low-light compensation.
During operation, hydraulic turbine units often suffer frequent leakage at the spindle seal, which seriously affects their stable operation. Accidents such as short circuits become more likely in hydraulic turbine layers where cables are distributed, and when the leakage is large there is a serious hidden danger of water accumulation in those layers. At the same time, equipment failures caused by dripping and leaking in the hydraulic turbine layers should be repaired promptly to maintain stable production. Regular and comprehensive dripping-and-leaking inspection of the equipment regions in the hydraulic turbine layers, analysis of the overall water seepage situation, and timely repair and maintenance according to the form and degree of the seepage therefore effectively improve the safety factor and reduce the economic losses and potential safety hazards caused by water seepage and leakage. However, due to the poor lighting conditions in the hydraulic turbine layers, the boundaries of the water seepage and leakage regions are difficult to distinguish in the images taken by inspection robots, even with supplementary lighting.
Summary of the Invention
In order to solve the above technical problems in the prior art, the present disclosure provides a MaskRCNN water seepage detection method and system based on low-light compensation. Low-light enhancement is performed by importing an inspection image to be detected into an MBLLEN model, and the enhanced image is then imported into a MaskRCNN model for water-accumulated-region detection, thus not only performing effective object detection but also accurately segmenting the boundaries of the object region.
The present disclosure is achieved by the following technical solutions: a MaskRCNN water seepage detection method based on low-light compensation, including:
S1, capturing an accumulated water image, collecting similar water seepage images of accumulated water from the network, and extending a sample dataset using fused sample data enhancement;
S2, performing data labeling on the extended dataset using Labelme and generating a labeled file of a water-accumulated region;
S3, performing an enhancement operation on the labeled dataset, performing operations of flipping, scaling, and gamut changing on the image, and restoring the image to the pixel size of the original image upon completion of the operations;
S4, training a MaskRCNN model using the enhanced dataset;
S5, importing the image to be detected into an MBLLEN model, obtaining a feature map of each layer through the layers of a feature extraction module (FEM), obtaining a low-light-enhanced image of each feature map through the layers of an enhancement module (EM), inputting the enhanced feature maps into the layers of a fusion module (FM), and obtaining a final low-light-enhanced water seepage region image; and
S6, importing the final low-light-enhanced water seepage image into the MaskRCNN model, obtaining a feature map by convolution calculation on the water seepage image, obtaining a region proposal through an RPN, and obtaining the final water seepage region position through an RoI Align layer.
The system of the present disclosure is achieved by the following technical solutions: a MaskRCNN water seepage detection system based on low-light compensation, including:
a fused sample data enhancement module configured to extend a sample dataset, built from captured accumulated water images and water seepage images of accumulated water collected from the network, using fused sample data enhancement;
a data labeling module configured to perform data labeling on the extended dataset using Labelme and generate a labeled file of a water-accumulated region;
an enhancement operation module configured to perform an enhancement operation on the labeled dataset, perform operations of flipping, scaling, and gamut changing on the image, and restore the image to the pixel size of the original image upon completion of the operations;
a MaskRCNN model training module configured to train a MaskRCNN model using the enhanced dataset;
a low-light-enhanced water seepage region image acquisition module configured to import the image to be detected into an MBLLEN model, obtain a feature map of each layer through the layers of a feature extraction module (FEM), obtain a low-light-enhanced image of each feature map through the layers of an enhancement module (EM), input the enhanced feature maps into the layers of a fusion module (FM), and obtain a final low-light-enhanced water seepage region image; and
a water seepage region position acquisition module configured to import the final low-light-enhanced water seepage image into the MaskRCNN model, obtain a feature map by convolution calculation on the water seepage image, obtain a region proposal through an RPN, and obtain the final water seepage region position through an RoI Align layer.
The present disclosure has the following advantages and beneficial effects compared to the prior art.
Low-light enhancement is performed by importing an inspection image to be detected into an MBLLEN model, and the enhanced image is then imported into a MaskRCNN model for water-accumulated-region detection, thus not only performing effective object detection but also accurately segmenting the boundaries of the object region.
Fig. 1 is a flowchart of the method of the present disclosure;
Fig. 2(a) is a schematic diagram of capturing an accumulated water image according to the present disclosure;
Fig. 2(b) is a schematic diagram of a water seepage image of accumulated water collected from the network;
Fig. 2(c) is a schematic diagram of fused sample data enhancement according to the present disclosure;
Fig. 3 is a schematic diagram of an image of a water-accumulated region labeled by the Labelme software;
Fig. 4 is a structural schematic diagram of the MaskRCNN model;
Fig. 5 is a structural schematic diagram of the MBLLEN model;
Fig. 6(a) is a low-light image; and
Fig. 6(b) is an enhanced image.
Detailed Description of the Embodiments
Hereinafter, the present disclosure will be explained in detail with reference to the embodiments and the accompanying drawings, but the embodiments of the present disclosure are not limited thereto.
As shown in Fig. 1, the MaskRCNN water seepage detection method based on low-light compensation in the present embodiment includes the following steps.
At S1, an accumulated water image is captured, similar water seepage images of accumulated water are collected from the network, and the sample dataset is extended using fused sample data enhancement.
At S2, data labeling is performed on the extended dataset using Labelme and a labeled file of a water-accumulated region is generated.
At S3, an enhancement operation is performed on the labeled dataset: operations such as flipping, scaling, and gamut changing are performed on the image, and the image is restored to the pixel size of the original image upon completion of the operations.
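The flip, scale, and gamut operations in S3, together with the restore-to-original-size step, can be sketched as follows. This is a minimal NumPy illustration with nearest-neighbor resizing; the flip probability, scale range, and per-channel gains are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def enhance_sample(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Flip, scale, and gamut-shift an RGB image, then restore its size."""
    h, w = img.shape[:2]
    out = img
    if rng.random() < 0.5:
        out = out[:, ::-1]                               # random horizontal flip
    s = rng.uniform(0.8, 1.2)                            # random scale factor (assumed range)
    ys = np.clip((np.arange(int(h * s)) / s).astype(int), 0, h - 1)
    xs = np.clip((np.arange(int(w * s)) / s).astype(int), 0, w - 1)
    out = out[ys][:, xs]                                 # nearest-neighbor scaling
    gains = rng.uniform(0.9, 1.1, size=3)                # simple gamut change (assumed range)
    out = np.clip(out.astype(np.float32) * gains, 0, 255).astype(np.uint8)
    ys = np.clip(np.arange(h) * out.shape[0] // h, 0, out.shape[0] - 1)
    xs = np.clip(np.arange(w) * out.shape[1] // w, 0, out.shape[1] - 1)
    return out[ys][:, xs]                                # restore the original pixel size
```

The final resampling step guarantees that every augmented sample keeps the pixel dimensions of the original image, so the labeled polygons remain usable after scaling.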
At S4, a MaskRCNN model is trained using the enhanced dataset.
At S5, the image to be detected is imported into an MBLLEN model, a feature map of each layer is obtained through the layers of a feature extraction module (FEM), a low-light-enhanced image of each feature map is obtained through the layers of an enhancement module (EM), the enhanced feature maps are input into the layers of a fusion module (FM), and a final low-light-enhanced water seepage region image is obtained.
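As a rough structural sketch, the FM step concatenates the EM outputs and applies a 1×1 convolution to produce the final image; per pixel this amounts to a weighted combination of the enhanced images. The sketch below uses uniform placeholder weights rather than trained MBLLEN parameters, and is a simplification of the actual multi-channel fusion:

```python
import numpy as np

def fm_fuse(enhanced_images, weights=None):
    """Fusion module (FM) sketch: a per-pixel weighted sum of the EM outputs,
    standing in for MBLLEN's 1x1 convolution over their concatenation.
    The weights are uniform placeholders, not trained parameters."""
    stack = np.stack([np.asarray(im, dtype=np.float32) for im in enhanced_images])
    n = stack.shape[0]
    if weights is None:
        weights = np.full(n, 1.0 / n, dtype=np.float32)   # placeholder weights
    fused = np.tensordot(weights, stack, axes=(0, 0))     # (H, W, 3)
    return np.clip(fused, 0.0, 255.0)
```

In the trained network the fusion weights are learned jointly with the FEM and EM layers, so the contribution of each enhancement branch is data-driven rather than uniform.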
At S6, the final low-light-enhanced water seepage image is imported into the MaskRCNN model, a feature map is obtained by convolution calculation on the water seepage image, a region proposal is obtained through an RPN, and the final water seepage region position is obtained through an RoI Align layer.
As shown in Fig. 2(a), Fig. 2(b), and Fig. 2(c), in this embodiment, a specific process of the fused sample data enhancement in S1 is as follows.
Two images are randomly selected from the training set and enhanced using data enhancement methods including flipping, adding noise, and cropping, and the two accumulated water images are fused according to random weights to increase sample diversity. Specifically, the formula for fusing the two images is as follows:
Image(R,G,B) = η × Image1(R,G,B) + (1 − η) × Image2(R,G,B), η = rand(0.3–0.7)   (1)
where Image(R,G,B) is the fused image, Image1(R,G,B) and Image2(R,G,B) are the two original images, η = rand(0.3–0.7) denotes a random fusion weight drawn from 0.3 to 0.7, and R, G, B are the three channels of the images.
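Equation (1) translates directly into code. A minimal sketch follows; the clipping to [0, 255] and the uint8 output format are assumptions about the image representation, not stated in the disclosure:

```python
import numpy as np

def fuse_images(image1: np.ndarray, image2: np.ndarray,
                rng: np.random.Generator) -> np.ndarray:
    """Fuse two equally sized RGB images per Equation (1):
    Image = eta * Image1 + (1 - eta) * Image2, with eta ~ rand(0.3-0.7)."""
    eta = rng.uniform(0.3, 0.7)                          # random fusion weight
    fused = (eta * image1.astype(np.float32)
             + (1.0 - eta) * image2.astype(np.float32))
    return np.clip(fused, 0, 255).astype(np.uint8)
```

Keeping η within 0.3–0.7 ensures both source images contribute visibly to the fused sample, which is the point of the diversity-increasing fusion.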
As shown in Fig. 3, in this embodiment, a specific process of the data labeling in S2 is as follows. The water seepage region is labeled by a multi-line and multi-point method: a polygonal contour of the water seepage region is labeled using the labeling tool Labelme, a label name of the water seepage contour is set, and a json file corresponding to the labeled sample is generated; the contour and image information of the object region in the sample are stored in the json file.
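A sketch of what reading such an annotation back looks like is shown below. The field names (`shapes`, `label`, `points`, `imagePath`) follow the common Labelme JSON layout; the concrete label name `water_seepage` and the polygon coordinates are invented for the example.

```python
import json

# A minimal Labelme-style annotation for one water seepage contour.
annotation = {
    "imagePath": "seepage_001.jpg",
    "imageHeight": 480,
    "imageWidth": 640,
    "shapes": [
        {"label": "water_seepage", "shape_type": "polygon",
         "points": [[10, 20], [120, 25], [110, 200], [15, 190]]},
    ],
}
raw = json.dumps(annotation)        # what the labeling step writes to disk

# Recover the contour and image information from the json content.
data = json.loads(raw)
contours = {s["label"]: s["points"] for s in data["shapes"]
            if s["shape_type"] == "polygon"}
```

The polygon points are exactly the multi-line, multi-point contour the labeler drew, so downstream training code can rasterize them into a binary mask for MaskRCNN supervision.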
As shown in Fig. 4, in this embodiment, the MaskRCNN is an instance segmentation framework for performing effective object detection and accurately segmenting boundaries of the object region. The MaskRCNN model mainly includes a feature extraction framework ResNet and an RPN module. The ResNet extracts features of the image to be detected using a multi-layer convolution structure, and the RPN is configured to generate a plurality of ROI regions. The MaskRCNN replaces RoI Pooling with an RoI Align layer and maps the plurality of ROI feature regions generated by the RPN to a uniform size of 7×7 using bilinear interpolation. Finally, the plurality of ROI regions generated by the RPN layer are classified, a regression operation of a positioning box is performed on them, and a Mask corresponding to the water seepage region is generated using a fully convolutional network FCN.
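The bilinear mapping of an ROI to a fixed 7×7 grid can be illustrated with a simplified, single-sample-per-bin sketch (the full RoI Align averages several sample points per bin; this reduced version only conveys the interpolation idea).

```python
import numpy as np

def roi_align(feature, box, out_size=7):
    """Map one ROI on a 2-D feature map to out_size x out_size by
    bilinear interpolation (one sample per output bin; a simplified
    sketch of RoI Align, not the multi-sample version)."""
    x0, y0, x1, y1 = box                      # ROI in feature-map coordinates
    h, w = feature.shape
    out = np.zeros((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            # Continuous sample point at the centre of each output bin.
            y = y0 + (i + 0.5) * (y1 - y0) / out_size
            x = x0 + (j + 0.5) * (x1 - x0) / out_size
            yf, xf = int(np.floor(y)), int(np.floor(x))
            yc, xc = min(yf + 1, h - 1), min(xf + 1, w - 1)
            dy, dx = y - yf, x - xf
            out[i, j] = (feature[yf, xf] * (1 - dy) * (1 - dx)
                         + feature[yf, xc] * (1 - dy) * dx
                         + feature[yc, xf] * dy * (1 - dx)
                         + feature[yc, xc] * dy * dx)
    return out

# A 14x14 feature map whose value grows linearly with position, so the
# interpolation result can be checked exactly.
fmap = np.arange(196, dtype=np.float64).reshape(14, 14)
pooled = roi_align(fmap, box=(2.0, 2.0, 9.0, 9.0))
```

Unlike RoI Pooling, no coordinate is snapped to the integer grid, which is why RoI Align preserves the sub-pixel localization needed for accurate mask boundaries.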
In this embodiment, a loss function Loss of the MaskRCNN is defined as:
$$Loss = L_{cls} + L_{box} + L_{mask} \quad (2)$$

where $L_{cls}$ is a classification error, $L_{box}$ is an error generated by the positioning box, and $L_{mask}$ is an error caused by the Mask. The classification error $L_{cls}$ is constructed by introducing a log-likelihood loss, and a calculation formula of $L_{cls}$ is as follows:

$$L_{cls} = -\log P(Y|X) = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{M} y_{ij} \log(p_{ij}) \quad (3)$$

where X and Y are a testing classification and an actual classification, respectively, N is the number of input samples, M is the number of possible classes, $p_{ij}$ represents the probability of class j predicted and output by the model for sample $x_i$, and $y_{ij}$ indicates whether the true class of sample $x_i$ is class j. In order to increase the robustness of the loss function, the error $L_{box}$ caused by the positioning box adopts an L1 loss; and the relative entropy of pixels in the ROI region is calculated using a sigmoid function to obtain the average relative entropy error $L_{mask}$.
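The log-likelihood classification loss of Eq. (3) reduces, for one-hot labels, to averaging the negative log of the probability the model assigned to each true class. A minimal NumPy sketch (the function name and the toy 2-sample, 3-class example are assumptions):

```python
import numpy as np

def log_likelihood_loss(probs, labels):
    """L_cls = -(1/N) * sum_i sum_j y_ij * log(p_ij), as in Eq. (3).

    probs:  (N, M) predicted class probabilities p_ij.
    labels: (N,)   true class indices (y_ij is their one-hot form).
    """
    n = probs.shape[0]
    eps = 1e-12                               # guard against log(0)
    picked = probs[np.arange(n), labels]      # p_ij where y_ij = 1
    return -np.mean(np.log(picked + eps))

# Two samples, three classes; the model is fairly confident and correct.
p = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1]])
y = np.array([0, 1])
loss = log_likelihood_loss(p, y)
```

The loss shrinks toward zero as the predicted probability of the true class approaches 1, and grows without bound as it approaches 0, which is what drives the classifier head during training.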
In order to obtain better generalization performance of the MaskRCNN on a small labeled dataset, this embodiment introduces a weight file (mask_rcnn_coco.h5) pre-trained on the COCO dataset for fine-tuning. The MBLLEN and MaskRCNN detection models of water-accumulated regions are used for classification. The inspection image to be detected is imported into the MBLLEN model for low-light enhancement, then the low-light enhanced image is imported into the MaskRCNN model for water-accumulated region detection, and the labeled water seepage region is output.
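The two-stage inference described above (enhance first, then detect) can be sketched as follows. The stub functions below are stand-ins invented for the example; the real pipeline would load the trained MBLLEN network and the COCO-fine-tuned MaskRCNN in their place.

```python
import numpy as np

def mbllen_enhance(image):
    """Stand-in for MBLLEN low-light enhancement: a simple gain,
    clipped to the valid [0, 255] range."""
    return np.clip(image.astype(np.float64) * 1.8, 0, 255).astype(np.uint8)

def maskrcnn_detect(image):
    """Stand-in for MaskRCNN detection: threshold bright pixels into
    a binary water-region mask."""
    return (image.mean(axis=2) > 128).astype(np.uint8)

def detect_seepage(inspection_image):
    # Step 1: import the inspection image into MBLLEN for enhancement.
    enhanced = mbllen_enhance(inspection_image)
    # Step 2: import the enhanced image into MaskRCNN for detection.
    return maskrcnn_detect(enhanced)

dark = np.full((8, 8, 3), 90, dtype=np.uint8)   # a dim inspection image
mask = detect_seepage(dark)
```

Even with these toy stand-ins, the ordering matters: detection on the raw dark image finds nothing, while detection after enhancement recovers the region, which is the rationale for placing MBLLEN in front of MaskRCNN.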
As shown in Fig. 5, in this embodiment, the MBLLEN model is a multi-layer low-light enhancement deep learning network model. Image features of different layers are extracted by convolution calculation, and the feature maps of different layers are input into a plurality of sub-networks for enhancement. The MBLLEN model mainly includes a feature extraction module (FEM), an enhancement module (EM), and a fusion module (FM). The feature extraction module FEM consists of a 10-layer unidirectional network structure, where each layer adopts 32 convolution kernels of size 3×3 with a stride of 1, and its activation function is ReLU; the feature extraction module does not adopt a pooling layer. The output of each layer is simultaneously the input of the next convolution layer of the feature extraction module FEM and the input of the corresponding convolution layer of the enhancement module EM. Since the feature extraction module FEM includes 10 feature extraction layers, the enhancement module EM includes 10 sub-network structures with the same structure; each sub-network of the EM includes a convolution layer, 3 convolution layers, and 3 deconvolution layers. The fusion module FM fuses all images output from the sub-networks of the EM and obtains the final enhancement result by a 3-channel convolution kernel of size 1×1.
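One FEM layer (3×3 kernels, stride 1, ReLU, no pooling) can be sketched in NumPy to show why the spatial size is preserved through all 10 layers. This single-channel toy version is an assumption for illustration; the real FEM applies 32 kernels per layer and carries 32-channel feature maps.

```python
import numpy as np

def conv3x3_relu(x, kernel):
    """One FEM-style layer: 3x3 convolution, stride 1, zero ('same')
    padding, ReLU activation, no pooling - spatial size is preserved."""
    padded = np.pad(x, 1)                     # 'same' zero padding
    h, w = x.shape
    out = np.zeros_like(x, dtype=np.float64)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return np.maximum(out, 0.0)               # ReLU

x = np.random.rand(16, 16)                    # a toy single-channel input
k = np.full((3, 3), 1.0 / 9.0)                # an averaging kernel
feat = x
for _ in range(10):                           # the 10 FEM layers
    feat = conv3x3_relu(feat, k)
```

Because every layer keeps the input resolution, each of the 10 intermediate feature maps can be handed to its EM sub-network and the FM can later fuse the sub-network outputs at full image size.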
In order to train the MBLLEN model for image low-light compensation, a structure loss (Str), a pre-training VGG content loss (VGG), and a region loss are defined, respectively.
Specifically, the formula of the loss function is as follows:
$$Loss = L_{Str} + L_{VGG/i,j} + L_{Region} \quad (4)$$

where the structure loss is mainly configured to reduce the structural distortion between an enhanced image and a real image, and the specific formula is as follows:

$$L_{Str} = L_{SSIM} + L_{MS\text{-}SSIM} \quad (5)$$

where $L_{SSIM}$ is the structural similarity of the enhanced image and the real image, and $L_{MS\text{-}SSIM}$ is a multi-level structural similarity. The pre-training VGG content loss minimizes the absolute difference between the enhanced image and the real image output by the pre-training VGG-19 network, and the formula of the loss function is as follows:

$$L_{VGG/i,j} = \frac{1}{W_{i,j} H_{i,j} C_{i,j}} \sum_{x=1}^{W_{i,j}} \sum_{y=1}^{H_{i,j}} \sum_{z=1}^{C_{i,j}} \left| \phi_{i,j}(E)_{x,y,z} - \phi_{i,j}(G)_{x,y,z} \right| \quad (6)$$

where E and G are the enhanced image and the real image, respectively; $W_{i,j}$, $H_{i,j}$, and $C_{i,j}$ represent the dimensions of the feature map of the pre-training VGG; $\phi_{i,j}$ represents the i-th feature map of the j-th convolution layer of the VGG-19 network; and x, y, and z represent the width, height, and channel number of the feature map, respectively. The region loss approximates the dark region of the whole image by segmenting the 40% darkest pixel values of the image, and the following loss function is obtained:

$$L_{Region} = w_L \cdot \frac{1}{m_L n_L} \sum_{i=1}^{m_L} \sum_{j=1}^{n_L} \left| E_L(i,j) - G_L(i,j) \right| + w_H \cdot \frac{1}{m_H n_H} \sum_{i=1}^{m_H} \sum_{j=1}^{n_H} \left| E_H(i,j) - G_H(i,j) \right| \quad (7)$$

where $E_L$ and $G_L$ are the low-light regions of the enhanced image and the real image, respectively; $E_H$ and $G_H$ are the non-low-light regions of the enhanced image and the real image, respectively; $w_L$ and $w_H$ are 4 and 1, respectively; $m_L$ is the width of the image $G_L$, $n_L$ is the height of the image $G_L$, $m_H$ is the width of the image $G_H$, and $n_H$ is the height of the image $G_H$.
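The region loss of Eq. (7) is straightforward to sketch: split the image at its 40% darkest pixel value and weight the low-light residual four times as heavily as the rest. This NumPy version is an illustrative reading of the formula (the function name and the linear-ramp test image are assumptions).

```python
import numpy as np

def region_loss(enhanced, real, dark_frac=0.4, w_low=4.0, w_high=1.0):
    """Region loss of Eq. (7): the 40% darkest pixels of the real image
    define the low-light region, weighted w_L = 4; the remaining pixels
    form the non-low-light region, weighted w_H = 1."""
    thresh = np.quantile(real, dark_frac)     # 40% darkest pixel value
    low = real <= thresh
    l_low = np.mean(np.abs(enhanced[low] - real[low]))
    l_high = np.mean(np.abs(enhanced[~low] - real[~low]))
    return w_low * l_low + w_high * l_high

g = np.linspace(0.0, 1.0, 100).reshape(10, 10)   # "real" image, a ramp
e = g + 0.1                                      # "enhanced" image, offset by 0.1
loss = region_loss(e, g)
```

The asymmetric weights push the network to spend most of its capacity on the dark regions, which is exactly where low-light enhancement matters.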
A low-light dataset is synthesized based on the PASCAL VOC dataset: Gamma correction and Poisson noise with a peak value of 200 are applied to generate the low-light input images, and the original images serve as the real images. The enhanced image results for low-light water seepage images are shown in Fig. 6 (a) and Fig. 6 (b).
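The synthetic low-light degradation just described can be sketched as gamma-darkening followed by Poisson (shot) noise at the stated peak. The gamma value of 3.0 and the constant test image below are assumptions for the example; only the peak value of 200 comes from the text.

```python
import numpy as np

def synthesize_low_light(image, gamma=3.0, peak=200.0):
    """Darken a normal-light image with gamma correction and add
    Poisson noise with the given peak value, mimicking the synthetic
    low-light training inputs."""
    rng = np.random.default_rng(0)
    x = image.astype(np.float64) / 255.0
    dark = x ** gamma                          # gamma > 1 darkens the image
    noisy = rng.poisson(dark * peak) / peak    # Poisson noise, peak = 200
    return np.clip(noisy * 255.0, 0, 255).astype(np.uint8)

bright = np.full((32, 32, 3), 180, dtype=np.uint8)  # stands in for a VOC image
low = synthesize_low_light(bright)
```

Pairing each synthesized `low` image with its original `bright` counterpart gives the (input, ground-truth) pairs on which the MBLLEN losses of Eqs. (4)-(7) are computed.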
Based on the same inventive concept, the present disclosure proposes a MaskRCNN water seepage detection system based on low-light compensation, including:

a fused sample data enhancement module configured to extend a sample dataset with captured accumulated water images and water seepage images of accumulated water collected from the network using fused sample data enhancement;

a data labeling module configured to perform data labeling on the extended dataset using Labelme and generate a labeled file of the water-accumulated region;

an enhancement operation module configured to perform an enhancement operation on the labeled dataset, perform operations of flipping, scaling, and gamut changing on the image, and restore the image to the pixel size of the original image upon completion of the operations;

a MaskRCNN model training module configured to train a MaskRCNN model using the enhanced dataset;

a low-light enhanced water seepage region image acquisition module configured to import the image to be detected into an MBLLEN model, obtain a feature map of each layer through the layers of the feature extraction module FEM, obtain a low-light enhanced image of each feature map through the layers of the enhancement module EM, input the enhanced feature maps into the layers of the fusion module FM, and obtain a final low-light enhanced water seepage region image; and

a water seepage region position acquisition module configured to import the final output low-light enhanced water seepage image into the MaskRCNN model, obtain a feature map by convolution calculation on the water seepage image, obtain a region proposal through the RPN, and obtain a final water seepage region position through an RoI Align layer.
While the above embodiments are preferred embodiments of the present disclosure, the present disclosure is not limited thereto. Other changes, modifications, substitutions, combinations, and simplifications are all equivalent and can be made without departing from the spirit and principles of the present disclosure.
Claims (9)
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210464625.8A CN115240020A (en) | 2022-04-29 | 2022-04-29 | MaskRCNN water seepage detection method and system based on weak light compensation |
Publications (1)
Publication Number | Publication Date |
---|---|
LU505937B1 true LU505937B1 (en) | 2024-04-29 |
Family
ID=83667997
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
LU505937A LU505937B1 (en) | 2022-04-29 | 2022-11-25 | MASKRCNN WATER SEEPAGE DETECTION METHOD AND SYSTEM BASED ON LOW-LIGHT COMPENSATION |
Country Status (3)
Country | Link |
---|---|
CN (1) | CN115240020A (en) |
LU (1) | LU505937B1 (en) |
WO (1) | WO2023207064A1 (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FG | Patent granted |
Effective date: 20240429 |