CN117409309A - Underwater structure defect detection method and device - Google Patents


Info

Publication number
CN117409309A
Authority
CN
China
Prior art keywords
underwater
image
network
parameter estimation
rgb
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311454087.5A
Other languages
Chinese (zh)
Inventor
黄俊凯
张仲英
魏世峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huixinjia Suzhou Intelligent Technology Co ltd
Original Assignee
Huixinjia Suzhou Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huixinjia Suzhou Intelligent Technology Co ltd filed Critical Huixinjia Suzhou Intelligent Technology Co ltd
Priority to CN202311454087.5A priority Critical patent/CN117409309A/en
Publication of CN117409309A publication Critical patent/CN117409309A/en
Pending legal-status Critical Current

Classifications

    • G06V 20/05: Underwater scenes
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06N 3/08: Learning methods
    • G06V 10/20: Image preprocessing
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/82: Image or video recognition or understanding using neural networks
    • G06V 2201/07: Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a device for detecting defects of an underwater structure. The method comprises the following steps: acquiring surface image data of the underwater structure with an underwater structure defect detection device; sharpening the image with an underwater image restoration method based on a convolutional neural network; and detecting and identifying defect targets in the image with a lightweight YOLOv5 deep learning model. The invention addresses the fog-like blurring, severe color cast, low contrast and brightness, and insufficient sharpness of data collected in complex water environments in the prior art, and thereby the low detection efficiency and low detection accuracy caused by poor imaging quality.

Description

Underwater structure defect detection method and device
Technical Field
The invention relates to the technical field of underwater defect detection, in particular to a method and a device for detecting defects of an underwater structure.
Background
Surface defects such as cracks form on the surfaces of underwater structures (the submerged parts of dam bodies, diversion tunnels, urban underwater pipe networks and the like) under prolonged high-pressure water environments, and must be detected and maintained regularly. At present this relies mainly on manual maintenance, which suffers from a high risk factor, low detection efficiency and a high false-detection rate. Detection methods using underwater robots are therefore adopted, mainly remotely operated vehicles (ROVs), autonomous underwater vehicles (AUVs) or special-purpose underwater robots, to detect and identify defect targets. The detection system of such a robot is usually based on optical vision; however, because water environments are complex and differ greatly between water areas and between depths of the same water area, data collected in complex water environments are prone to fog-like blurring, severe color cast, low contrast and brightness, and insufficient sharpness, and the resulting poor imaging quality leads to low detection efficiency and low detection accuracy.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a method and a device for detecting defects of an underwater structure, which address the fog-like blurring, severe color cast, low contrast and brightness, and insufficient sharpness of data collected in complex water environments in the prior art, and thereby the low detection efficiency and low detection accuracy caused by poor imaging quality.
The invention is realized by the following technical scheme:
a method for detecting defects of an underwater structure comprises the following steps:
acquiring surface image data of the underwater structure by using an underwater structure defect detection device;
performing sharpening processing on the image by using an underwater image restoration method based on a convolutional neural network;
a defect target in the image is detected and identified by using a lightweight YOLOv5 deep learning model, and the defect type and relative coordinate information are output.
Further, the image data in the step S1 is RGB image data.
Further, the step of sharpening the image using the convolutional-neural-network-based underwater image restoration method specifically comprises:
combining the underwater optical imaging model with the RGB-D depth data set to synthesize an underwater image data set;
establishing a convolutional neural network for estimating necessary parameters of the underwater environment;
training a convolutional neural network using the synthesized underwater image dataset;
and restoring the underwater image by using the trained convolutional neural network combined with the underwater optical imaging physical model, to obtain a sharpened image.
Further, the specific model of the underwater optical imaging model is as follows:
I_c(x) = J_c(x)·t_c(x) + B_c·(1 - t_c(x));
where x denotes a single pixel (i, j) in the image and c denotes one of the red, green and blue (RGB) color channels; I_c(x) is the original underwater image captured by the deepwater network camera; J_c(x)·t_c(x) is the direct transmission component; B_c·(1 - t_c(x)) is the back-scattered component; and t_c(x) is the medium transmittance, expressed as t_c(x) = e^(-β_c·d(x)), where d(x) is the depth at pixel x and β_c is the attenuation coefficient of channel c.
Further, the RGB-D depth data set includes the NYU-v2 Depth data set and the Middlebury data set.
Further, the convolutional neural network includes:
the environment light parameter estimation network is used for estimating underwater environment light parameters;
a transmission map parameter estimation network for estimating transmission map parameters;
and both the ambient light parameter estimation network and the transmission map parameter estimation network adopt an encoder-decoder network structure.
Further, the step of training the convolutional neural network using the synthesized underwater image dataset comprises:
training the ambient light parameter estimation network using the synthesized underwater image dataset: the ambient light estimation network is trained with the synthesized underwater RGB image data and the corresponding ambient light labels. An ambient light label is the value of each of the three RGB channels; during training the three channel values are expanded into three two-dimensional matrices, each filled with the single-channel ambient light value, so that the resulting ambient light map has the same size and dimensionality as the RGB underwater image. The loss function of the ambient light parameter estimation network is a weighted combination of mean absolute error and mean square error;
training the transmission map parameter estimation network using the synthesized underwater image dataset: the transmittance estimation network is trained with the synthesized underwater RGB image data and the corresponding multi-scale transmission map labels, using a mixed weighting of mean absolute error, mean square error and the multi-scale structural similarity measure as the loss function.
Further, the step of restoring the underwater image using the trained convolutional neural network combined with the underwater optical imaging physical model, to obtain a sharpened image, specifically comprises the following steps:
inputting the image into an ambient light parameter estimation network, and outputting an RGB three-channel ambient light matrix B r 、B g 、B b
inputting the image into the transmission map parameter estimation network, which outputs the blue-channel transmission map t_b;
based on the optical characteristic that the three channels of light attenuate to different degrees in water, calculating the green-channel transmission map t_g and the red-channel transmission map t_r from the three-channel ambient light matrices B_r, B_g, B_b and the blue-channel transmission map t_b; since t_c(x) = e^(-β_c·d(x)), the transmission maps satisfy t_g = t_b^(β_g/β_b) and t_r = t_b^(β_r/β_b);
based on an underwater optical imaging physical model: i c (x)=J c (x)t c (x)+B c (1-t c (x) Combining the estimated background light and the blue channel transmission diagram t b And the calculated green channel transmission map t g Red channel transmission diagram t r Inversion calculation results in a restored image: j (J) c =(I c -B c )/t c +B c
A detection apparatus for performing the above-described method for detecting defects of an underwater structure, comprising:
the deepwater network camera is used for acquiring surface image data of the underwater structure;
the embedded main board is internally provided with a deep learning algorithm for carrying out definition processing on the image;
and the switch is used for transmitting the surface image data of the underwater structure acquired by the deepwater network camera to the embedded main board.
Further, the detection device further comprises a housing; the housing comprises a streamlined head and a cylindrical main body connected to each other; the embedded motherboard and the switch are arranged inside the cylindrical main body; and an annular fixing frame is fixed outside the cylindrical main body for fixing the deepwater network camera.
Compared with the prior art, the invention has the advantages that:
1. the invention effectively solves the problems of low detection precision and low detection efficiency caused by low imaging quality in the complex water area environment of the prior technical means.
2. The invention uses the convolutional-neural-network-based underwater image restoration method to sharpen the acquired underwater image data: it removes the blur and fog effect, corrects color cast, improves the brightness, contrast and sharpness of the image, and provides good preprocessing support for subsequent defect target detection and identification. A YOLOv5 target detection framework, which balances speed and accuracy, performs the defect detection and identification task; the model is further lightweighted, so that real-time detection capability improves without loss of detection accuracy and the visual detection capability of the underwater robot is enhanced.
3. The detection device of the invention can be mounted on an underwater robot to provide vision-system support. The streamlined head of the device reduces underwater motion resistance; the main body carries a miniature Ethernet switch module and a motherboard system based on an edge artificial intelligence module, supporting real-time execution of the visual deep learning algorithm on the robot's edge side and enhancing the robot's detection capability and efficiency.
4. The device-integrated, convolutional-neural-network-based underwater image restoration method improves the quality of the acquired images and enhances the vision capability of the underwater robot. The embedded on-board deep learning target detection algorithm processes multiple channels of imaging data to realize intelligent defect detection and identification, solving, with a low-cost approach that does not depend on complex underwater imaging equipment, the low detection accuracy and efficiency caused by the low quality of images acquired by underwater robots in complex water environments.
Drawings
FIG. 1 is a flow chart of a method for detecting defects of an underwater structure according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an apparatus for detecting defects of an underwater structure according to an embodiment of the present invention.
1. A deepwater network camera; 2. an embedded motherboard; 3. a switch; 4. a housing; 40. a streamlined head; 41. a cylindrical body; 42. waterproof network interfaces.
Detailed Description
The technical scheme of the invention is further described in non-limiting detail below with reference to the preferred embodiments and the accompanying drawings. In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "transverse", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", "axial", "radial", "circumferential", etc. refer to the azimuth or positional relationship based on the azimuth or positional relationship shown in the drawings. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise. The embodiments described below by referring to the drawings are illustrative and intended to explain the present invention and should not be construed as limiting the invention.
As shown in fig. 1, a method for detecting defects of an underwater structure according to an embodiment of the present invention includes the following steps:
s1: an underwater structure defect detection device is used for acquiring image data of the surface of an underwater structure.
The image data in step S1 is RGB image data, and the underwater structure defect detection device is a deepwater network camera.
S2: and performing sharpening processing on the image by using an underwater image restoration method based on a convolutional neural network.
The step S2 specifically comprises the following steps:
s20: an underwater optical imaging model is utilized in combination with the RGB-D depth dataset to synthesize an underwater image dataset.
The specific model of the underwater optical imaging model is as follows:
I_c(x) = J_c(x)·t_c(x) + B_c·(1 - t_c(x));
where x denotes a single pixel (i, j) in the image and c denotes one of the red, green and blue (RGB) color channels; I_c(x) is the original underwater image captured by the deepwater network camera; J_c(x)·t_c(x) is the direct transmission component; B_c·(1 - t_c(x)) is the back-scattered component; and t_c(x) is the medium transmittance, expressed as t_c(x) = e^(-β_c·d(x)), where d(x) is the depth at pixel x and β_c is the attenuation coefficient of channel c.
The RGB-D depth data sets include the NYU-v2 Depth data set and the Middlebury data set. An RGB-D depth data set provides RGB original images and corresponding depth images; combined with the data synthesis method of the underwater optical imaging physical model, an attenuation rate is preset according to the water environment, and the ambient light, transmission maps and underwater images required for training are synthesized from the indoor RGB image and depth image data.
Because paired training images and underwater environment parameters are difficult to obtain in a real underwater environment, a physical-model-based simulation method is adopted to build a synthetic underwater image data set. In this embodiment, starting from 1449 RGB-D pairs from the indoor NYU-v2 Depth data set and 34 RGB-D pairs from the Middlebury data set, an attenuation rate is preset according to the water environment and three-channel ambient light values are generated randomly within a reasonable range; the ambient light, transmission maps and synthesized underwater images required for training are then produced with the physical model. Three training pairs are synthesized from each pair of original data, giving 4449 training pairs in total, which are divided into training and test sets at a ratio of 7:3.
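The synthesis step described above can be sketched as follows; the attenuation coefficients, depth range and ambient-light range used here are illustrative assumptions, not the embodiment's calibrated values:

```python
import numpy as np

def synthesize_underwater(rgb, depth, beta, ambient):
    """Synthesize an underwater image from a clean RGB-D pair using
    I_c = J_c * t_c + B_c * (1 - t_c), with t_c = exp(-beta_c * d)."""
    # rgb: HxWx3 float in [0, 1]; depth: HxW depth map
    # beta: assumed per-channel attenuation coefficients (r, g, b)
    # ambient: per-channel ambient light B_c in [0, 1]
    t = np.exp(-depth[..., None] * np.asarray(beta))        # HxWx3 transmission
    underwater = rgb * t + np.asarray(ambient) * (1.0 - t)  # imaging model
    return underwater.clip(0.0, 1.0), t

rng = np.random.default_rng(0)
rgb = rng.random((4, 4, 3))            # stand-in for an NYU-v2 RGB crop
depth = rng.uniform(0.5, 3.0, (4, 4))  # stand-in depth map
beta = (0.8, 0.35, 0.25)               # hypothetical r/g/b attenuation rates
ambient = rng.uniform(0.2, 0.9, 3)     # random three-channel ambient light
img, t = synthesize_underwater(rgb, depth, beta, ambient)
```

In this sketch the transmission map `t` and the ambient-light vector serve as the training labels paired with the synthesized image `img`.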
S21: a convolutional neural network is established for estimating necessary parameters of the underwater environment.
The convolutional neural network comprises an ambient light parameter estimation network and a transmission map parameter estimation network: the ambient light parameter estimation network estimates the underwater ambient light parameters, and the transmission map parameter estimation network estimates the transmission map parameters. Both networks adopt an encoder-decoder network structure.
Specifically, the ambient light parameter estimation network uses skip connections with feature fusion between the convolution blocks at each level of the encoder and decoder; the transmission map parameter estimation network adds a feature pyramid network at the output of the encoder to further extract features from the encoded feature map, and the pyramid output feature maps are fed to the decoder. The loss function of the ambient light parameter estimation network is a weighted combination of mean absolute error (MAE) and mean square error (MSE); the transmission map parameter estimation network uses a hybrid weighted loss of MAE, MSE and the multi-scale structural similarity index measure (MS-SSIM).
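The two loss combinations can be sketched in NumPy. The weights are illustrative assumptions, and the whole-image SSIM below is a simplified single-scale stand-in for the MS-SSIM term, which additionally uses windowed statistics at several scales:

```python
import numpy as np

def mae_mse_loss(pred, target, w_mae=0.5, w_mse=0.5):
    """Weighted MAE + MSE, as used for the ambient-light network
    (the weights here are assumptions)."""
    mae = np.abs(pred - target).mean()
    mse = ((pred - target) ** 2).mean()
    return w_mae * mae + w_mse * mse

def global_ssim(pred, target, c1=0.01 ** 2, c2=0.03 ** 2):
    """Whole-image, single-scale SSIM approximation."""
    mu_p, mu_t = pred.mean(), target.mean()
    var_p, var_t = pred.var(), target.var()
    cov = ((pred - mu_p) * (target - mu_t)).mean()
    num = (2 * mu_p * mu_t + c1) * (2 * cov + c2)
    den = (mu_p ** 2 + mu_t ** 2 + c1) * (var_p + var_t + c2)
    return num / den

def transmission_loss(pred, target, w_mae=0.3, w_mse=0.3, w_ssim=0.4):
    """Hybrid MAE + MSE + (1 - SSIM) loss for the transmission-map network."""
    return (w_mae * np.abs(pred - target).mean()
            + w_mse * ((pred - target) ** 2).mean()
            + w_ssim * (1.0 - global_ssim(pred, target)))
```

Both losses vanish for a perfect prediction and grow with the discrepancy, which is the property the training loops below rely on.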
S22: the convolutional neural network is trained using the synthesized underwater image dataset.
The step S22 specifically includes:
Training the ambient light parameter estimation network using the synthesized underwater image dataset: the ambient light estimation network is trained with the synthesized underwater RGB image data and the corresponding ambient light labels. An ambient light label is the value of each of the three RGB channels; during training the three channel values are expanded into three two-dimensional matrices, each filled with the single-channel ambient light value, so that the resulting ambient light map has the same size and dimensionality as the RGB underwater image. The loss function of the ambient light parameter estimation network is a weighted combination of mean absolute error and mean square error. In this embodiment, the ambient light parameter estimation network is trained for 12000 rounds with a batch size of 32.
Training the transmission map parameter estimation network using the synthesized underwater image dataset: the transmittance estimation network is trained with the synthesized underwater RGB image data and the corresponding multi-scale transmission map labels, using a mixed weighting of mean absolute error, mean square error and the multi-scale structural similarity measure as the loss function. In this embodiment, the transmission map parameter estimation network is trained for 15000 rounds with a batch size of 16.
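The ambient-light training procedure can be illustrated with a toy stand-in: gradient descent on the weighted MAE + MSE loss, fitting a constant per-channel ambient-light estimate to noisy labels. The true values, noise level, learning rate and loss weights are all hypothetical; the real embodiment trains the convolutional encoder-decoder described above:

```python
import numpy as np

rng = np.random.default_rng(1)
true_B = np.array([0.3, 0.6, 0.8])  # hypothetical RGB ambient-light labels
est_B = np.full(3, 0.5)             # initial per-channel estimate
lr, w_mae, w_mse = 0.05, 0.5, 0.5   # assumed learning rate and loss weights

for step in range(2000):
    # a batch of 32 noisy ambient-light labels
    batch = true_B + rng.normal(0.0, 0.02, (32, 3))
    diff = est_B - batch
    # gradient of w_mae*|d|.mean() + w_mse*(d**2).mean() w.r.t. est_B
    grad = (w_mae * np.sign(diff) + w_mse * 2.0 * diff).mean(axis=0)
    est_B = est_B - lr * grad
```

After training, `est_B` settles near the true per-channel values, showing how the weighted loss drives the estimate toward the label.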
S23: the underwater image is restored using the trained convolutional neural network combined with the underwater optical imaging physical model, to obtain a sharpened image.
The step S23 specifically includes the following steps:
inputting the image into an ambient light parameter estimation network, and outputting an RGB three-channel ambient light matrix B r 、B g 、B b
inputting the image into the transmission map parameter estimation network, which outputs the blue-channel transmission map t_b;
based on the optical characteristic that the three channels of light attenuate to different degrees in water, calculating the green-channel transmission map t_g and the red-channel transmission map t_r from the three-channel ambient light matrices B_r, B_g, B_b and the blue-channel transmission map t_b; since t_c(x) = e^(-β_c·d(x)), the transmission maps satisfy t_g = t_b^(β_g/β_b) and t_r = t_b^(β_r/β_b);
based on an underwater optical imaging physical model: i c (x)=J c (x)t c (x)+B c (1-t c (x) Combining the estimated background light and the blue channel transmission diagram t b And the calculated green channel transmission map t g Red channel transmission diagram t r Inversion calculation results in a restored image: j (J) c =(I c -B c )/t c +B c
S3: a defect target in the image is detected and identified by using a lightweight YOLOv5 deep learning model, and the defect type and relative coordinate information are output.
Specifically, the core backbone feature extraction network of the YOLOv5 target detection framework is replaced with the lightweight network PeleeNet, and the three prediction branches of YOLOv5 are simplified; the resulting lightweight YOLOv5 deep learning model processes the sharpened underwater image and outputs the type and relative coordinate information of each defect target.
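The output format (defect type plus relative coordinates) can be sketched as a post-processing step on the detector's pixel-space boxes; the box tuple layout, the class list and the helper name are assumptions, not the patent's interface:

```python
def to_relative(dets, img_w, img_h, class_names):
    """Convert pixel-space detections (x1, y1, x2, y2, class_id) into the
    defect type plus relative centre/size coordinates in [0, 1]."""
    out = []
    for x1, y1, x2, y2, cls in dets:
        cx = (x1 + x2) / 2.0 / img_w   # relative box centre x
        cy = (y1 + y2) / 2.0 / img_h   # relative box centre y
        w = (x2 - x1) / img_w          # relative box width
        h = (y2 - y1) / img_h          # relative box height
        out.append({"type": class_names[int(cls)],
                    "cx": round(cx, 4), "cy": round(cy, 4),
                    "w": round(w, 4), "h": round(h, 4)})
    return out

# hypothetical crack detection in a 640x480 frame
result = to_relative([(100, 50, 300, 150, 0)], 640, 480, ["crack"])
```

Normalized coordinates keep the output independent of the camera resolution, which suits a device that carries cameras of more than one orientation.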
The invention thus uses the convolutional-neural-network-based underwater image restoration method to sharpen the acquired underwater image data: it removes the blur and fog effect, corrects color cast, improves the brightness, contrast and sharpness of the image, and provides good preprocessing support for subsequent defect target detection and identification. The YOLOv5 target detection framework, which balances speed and accuracy, performs the defect detection and identification task; the model is further lightweighted, so that real-time detection capability improves without loss of detection accuracy and the visual detection capability of the underwater robot is enhanced.
As shown in fig. 2, an apparatus for detecting defects of an underwater structure according to an embodiment of the present invention is configured to perform the above method, and comprises a deepwater network camera 1, an embedded motherboard 2 and a switch 3. The deepwater network camera 1 collects image data of the surface of the underwater structure; the embedded motherboard 2 carries the deep learning algorithm that sharpens the image; and the switch 3 transmits the image data collected by the deepwater network camera 1 to the embedded motherboard 2. In this embodiment, the deepwater network camera 1 is an optical deepwater network camera of model ZF-IPC-07E11, the embedded motherboard 2 is based on the Cambricon MLU220-SOM edge artificial intelligence module, and the switch 3 is a miniature gigabit Ethernet switch.
The detection device further comprises a housing 4; the housing 4 comprises a streamlined head 40 and a cylindrical main body 41 connected to each other. The streamlined head 40 reduces the device's resistance to movement in water; the embedded motherboard 2 and the switch 3 are arranged inside the cylindrical main body 41, and an annular fixing frame (not shown) is fixed outside the cylindrical main body 41 for fixing the deepwater network camera 1.
In this embodiment, two deepwater network cameras 1 are fixed to the outside of the cylindrical main body 41, acquiring image data directly in front of and directly below the device respectively. The rear end of the cylindrical main body 41 is a closed end face provided with waterproof network interfaces 42; inside the cylindrical main body 41 each waterproof network interface 42 connects directly to a network port of the switch 3, and outside it connects directly to a deepwater network camera 1. Two mounting brackets (not shown) are fixed to the outside of the cylindrical main body 41 for fixedly mounting the device on the underwater robot.
The detection device of the invention can thus be mounted on an underwater robot to provide vision-system support. The streamlined head reduces underwater motion resistance; the main body carries a miniature Ethernet switch module and a motherboard system based on an edge artificial intelligence module, supporting real-time execution of the visual deep learning algorithm on the robot's edge side and enhancing the robot's detection capability and efficiency. The device-integrated, convolutional-neural-network-based underwater image restoration method improves the quality of the acquired images and enhances the vision capability of the underwater robot; the embedded on-board deep learning target detection algorithm processes multiple channels of imaging data to realize intelligent defect detection and identification, solving, with a low-cost approach that does not depend on complex underwater imaging equipment, the low detection accuracy and efficiency caused by the low quality of images acquired by underwater robots in complex water environments.
The foregoing examples illustrate only a few embodiments of the invention and are described in detail herein without thereby limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.

Claims (10)

1. The method for detecting the defects of the underwater structure is characterized by comprising the following steps of:
acquiring surface image data of the underwater structure by using an underwater structure defect detection device;
performing sharpening processing on the image by using an underwater image restoration method based on a convolutional neural network;
a defect target in the image is detected and identified by using a lightweight YOLOv5 deep learning model, and the defect type and relative coordinate information are output.
2. The method according to claim 1, wherein the image data in the step S1 is RGB image data.
3. The method for detecting defects of an underwater structure according to claim 2, wherein the step of sharpening the image using the convolutional-neural-network-based underwater image restoration method specifically comprises:
combining the underwater optical imaging model with the RGB-D depth data set to synthesize an underwater image data set;
establishing a convolutional neural network for estimating necessary parameters of the underwater environment;
training a convolutional neural network using the synthesized underwater image dataset;
and restoring the underwater image by using the trained convolutional neural network combined with the underwater optical imaging physical model, to obtain a sharpened image.
4. A method of detecting defects in an underwater structure as claimed in claim 3, wherein the specific model of the underwater optical imaging model is as follows:
I_c(x) = J_c(x)·t_c(x) + B_c·(1 - t_c(x));
where x denotes a single pixel (i, j) in the image and c denotes one of the red, green and blue (RGB) color channels; I_c(x) is the original underwater image captured by the deepwater network camera; J_c(x)·t_c(x) is the direct transmission component; B_c·(1 - t_c(x)) is the back-scattered component; and t_c(x) is the medium transmittance, expressed as t_c(x) = e^(-β_c·d(x)), where d(x) is the depth at pixel x and β_c is the attenuation coefficient of channel c.
5. The underwater structure defect detection method according to claim 3, wherein the RGB-D depth datasets comprise the NYU-v2 Depth dataset and the Middlebury dataset.
6. The underwater structure defect detection method according to claim 3, wherein the convolutional neural network comprises:
an ambient light parameter estimation network for estimating underwater ambient light parameters;
a transmission map parameter estimation network for estimating transmission map parameters;
and both the ambient light parameter estimation network and the transmission map parameter estimation network are organized as encoder-decoder network structures.
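The encoder-decoder organization of the two estimation networks can be sketched as follows. The layer counts, channel widths, and kernel sizes are illustrative assumptions; only the overall encoder-decoder shape and the output channel counts (3 for the ambient light map, 1 for the blue-channel transmission map) follow from the claims:

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """Hypothetical encoder-decoder sketch for underwater parameter estimation."""
    def __init__(self, out_channels):
        super().__init__()
        # encoder: downsample the RGB input twice
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # decoder: upsample back to input resolution
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, out_channels, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

ambient_net = EncoderDecoder(out_channels=3)       # outputs B_r, B_g, B_b maps
transmission_net = EncoderDecoder(out_channels=1)  # outputs the t_b map
```

Both outputs keep the spatial size of the input, matching the claim that the ambient light map "keeps the same size and dimensionality as the RGB underwater image".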
7. The underwater structure defect detection method according to claim 6, wherein training the convolutional neural network with the synthesized underwater image dataset comprises:
training the ambient light parameter estimation network with the synthesized underwater image dataset: the ambient light estimation network is trained with the synthesized underwater RGB image data and the corresponding ambient light labels, where an ambient light label consists of the values of the three RGB channels; during training, the three channel values are expanded into three two-dimensional matrices, each filled with the corresponding single-channel ambient light value, so that the resulting ambient light map has the same size and dimensionality as the RGB underwater image; the loss function of the ambient light parameter estimation network is a weighted combination of the mean absolute error and the mean squared error;
training the transmission map parameter estimation network with the synthesized underwater image dataset: the transmittance estimation network is trained with the synthesized underwater RGB image data and the corresponding multi-scale transmission map labels, and a weighted mixture of the mean absolute error, the mean squared error, and the multi-scale structural similarity (MS-SSIM) measure is adopted as the loss function of the transmission map parameter estimation network.
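The two claimed loss functions can be sketched numerically as below. The weights are illustrative assumptions, and the structural-similarity term uses a single global-statistics SSIM as a simplified stand-in for the claimed multi-scale MS-SSIM:

```python
import numpy as np

def weighted_mae_mse(pred, target, w_mae=0.5, w_mse=0.5):
    """Ambient-light loss: weighted combination of MAE and MSE (weights assumed)."""
    err = pred - target
    return w_mae * np.mean(np.abs(err)) + w_mse * np.mean(err ** 2)

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    """SSIM computed from global image statistics, a simplified stand-in
    for the claimed multi-scale structural similarity (MS-SSIM) term."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def transmission_loss(pred, target, w_mae=0.4, w_mse=0.4, w_ssim=0.2):
    """Transmission-map loss: mixture of MAE, MSE, and a (1 - SSIM) term."""
    return (w_mae * np.mean(np.abs(pred - target))
            + w_mse * np.mean((pred - target) ** 2)
            + w_ssim * (1.0 - global_ssim(pred, target)))
```

Identical prediction and label give zero loss; the (1 − SSIM) term additionally penalizes structural mismatch that pixelwise MAE/MSE under-weight.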
8. The underwater structure defect detection method according to claim 7, wherein restoring the underwater image with the trained convolutional neural network combined with the underwater optical imaging physical model, thereby sharpening the image, specifically comprises:
inputting the image into the ambient light parameter estimation network and outputting the RGB three-channel ambient light matrices B_r, B_g, B_b;
inputting the image into the transmission map parameter estimation network and outputting the blue-channel transmission map t_b;
based on the optical property that the light of the three channels attenuates to different degrees in water, calculating the green-channel transmission map t_g and the red-channel transmission map t_r from the three-channel ambient light matrices B_r, B_g, B_b and the blue-channel transmission map t_b; with the exponential transmittance t_c(x) = e^(−β_c·d(x)), the transmission maps are related by t_g = t_b^(β_g/β_b) and t_r = t_b^(β_r/β_b), the attenuation-coefficient ratios being derived from the estimated ambient light;
based on the underwater optical imaging physical model I_c(x) = J_c(x)·t_c(x) + B_c·(1 − t_c(x)), and combining the estimated background light with the blue-channel transmission map t_b and the computed green-channel and red-channel transmission maps t_g, t_r, the restored image is obtained by inversion: J_c = (I_c − B_c)/t_c + B_c.
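A minimal restoration sketch following the inversion J_c = (I_c − B_c)/t_c + B_c. The fixed attenuation-coefficient ratios and the clipping floor t_min are illustrative assumptions (the claimed method derives the ratios from the estimated ambient light):

```python
import numpy as np

def restore_underwater(I, B, t_b, ratio_g=1.5, ratio_r=2.5, t_min=0.1):
    """Invert I_c = J_c * t_c + B_c * (1 - t_c) to J_c = (I_c - B_c) / t_c + B_c.

    I:   raw underwater RGB image, shape (H, W, 3), channels in (r, g, b) order
    B:   estimated ambient light per channel, length 3
    t_b: estimated blue-channel transmission map, shape (H, W)
    ratio_g, ratio_r: assumed attenuation-coefficient ratios beta_g/beta_b, beta_r/beta_b
    """
    B = np.asarray(B, dtype=float)
    # power-law relation t_c = t_b ** (beta_c / beta_b), from t_c = exp(-beta_c * d)
    t = np.stack([t_b ** ratio_r, t_b ** ratio_g, t_b], axis=-1)
    t = np.clip(t, t_min, 1.0)   # floor t to avoid amplifying noise where it is tiny
    J = (I - B) / t + B
    return np.clip(J, 0.0, 1.0)
```

As a sanity check, an image synthesized with the forward model and the same transmission maps is recovered exactly wherever t exceeds the clipping floor.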
9. A detection apparatus for performing the underwater structure defect detection method according to any one of claims 1 to 8, comprising:
a deepwater network camera (1) for acquiring surface image data of an underwater structure;
an embedded mainboard (2), in which a deep learning algorithm is deployed for sharpening the image;
and a switch (3) for transmitting the surface image data of the underwater structure acquired by the deepwater network camera (1) to the embedded mainboard (2).
10. The detection apparatus according to claim 9, further comprising a housing (4), wherein the housing (4) comprises a streamlined head (40) and a cylindrical body (41) connected to each other; the embedded mainboard (2) and the switch (3) are arranged inside the cylindrical body (41), and an annular fixing frame for mounting the deepwater network camera (1) is fixed outside the cylindrical body (41).
CN202311454087.5A 2023-11-03 2023-11-03 Underwater structure defect detection method and device Pending CN117409309A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311454087.5A CN117409309A (en) 2023-11-03 2023-11-03 Underwater structure defect detection method and device


Publications (1)

Publication Number Publication Date
CN117409309A true CN117409309A (en) 2024-01-16

Family

ID=89497805



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination