CN114926353B - Underwater image restoration method, device, equipment and storage medium - Google Patents
- Publication number: CN114926353B
- Application number: CN202210413325.7A
- Authority: CN (China)
- Prior art keywords: scene depth, value, image, underwater, pixel
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T5/00 — Image enhancement or restoration
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/92 — Dynamic range modification of images or parts thereof based on global image properties
- G06T7/194 — Segmentation; Edge detection involving foreground-background segmentation
- G06T7/536 — Depth or shape recovery from perspective effects, e.g. by using vanishing points
- G06T7/90 — Determination of colour characteristics
- Y02A90/30 — Assessment of water resources
Abstract
The invention discloses an underwater image restoration method, device, equipment and storage medium. A background light value is obtained from the pixel values of image blocks in the original underwater image; a scene depth estimation model is constructed to obtain the scene depth; parameters of the original underwater image such as the backscattering component estimate, the backscattering component value and the direct-component transmittance are estimated from the scene depth; and the background light value, the backscattering component value and the direct-component transmittance are substituted into a constructed physical imaging model to invert the degradation, yielding a first underwater restored image. Compared with the prior art, the technical scheme of the invention estimates the relevant parameters of the underwater image from its scene depth and inverts the degradation through the constructed physical imaging model to restore the underwater image, thereby reducing the dependence on manual design and improving the efficiency and accuracy of underwater image restoration.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an underwater image restoration method, apparatus, device, and storage medium.
Background
In underwater working scenes, the complexity of the water body causes light to be refracted and scattered as it propagates underwater, so underwater images are prone to severe blurring. Because the original clear underwater image cannot be obtained and the blur function that degrades the underwater image cannot be measured accurately, the application of underwater image restoration technology has long been limited, which in turn has constrained the development of underwater detection and related technologies.
Most existing image restoration techniques model the image according to an image degradation model and require a manually designed cost function to optimize the inverse problem, thereby estimating the original image and the blur function. For example, an underwater physical imaging model may be introduced into a variational energy model, data and smoothness terms based on underwater characteristics may be designed, and auxiliary variables together with norm constraints may be used to quickly solve the energy-minimization problem and recover clear underwater image content. However, this process relies on manually designed constraints, places high demands on technicians, is cumbersome to operate, and its restoration quality depends heavily on those hand-crafted constraints. Further research is therefore needed into a physical model suited to underwater image restoration, reducing the dependence of restoration quality on manual design while improving the efficiency and accuracy of underwater image restoration.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: the underwater image restoration method, apparatus, device and storage medium provided herein obtain the scene depth of the underwater image, estimate the relevant parameters of the underwater image, and restore the original underwater image by inverting the degradation through a constructed physical imaging model, thereby reducing the dependence on manual design and improving the efficiency and accuracy of underwater image restoration.
In order to solve the technical problems, the invention provides an underwater image restoration method, which comprises the following steps:
acquiring an original underwater image, dividing the original underwater image into blocks, and obtaining a background light value of the original underwater image based on pixel values of the divided blocks;
constructing a scene depth estimation model, so that the scene depth estimation model carries out scene depth estimation on the original underwater image, and outputting scene depth;
performing pixel screening on a scene depth map corresponding to the scene depth, calculating a backscattering component estimate from the channel values of the screened pixels, and calculating a backscattering component value from the backscattering component estimate, the scene depth and the background light value;
acquiring the median of the normalized residual energy ratio in the original underwater image, taking the median as the attenuation parameter of the direct-component transmittance, and substituting the attenuation parameter and the scene depth into a preset transmittance formula to obtain the direct-component transmittance;
and constructing a physical imaging model, substituting the background light value, the back scattering component value and the direct component transmittance into the physical imaging model, and performing inversion degradation on the physical imaging model to obtain a first underwater recovery image.
As a possible implementation manner, the constructing a scene depth estimation model, so that the scene depth estimation model performs scene depth estimation on the original underwater image, and outputs scene depth, specifically:
acquiring pixel values of all channels of the original underwater image in an RGB color space, and calculating and obtaining a brightness value and saturation of the original underwater image in an HSI color space according to a preset color space conversion formula;
constructing a first scene depth estimation model, and inputting the brightness value and the saturation into the first scene depth estimation model so that the first scene depth estimation model outputs a first scene depth;
Constructing a second scene depth estimation model, and inputting the pixel value into the second scene depth estimation model so that the second scene depth estimation model outputs a second scene depth;
and constructing a linear weighted fusion model so that the linear weighted fusion model fuses the first scene depth and the second scene depth to obtain the scene depth.
As a possible implementation manner, pixel point screening is performed on a scene depth map corresponding to the scene depth, and a back scattering component predicted value is calculated according to channel values corresponding to each screened pixel point, which specifically includes:
performing dimension conversion on a scene depth map corresponding to the scene depth to obtain a one-dimensional vector of the scene depth map, and sequencing scene depth values in the one-dimensional vector according to a preset sequence to obtain a scene depth ordered sequence;
and carrying out subinterval division on the scene depth ordered sequence, screening the corresponding pixel points in each subinterval, and obtaining a channel value corresponding to the screened pixel points to obtain a back scattering component estimated value.
As a possible implementation manner, the block dividing is performed on the original underwater image, and the background light value of the original underwater image is obtained based on the pixel value of the divided block, which specifically includes:
Dividing the original underwater image into a first preset number of image blocks, calculating the pixel mean value of each image block, selecting a first image block with the largest pixel mean value in all image blocks, carrying out iterative division on the first image block until the size of a divided second image block meets the preset size, calculating the pixel mean value of the second image block, and taking the mean value of the second image block as the background light value of the original underwater image.
As a possible implementation manner, after obtaining the first underwater restored image, the method further includes:
dividing the first underwater restored image into a plurality of local subregions, and performing contrast-limited equalization on each local subregion so that the histogram of each local subregion meets a preset contrast threshold, thereby obtaining a second underwater restored image;
according to a white balance method, calculating the adjustment gain corresponding to each RGB channel pixel value in the second underwater restored image, and adjusting the pixel values of the second underwater restored image based on the adjustment gains to obtain the final underwater restored image.
The embodiment of the invention also provides an underwater image restoration device, which comprises: the device comprises a background light value acquisition module, a scene depth acquisition module, a back scattering component value acquisition module, a direct component transmittance acquisition module and an underwater restoration image output module;
The background light value acquisition module is used for acquiring an original underwater image, dividing the original underwater image into blocks, and obtaining a background light value of the original underwater image based on pixel values of the divided blocks;
the scene depth obtaining module is used for constructing a scene depth estimation model so that the scene depth estimation model carries out scene depth estimation on the original underwater image and outputs scene depth;
the back scattering component value acquisition module is used for screening pixels of a scene depth map corresponding to the scene depth, calculating a back scattering component predicted value according to the channel value corresponding to each screened pixel, and calculating a back scattering component value according to the back scattering component predicted value, the scene depth and the background light value;
the direct component transmittance obtaining module is used for obtaining a median value of the standardized residual energy ratio in the original underwater image, taking the median value as an attenuation parameter of the direct component transmittance, and inputting the attenuation parameter and the scene depth into a preset transmittance formula to obtain the direct component transmittance;
the underwater recovery image output module is used for constructing a physical imaging model, substituting the background light value, the back scattering component value and the direct component transmittance into the physical imaging model, and carrying out inversion degradation on the physical imaging model to obtain a first underwater recovery image.
As a possible implementation manner, the scene depth obtaining module is configured to construct a scene depth estimation model, so that the scene depth estimation model performs scene depth estimation on the original underwater image, and outputs a scene depth, where the scene depth is specifically:
acquiring pixel values of all channels of the original underwater image in an RGB color space, and calculating and obtaining a brightness value and saturation of the original underwater image in an HSI color space according to a preset color space conversion formula;
constructing a first scene depth estimation model, and inputting the brightness value and the saturation into the first scene depth estimation model so that the first scene depth estimation model outputs a first scene depth;
constructing a second scene depth estimation model, and inputting the pixel value into the second scene depth estimation model so that the second scene depth estimation model outputs a second scene depth;
and constructing a linear weighted fusion model so that the linear weighted fusion model fuses the first scene depth and the second scene depth to obtain the scene depth.
As a possible implementation manner, the backscattering component value obtaining module is configured to perform pixel screening on a scene depth map corresponding to the scene depth, and calculate a backscattering component predicted value according to a channel value corresponding to each screened pixel, where the predicted value specifically is:
Performing dimension conversion on a scene depth map corresponding to the scene depth to obtain a one-dimensional vector of the scene depth map, and sequencing scene depth values in the one-dimensional vector according to a preset sequence to obtain a scene depth ordered sequence;
and carrying out subinterval division on the scene depth ordered sequence, screening the corresponding pixel points in each subinterval, and obtaining a channel value corresponding to the screened pixel points to obtain a back scattering component estimated value.
As a possible implementation manner, the background light value acquisition module is configured to divide the original underwater image into blocks and obtain the background light value of the original underwater image based on the pixel values of the divided blocks, specifically:
dividing the original underwater image into a first preset number of image blocks, calculating the pixel mean value of each image block, selecting a first image block with the largest pixel mean value in all image blocks, carrying out iterative division on the first image block until the size of a divided second image block meets the preset size, calculating the pixel mean value of the second image block, and taking the mean value of the second image block as the background light value of the original underwater image.
As a possible implementation manner, the underwater image restoration device provided by the embodiment of the present invention further includes: an underwater restoration image processing module;
the underwater restored image processing module is used for dividing the first underwater restored image into a plurality of local subregions and performing contrast-limited equalization on each local subregion so that the histogram of each local subregion meets a preset contrast threshold, obtaining a second underwater restored image;
the underwater restored image processing module is further used for calculating, according to a white balance method, the adjustment gains corresponding to the RGB channel pixel values in the second underwater restored image, and adjusting the pixel values of the second underwater restored image based on the adjustment gains to obtain the final underwater restored image.
The embodiment of the invention also provides a terminal device, which comprises a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, wherein the processor realizes the underwater image restoration method according to any one of the above when executing the computer program.
The embodiment of the invention also provides a computer readable storage medium, which comprises a stored computer program, wherein the equipment where the computer readable storage medium is located is controlled to execute the underwater image restoration method according to any one of the above when the computer program runs.
Compared with the prior art, the underwater image restoration method, device and storage medium of the invention have the following beneficial effects:
the background light value of the original underwater image is obtained, a scene depth estimation model is constructed to obtain the scene depth of the original underwater image, relevant parameters of the original underwater image such as the backscattering component estimate, the backscattering component value and the direct-component transmittance are estimated from the scene depth, and the obtained background light value, backscattering component value and direct-component transmittance are substituted into the constructed physical imaging model to invert the degradation and obtain the first underwater restored image. Compared with the prior art, the technical scheme obtains a scene depth of higher accuracy and predicts the backscattering component estimate, the backscattering component value, the direct-component transmittance and other relevant parameters of the original underwater image from that scene depth, so no constraints or penalties need to be designed manually for the parameter prediction process and the operation is simple. Meanwhile, the degradation is inverted directly through the constructed physical imaging model using the obtained parameters, the amount of computation is small, and the efficiency and accuracy of obtaining the underwater restored image are improved.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of an underwater image restoration method according to the present invention;
FIG. 2 is a schematic view of an embodiment of an underwater image restoration device according to the present invention;
FIG. 3 is a schematic view of a first scene depth estimated based on the first scene depth estimation model S-I without an artificial light source according to an embodiment of the present invention;
FIG. 4 is a schematic view of a first scene depth estimated based on a first scene depth estimation model S-I under an artificial light source according to an embodiment of the present invention;
FIG. 5 is a schematic view of a second scene depth estimated based on a second scene depth estimation model BG-R according to an embodiment of the invention;
fig. 6 is a schematic view of the fused scene depth obtained with the linear weighted fusion model according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of an underwater image restoration method provided by the present invention. As shown in fig. 1, the method includes steps 101 to 105, specifically as follows:
step 101: the method comprises the steps of obtaining an original underwater image, dividing the original underwater image into blocks, and obtaining a background light value of the original underwater image based on pixel values of the divided blocks.
In one embodiment, the background light of the original underwater image is calculated using a quadtree hierarchical search: the original underwater image is uniformly divided into a first preset number of image blocks, the pixel mean of each image block is calculated, the first image block with the largest pixel mean is selected, and the first image block is divided iteratively until the size of the resulting second image block meets the preset size; the pixel mean of the second image block is then taken as the background light value of the original underwater image.
As an illustration of this embodiment, the original underwater image is uniformly divided into four blocks, the mean of the RGB three-channel pixel values of each block is calculated, and the block with the largest pixel mean among the four is selected as the first block. The first block is again uniformly divided into four blocks, and this procedure is repeated iteratively until the divided block meets the set size threshold; that block is recorded as the second block, the division search ends, and the mean of the RGB three-channel pixel values of the second block is taken as the background light value of the original underwater image, where the set threshold is 50.
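To make the quadtree step concrete, here is a minimal Python sketch of the hierarchical search described above (function and parameter names are illustrative, not from the patent; `min_size` plays the role of the size threshold of 50):

```python
import numpy as np

def estimate_background_light(img, min_size=50):
    """Quadtree hierarchical search for the background light value.

    `img` is an H x W x 3 RGB array in [0, 255]. This is a sketch of
    the embodiment above, not the patent's reference implementation.
    """
    block = img.astype(np.float64)
    while min(block.shape[0], block.shape[1]) > min_size:
        h, w = block.shape[0] // 2, block.shape[1] // 2
        quads = [block[:h, :w], block[:h, w:], block[h:, :w], block[h:, w:]]
        # Keep the quadrant with the largest mean pixel value.
        block = max(quads, key=lambda q: q.mean())
    # Per-channel mean of the final block is the background light value.
    return block.reshape(-1, 3).mean(axis=0)
```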
Step 102: and constructing a scene depth estimation model so that the scene depth estimation model carries out scene depth estimation on the original underwater image and outputs scene depth.
In an embodiment, the pixel values of each channel of the original underwater image in the RGB color space are obtained, and the brightness value and the saturation of the original underwater image in the HSI color space are calculated according to a preset color space conversion formula. Specifically, the image of the original underwater image in the RGB color space is converted into an image in the HSI color space, where HSI is a digital image model that perceives color through three basic feature quantities: hue H, saturation S and brightness L (intensity). The conversion formulas for the brightness L and the saturation S follow the standard HSI conversion:
R, G, B ∈ [0, 255];
(R′, G′, B′) = (R, G, B) / 255.0;
L = (R′ + G′ + B′) / 3;
S = 1 − min(R′, G′, B′) / L;
where R, G and B are the pixel values of each channel in the RGB color space, and R′, G′ and B′ are the normalized pixel values of R, G and B. L denotes the brightness value in the HSI color space and S the saturation value in the HSI color space.
In one embodiment, a first scene depth estimation model S-I is constructed, and the brightness value and the saturation are input into the first scene depth estimation model so that it outputs a first scene depth D_1. The constructed first scene depth estimation model S-I is:
D_1(x) = |L(x) − S(x)|.
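A short sketch of the S-I model, assuming the standard HSI conversion given above (names are illustrative):

```python
import numpy as np

def first_scene_depth(img):
    """S-I model: D_1(x) = |L(x) - S(x)| from normalized RGB."""
    rgb = img.astype(np.float64) / 255.0
    L = rgb.mean(axis=2)                       # HSI brightness
    S = 1.0 - rgb.min(axis=2) / (L + 1e-6)     # HSI saturation
    return np.abs(L - S)
```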
In an embodiment, the obtained original underwater image may be captured under an artificial light source or without an artificial light source. Referring to fig. 3 and 4, fig. 3 is a schematic view of the first scene depth estimated by the first scene depth estimation model S-I without an artificial light source, and fig. 4 is the first scene depth map estimated by the model S-I under an artificial light source. As shown in fig. 3, without an artificial light source the first scene depth map estimated by the first scene depth estimation model agrees well with the depth perceived by the human eye: the region closer to the camera in the real scene corresponds to the estimated low-pixel-value block at the bottom of the first scene depth map, and the region farther from the camera corresponds to the estimated high-pixel-value block at the top. Under artificial lighting, as shown in fig. 4, the first scene depth estimated by the model suppresses the influence of the light source to a certain extent, improving the accuracy of scene depth estimation.
In the embodiment, the first scene depth estimation model S-I is constructed based on the saturation and brightness of the original underwater image and the scene depth change rule, so that the accuracy and the universality of scene depth estimation are improved.
In one embodiment, a second scene depth estimation model BG-R is constructed based on the image of the original underwater image in the RGB color space, and the pixel values are input into the second scene depth estimation model so that it outputs a second scene depth D_2. The constructed model BG-R takes the form
D_2(x) = min over y ∈ Ω_r(x) of ( c_1 + c_2 · M_BG(y) + c_3 · V_R(y) );
where M_BG is the maximum pixel value of the blue-green channels in the RGB color space, V_R is the red channel pixel value, and c_1, c_2, c_3 are weight constants, set to 0.53214829, 0.51309827 and −0.91066194 respectively, following the parameter values proposed by the Song method. The min operator denotes optimization by minimum filtering, where Ω_r(x) is an r × r neighborhood centered on x and r takes the value 5.
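A sketch of the BG-R model under the linear form reconstructed above; the RGB channel order and the use of erosion as the r × r minimum filter are implementation assumptions:

```python
import cv2
import numpy as np

def second_scene_depth(img, r=5):
    """BG-R model with the Song-style coefficients quoted above."""
    c1, c2, c3 = 0.53214829, 0.51309827, -0.91066194
    rgb = img.astype(np.float64) / 255.0         # assumes RGB channel order
    m_bg = np.maximum(rgb[..., 1], rgb[..., 2])  # max of green/blue channels
    v_r = rgb[..., 0]                            # red channel
    d2 = c1 + c2 * m_bg + c3 * v_r
    # Erosion acts as a minimum filter over an r x r neighborhood.
    return cv2.erode(d2, np.ones((r, r), np.uint8))
```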
Referring to fig. 5, fig. 5 is a schematic diagram of the second scene depth estimated by the second scene depth estimation model BG-R. As shown in fig. 5, for an underwater image with an artificial light source, the BG-R model effectively reduces the influence of the light source and improves the accuracy of the second scene depth estimation. In the near-scene region strongly affected by the light source, the red channel of the pixels is compensated in the RGB color space, so the red channel values show no obvious attenuation relative to the blue-green channel values; the difference between the red component and the blue-green component is therefore small, which better reflects the second scene depth near the camera under artificial lighting. In the far-scene region less affected by the light source, visible light of different wavelengths attenuates normally: red light, having the longest wavelength, attenuates most severely, while the blue-green channels attenuate relatively slowly, so the difference between the maximum blue-green channel value and the red channel value grows with the depth of field, enabling accurate prediction of the far-scene depth.
In an embodiment, a linear weighted fusion model is constructed, so that the linear weighted fusion model fuses the first scene depth and the second scene depth to obtain the scene depth. The fusion formula is as follows:
D(x, y) = α · D_1(x, y) + β · D_2(x, y);
where α and β are weight values satisfying α + β = 1.
In this embodiment, the linear weighted fusion model fuses the first scene depth estimation model S-I and the second scene depth estimation model BG-R, further improving the accuracy and universality of scene depth estimation; since all the models involved are linear, the computational efficiency of scene depth estimation is also improved.
In one embodiment, the scene depth obtained after fusing the first scene depth and the second scene depth is optimized with guided filtering, so that it reflects the underwater scene depth more accurately. The guided filtering formula is as follows:
D_final(x, y) = Guidedfilter(I_gray, p, r, eps);
where I_gray is the grayscale image of the original underwater image, p is the fused scene depth D, r is the local window radius with value 17, and eps is a regularization parameter with value 0.001.
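A sketch of the fusion and guided-filtering step; α = 0.5 is an assumed example weight, and cv2.ximgproc.guidedFilter requires the opencv-contrib-python package:

```python
import cv2
import numpy as np

def fuse_scene_depth(img, d1, d2, alpha=0.5, r=17, eps=0.001):
    """D = alpha*D1 + (1 - alpha)*D2, refined by guided filtering."""
    d = alpha * d1 + (1.0 - alpha) * d2
    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY).astype(np.float32)
    # Guide: grayscale of the original image; src: fused scene depth.
    return cv2.ximgproc.guidedFilter(gray, d.astype(np.float32), r, eps)
```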
Referring to fig. 6, fig. 6 is a schematic view of the fused scene depth obtained with the linear weighted fusion model. As shown in fig. 6, the scene depth obtained by fusing the first scene depth estimated by the model S-I with the second scene depth estimated by the model BG-R performs well both in near-scene regions strongly affected by a light source and in far-scene regions less affected by it.
Step 103: and screening pixels of a scene depth map corresponding to the scene depth, calculating a back scattering component predicted value according to the channel value corresponding to each screened pixel, and calculating a back scattering component value according to the back scattering component predicted value, the scene depth and the background light value.
In an embodiment, performing dimension conversion on a scene depth map corresponding to the scene depth to obtain a one-dimensional vector of the scene depth map, and sequencing scene depth values in the one-dimensional vector according to a preset sequence to obtain a scene depth ordered sequence. Specifically, since the scene depth acquired in step 102 is two-dimensional (w, h), the two-dimensional scene depth is converted into a single-dimensional (w×h, 1) scene depth, and the scene depths are arranged in order from large to small according to the scene depth values.
In an embodiment, the scene depth ordered sequence is divided into subintervals, the pixels in each subinterval are screened, and the channel values of the screened pixels are obtained, giving the backscattering component estimate. Specifically, the scene depth ordered sequence is evenly divided into 10 subintervals; in each subinterval, the pixels of the RGB three channels are obtained and screened, keeping the top 1% of pixels, and each channel value of the screened pixels is taken as the backscattering component estimate.
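A sketch of the screening step. The patent does not state which 1% of pixels is kept; selecting the darkest pixels per subinterval (a common choice for backscatter estimation) is an assumption here:

```python
import numpy as np

def backscatter_points(img, depth, n_bins=10, frac=0.01):
    """Flatten and sort the depth map (descending), split into 10
    subintervals, and keep ~1% of pixels per subinterval."""
    rgb = img.reshape(-1, 3).astype(np.float64)
    flat_depth = depth.reshape(-1)
    order = np.argsort(-flat_depth)            # descending scene depth
    d_pts, b_pts = [], []
    for chunk in np.array_split(order, n_bins):
        k = max(1, int(len(chunk) * frac))
        # Assumption: keep the darkest pixels (lowest channel sum).
        keep = chunk[np.argsort(rgb[chunk].sum(axis=1))[:k]]
        d_pts.append(flat_depth[keep])
        b_pts.append(rgb[keep])
    return np.concatenate(d_pts), np.concatenate(b_pts)
```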
In one embodiment, the backscattering component B_c is estimated accurately using a nonlinear least-squares fit based on the obtained backscattering component estimate, the background light value acquired in step 101 and the scene depth acquired in step 102. Because of the uncertainty in the backscattering component, an error term is also included in the fit. The fitted backscatter model takes the form
B̂_c(x) = B_c^∞ · (1 − e^(−β_b · d(x))) + J′_c · e^(−β_d · d(x));
where B_c^∞ is the background light value and β_b, β_d and J′_c are all unknown scalar parameters.
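A sketch of the fit for one color channel, using the reconstructed model above with scipy.optimize.curve_fit (initial guesses are arbitrary illustrative values):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_backscatter(d_pts, b_pts, b_inf):
    """Nonlinear least-squares fit of the backscatter model for one
    channel; b_inf is that channel's background light value."""
    def model(d, beta_b, j_prime, beta_d):
        return b_inf * (1.0 - np.exp(-beta_b * d)) + j_prime * np.exp(-beta_d * d)

    params, _ = curve_fit(model, d_pts, b_pts, p0=(1.0, 0.5, 1.0), maxfev=10000)
    beta_b, j_prime, beta_d = params
    return model(d_pts, beta_b, j_prime, beta_d), params
```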
In this embodiment, fitting the backscattering component estimate together with the accurately estimated scene depth and the background light by nonlinear least squares yields a more accurate backscattering component value, which indirectly guarantees the accuracy of the direct component in the subsequent physical imaging model and further improves the clarity of the restored image obtained by inverting that model.
Step 104: and acquiring a median value of the normalized residual energy ratio in the original underwater image, taking the median value as an attenuation parameter of the direct component transmittance, and inputting the attenuation parameter and the scene depth into a preset transmittance formula to obtain the direct component transmittance.
In one embodiment, the wavelength values of light for the RGB three channels of the original underwater image are obtained; based on the normalized residual energy ratio Nrer(λ) defined for Type-I ocean water, the normalized residual energy ratio corresponding to each wavelength is selected, and the median Nrer(λ) of each RGB channel is taken as the attenuation parameter of the direct-component transmittance.
In an embodiment, the obtained attenuation parameter of the direct-component transmittance and the scene depth obtained in step 102 are substituted into the transmittance formula to obtain a transmittance map, from which the direct-component transmittance is obtained; the direct-component transmittance is also optimized with guided filtering. The transmittance formula is as follows:
t_c(x) = Nrer(λ)^d(x) = e^(−β(λ) · d(x));
where t_c(x) is the transmittance map.
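A one-step sketch of the transmittance map; `nrer` holds the per-channel median attenuation parameters chosen above:

```python
import numpy as np

def direct_transmittance(depth, nrer):
    """t_c(x) = Nrer(lambda)^d(x), computed per channel and stacked."""
    return np.stack([np.power(n, depth) for n in nrer], axis=-1)
```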
Step 105: and constructing a physical imaging model, substituting the background light value, the back scattering component value and the direct component transmittance into the physical imaging model, and performing inversion degradation on the physical imaging model to obtain a first underwater recovery image.
In one embodiment, a physical imaging model is constructed, the background light value, the backscattering component value and the direct component transmittance are substituted into the physical imaging model, and inversion degradation is performed on the physical imaging model, where the formula of the physical imaging model is as follows:
I_c(x) = D_c(x) + B_c(x), c ∈ {R, G, B};
where c denotes the color channel, I_c the original underwater image, D_c the direct attenuation component, and B_c the backscattering component.
In one embodiment, the direct attenuation component D_c can be expressed as:
D_c(x) = J_c(x) · t_c^d(x);
where J_c(x) is the restored clear underwater image without color cast, and t_c^d is the direct-component transmittance.
In one embodiment, the light intensity expression of the backscattering component B_c can be written as:
B_c(x) = B_c^∞ · (1 − t_c^b(x));
where B_c^∞ is the background light value, and t_c^b is the backscattering-component transmittance.
In one embodiment, the backscattering-component transmittance t_c^b and the direct-component transmittance t_c^d are defined according to the Beer-Lambert law as follows:
t_c(x) = Nrer(λ)^d(x) = e^(−β(λ) · d(x));
where t_c(x) is the transmittance map and Nrer(λ) is the normalized residual energy ratio, whose magnitude is determined by the wavelength of light and which represents the fraction of light energy remaining after the light travels a unit distance in water. β(λ) is the attenuation coefficient, covering the attenuation caused by absorption and scattering; d(x) is the scene depth, i.e. the distance between the light reflected by the target object and the camera receiving port.
In this embodiment, to construct the underwater physical imaging model, different transmittances are used for the direct component and the backscattering component.
In one embodiment, with the degradation inverted through the constructed physical imaging model, the final expression of the physical imaging model is
I_c(x) = J_c(x) · e^(−β_d(λ) · d(x)) + B_c^∞ · (1 − e^(−β_b(λ) · d(x)));
where β_d(λ) is the attenuation coefficient of the direct component and β_b(λ) is the attenuation coefficient of the backscattering component.
In one embodiment, the first underwater restored image J_c is obtained as
J_c(x) = ( I_c(x) − B_c^∞ · (1 − e^(−β_b(λ) · d(x))) ) · e^(β_d(λ) · d(x)).
in one embodiment, after the first underwater recovery image is obtained, the contrast of the first underwater recovery image is further improved by using a limited contrast histogram equalization method.
In an embodiment, the first underwater restored image is divided into a plurality of local subregions using contrast-limited histogram equalization, and contrast-limited equalization is performed on each local subregion so that its histogram meets a preset contrast threshold, giving the second underwater restored image. Specifically, the image is divided into 8 non-overlapping local subregions of size M × N, and contrast-limited equalization is applied to each of them: a histogram is generated and counted from the pixel values of each local subregion, and a contrast threshold of 2 is set for each subregion; the histogram of each local subregion is clipped at the contrast threshold, the pixel counts exceeding the threshold are collected and redistributed over that subregion's histogram, and these steps are iterated until the finally redistributed histogram satisfies the threshold condition; the center pixel of each local subregion is then obtained from its final histogram, and the gray values are reconstructed from the center pixels by bilinear interpolation to obtain the second underwater restored image.
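OpenCV's CLAHE performs exactly this clipping, redistribution and bilinear interpolation; applying it to the L channel of a Lab conversion, and approximating the 8-region split with an 8 × 8 tile grid, are assumptions of this sketch:

```python
import cv2
import numpy as np

def limit_contrast(img_float, clip=2.0, tiles=8):
    """Contrast-limited equalization with a clip limit of 2 over a
    grid of local subregions; img_float is H x W x 3 RGB in [0, 1]."""
    img8 = (np.clip(img_float, 0.0, 1.0) * 255).astype(np.uint8)
    lab = cv2.cvtColor(img8, cv2.COLOR_RGB2LAB)
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(tiles, tiles))
    lab[..., 0] = clahe.apply(lab[..., 0])
    return cv2.cvtColor(lab, cv2.COLOR_LAB2RGB)
```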
In an embodiment, after the second underwater restored image is obtained, its colors are further optimized according to a white balance method.
In an embodiment, according to the white balance method, the adjustment gain corresponding to each RGB channel pixel value in the second underwater restored image is calculated, and the pixel values of the second underwater restored image are adjusted based on the adjustment gains to obtain the final underwater restored image. Specifically, the second underwater restored image is converted from the RGB color space to the YCrCb color space, where Y denotes the luminance signal, Cr the red chrominance component and Cb the blue chrominance component. After the color space conversion, the means M_r, M_b and variances D_r, D_b of the red chrominance component Cr and the blue chrominance component Cb are selected and calculated with respect to the reference white points. The variance formulas are as follows:
D_r = Σ_{i,j} |C_r(i,j) − M_r| / N;
D_b = Σ_{i,j} |C_b(i,j) − M_b| / N;
where N is the number of pixels of the image J_c.
In one embodiment, based on the calculated means M_r, M_b and variances D_r, D_b of the red chrominance component C_r and the blue chrominance component C_b, the corresponding near-white region is determined. The near-white region of the red chrominance component C_r and the blue chrominance component C_b satisfies:
|C_r(i,j) − (1.5 × M_r + D_r × sign(M_r))| < 1.5 × D_r;
|C_b(i,j) − (M_b + D_b × sign(M_b))| < 1.5 × D_b.
in one embodiment, a luminance matrix RL based on a reference white spot and a pixel point distinguishing condition are set, pixels in a near white area are distinguished, if the pixels in the near white area meet the pixel point distinguishing condition, all the pixels meeting the pixel point distinguishing condition are set as the reference white spot, the luminance component Y of all the pixels meeting the pixel point distinguishing condition is obtained, and the luminance component Y is substituted into the corresponding position of the luminance matrix RL; if the existing pixel points in the near-white region do not meet the pixel point distinguishing condition, the brightness component of the corresponding position of the pixel point which does not meet the pixel point distinguishing condition is set to 0 in the brightness matrix RL.
In one embodiment, after the luminance matrix RL is set, the luminance values in RL are sorted in descending order, the top 10% of the luminance values are selected, and the minimum of these selected values is recorded as L_min. The luminance matrix RL is then readjusted as follows: if RL(i,j) < L_min, then RL(i,j) = 0; otherwise RL(i,j) = 1. The adjusted luminance matrix is thus obtained.
In an embodiment, the pixel values R, G and B of the RGB three channels of the second underwater restored image are multiplied by the adjusted luminance matrix to obtain new RGB three-channel pixel values R_2, G_2 and B_2; the corresponding means R_mean, G_mean and B_mean are calculated, and the maximum luminance component Y_max is calculated at the same time.
In one embodiment, based on the maximum luminance component and the channel means R_mean, G_mean, B_mean of the RGB three-channel pixel values, the adjustment gains R_gain, G_gain, B_gain of the RGB three channels of the second underwater restored image are calculated as follows:
R_gain = Y_max / R_mean;
G_gain = Y_max / G_mean;
B_gain = Y_max / B_mean;
In an embodiment, the pixel values of the second underwater restored image are adjusted with the RGB three-channel gains R_gain, G_gain, B_gain to obtain the final underwater restored image. The adjustment formulas are as follows:
R_final = R × R_gain;
G_final = G × G_gain;
B_final = B × B_gain;
where R_final, G_final and B_final denote the pixel values of each channel in the RGB color space after white balance processing.
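The whole white-balance pass can be sketched end to end; the near-white thresholds follow the expressions above, and since OpenCV's Cr/Cb channels are offset to be non-negative, sign(M_r) and sign(M_b) are effectively 1 here:

```python
import cv2
import numpy as np

def white_balance(img):
    """Reference-white white balance in YCrCb; img is uint8 RGB."""
    rgb = img.astype(np.float64)
    ycrcb = cv2.cvtColor(img, cv2.COLOR_RGB2YCrCb).astype(np.float64)
    y, cr, cb = ycrcb[..., 0], ycrcb[..., 1], ycrcb[..., 2]
    mr, mb = cr.mean(), cb.mean()
    dr = np.abs(cr - mr).mean()
    db = np.abs(cb - mb).mean()
    near_white = (np.abs(cr - (1.5 * mr + dr * np.sign(mr))) < 1.5 * dr) & \
                 (np.abs(cb - (mb + db * np.sign(mb))) < 1.5 * db)
    l_min = np.percentile(y[near_white], 90)   # brightest 10% of candidates
    mask = near_white & (y >= l_min)           # reference white points
    gains = y.max() / rgb[mask].mean(axis=0)   # R_gain, G_gain, B_gain
    return np.clip(rgb * gains, 0, 255).astype(np.uint8)
```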
In summary, the underwater image restoration method provided by this embodiment calculates the background light with a quadtree hierarchical search; constructs a linear weighted model to fuse the scene depths produced by the two scene depth estimation models and refines the result with guided filtering to obtain an accurate scene depth, a process that is simple, fast and accurate; fits an accurate backscattering component value by nonlinear least squares from the backscattering component estimate, the accurately estimated scene depth and the background light value, greatly improving the clarity of the restored image; calculates the direct-component transmittance from the scene depth map and the normalized residual energy ratio; and, based on the obtained parameters, inverts the degradation through the constructed physical imaging model to obtain the first underwater restored image. Finally, contrast-limited histogram equalization and white balance further improve the image brightness and optimize the image colors, so that the final underwater restored image is clearer.
Example 2
Referring to fig. 2, fig. 2 is a schematic diagram of an embodiment of an underwater image restoration device provided by the present invention. As shown in fig. 2, the device includes a background light value acquisition module 201, a scene depth acquisition module 202, a backscattering component value acquisition module 203, a direct component transmittance acquisition module 204, and an underwater restored image output module 205, specifically as follows:
the background light value acquisition module 201 is configured to acquire an original underwater image, divide the original underwater image into blocks, and obtain the background light value of the original underwater image based on the pixel values of the divided blocks.
The scene depth obtaining module 202 is configured to construct a scene depth estimation model, so that the scene depth estimation model performs scene depth estimation on the original underwater image, and outputs scene depth.
The backscattering component value acquisition module 203 is configured to perform pixel screening on the scene depth map corresponding to the scene depth, calculate a backscattering component estimate from the channel values of the screened pixels, and calculate the backscattering component value from the backscattering component estimate, the scene depth and the background light value.
The direct component transmittance obtaining module 204 is configured to obtain a median value of the normalized residual energy ratio in the original underwater image, take the median value as an attenuation parameter of the direct component transmittance, and input the attenuation parameter and the scene depth into a preset transmittance formula to obtain the direct component transmittance.
The underwater recovery image output module 205 is configured to construct a physical imaging model, substitute the background light value, the backscattering component value and the direct component transmittance into the physical imaging model, and perform inversion degradation on the physical imaging model to obtain a first underwater recovery image.
In an embodiment, the scene depth obtaining module 202 is configured to construct a scene depth estimation model, so that the scene depth estimation model performs scene depth estimation on the original underwater image, and outputs the scene depth. Specifically, obtaining pixel values of all channels of the original underwater image in an RGB color space, and calculating and obtaining a brightness value and saturation of the original underwater image in an HSI color space according to a preset color space conversion formula; constructing a first scene depth estimation model, and inputting the brightness value and the saturation into the first scene depth estimation model so that the first scene depth estimation model outputs a first scene depth; constructing a second scene depth estimation model, and inputting the pixel value into the second scene depth estimation model so that the second scene depth estimation model outputs a second scene depth; and constructing a linear weighted fusion model so that the linear weighted fusion model fuses the first scene depth and the second scene depth to obtain the scene depth.
In an embodiment, the backscattering component value acquisition module 203 is configured to perform pixel screening on the scene depth map corresponding to the scene depth and calculate a backscattering component estimate from the channel values of the screened pixels. Specifically, dimension conversion is performed on the scene depth map to obtain a one-dimensional vector, and the scene depth values in the one-dimensional vector are sorted in a preset order to obtain the scene depth ordered sequence; the sequence is divided into subintervals, the pixels in each subinterval are screened, and the channel values of the screened pixels are obtained to give the backscattering component estimate.
In an embodiment, the background light value acquisition module 201 is configured to divide the original underwater image into blocks and obtain the background light value of the original underwater image from the pixel values of the divided blocks. Specifically, the original underwater image is divided into a first preset number of image blocks, the pixel mean of each image block is calculated, the first image block with the largest pixel mean is selected and divided iteratively until the size of the resulting second image block meets the preset size; the pixel mean of the second image block is then taken as the background light value of the original underwater image.
In one embodiment, the underwater image restoration device further includes an underwater restored image processing module, which is configured to divide the first underwater restored image into a plurality of local subregions and perform contrast-limited equalization on each local subregion so that the histogram of each local subregion meets a preset contrast threshold, obtaining a second underwater restored image; and, according to a white balance method, to calculate the adjustment gain corresponding to each RGB channel pixel value in the second underwater restored image and adjust the pixel values of the second underwater restored image based on the adjustment gains to obtain the final underwater restored image.
It will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding process in the foregoing method embodiment for the specific working process of the above-described apparatus, which is not described in detail herein.
It should be noted that the above embodiment of the underwater image restoration device is merely illustrative, and the modules described as separate components may or may not be physically separated, and components displayed as modules may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
On the basis of the above-mentioned embodiments of the underwater image restoration method, another embodiment of the present invention provides an underwater image restoration terminal device, which includes a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, where the processor executes the computer program to implement the underwater image restoration method of any one of the embodiments of the present invention.
Illustratively, in this embodiment the computer program may be partitioned into one or more modules, which are stored in the memory and executed by the processor to perform the present invention. The one or more modules may be a series of computer program instruction segments capable of performing a specific function for describing the execution of the computer program in the underwater image restoration terminal device.
The underwater image restoration terminal equipment can be computing equipment such as a desktop computer, a notebook computer, a palm computer, a cloud server and the like. The underwater image restoration terminal device may include, but is not limited to, a processor, a memory.
The processor may be a central processing unit (Central Processing Unit, CPU), other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. The general purpose processor may be a microprocessor or the processor may be any conventional processor or the like, which is a control center of the underwater image restoration terminal apparatus, and connects the respective parts of the entire underwater image restoration terminal apparatus using various interfaces and lines.
The memory may be used to store the computer program and/or the modules, and the processor implements the various functions of the underwater image restoration terminal device by running or executing the computer program and/or modules stored in the memory and calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, application programs required for at least one function, and the like, and the data storage area may store data created according to the use of the device. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another volatile solid-state memory device.
On the basis of the embodiments of the underwater image restoration method, another embodiment of the present invention provides a storage medium, where the storage medium includes a stored computer program, and when the computer program runs, the device where the storage medium is located is controlled to execute the underwater image restoration method according to any one of the embodiments of the present invention.
In this embodiment, the storage medium is a computer-readable storage medium, and the computer program includes computer program code, where the computer program code may be in source code form, object code form, an executable file, some intermediate form, and so on. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be added to or removed from as appropriate according to the requirements of legislation and patent practice in a given jurisdiction; for example, in certain jurisdictions, in accordance with legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunications signals.
In summary, in the underwater image restoration method, device, equipment and storage medium described above, the background light value is obtained from the pixel values of image blocks in the original underwater image. At the same time, scene depth estimation models are constructed to obtain the scene depth, and related parameters of the original underwater image, such as the back scattering component predicted value, the back scattering component value and the direct component transmittance, are estimated based on that scene depth. The background light value, the back scattering component value and the direct component transmittance are then substituted into the constructed physical imaging model, and the degradation process is inverted to obtain the first underwater restored image. Compared with the prior art, the technical scheme of the present invention predicts the relevant parameters of the underwater image from its scene depth and inverts the degradation through the constructed physical imaging model to restore the underwater image, thereby reducing the dependence on manual work and improving the efficiency and accuracy of underwater image restoration.
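To make the inversion step concrete, the following Python sketch applies the standard underwater imaging equation I = J * t + Bs and solves for the restored radiance J by subtracting the backscatter component and dividing out the direct component transmittance. It is a minimal illustration under that assumed model form, not the patented implementation; the function name, the `t_min` clamp and the [0, 1] value range are illustrative choices.

```python
import numpy as np

def invert_imaging_model(img, backscatter, transmittance, t_min=0.1):
    """Invert a simplified underwater imaging model I = J * t + Bs.

    img           -- original underwater image, HxWx3 float array in [0, 1]
    backscatter   -- estimated backscatter component, broadcastable to img
    transmittance -- direct-component transmittance, broadcastable to img
    t_min         -- illustrative lower clamp keeping the division stable
    """
    t = np.maximum(transmittance, t_min)   # avoid amplifying noise where t -> 0
    restored = (img - backscatter) / t     # remove scatter, undo attenuation
    return np.clip(restored, 0.0, 1.0)     # keep the result in displayable range
```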
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that modifications and substitutions can be made by those skilled in the art without departing from the technical principles of the present invention, and these modifications and substitutions should also be considered as being within the scope of the present invention.
Claims (8)
1. An underwater image restoration method, comprising:
acquiring an original underwater image, dividing the original underwater image into blocks, and obtaining a background light value of the original underwater image based on pixel values of the divided blocks;
acquiring pixel values of all channels of the original underwater image in an RGB color space, and calculating and obtaining a brightness value and saturation of the original underwater image in an HSI color space according to a preset color space conversion formula;
a first scene depth estimation model is constructed, the brightness value and the saturation are input into the first scene depth estimation model, so that the first scene depth estimation model outputs first scene depth, wherein the first scene depth estimation model is as follows:
$$d_1(x) = \left| L(x) - S(x) \right|$$

where $L(x)$ is the luminance value in the HSI color space, $S(x)$ is the saturation value in the HSI color space, and $\left|\cdot\right|$ denotes the absolute value;
constructing a second scene depth estimation model, and inputting the pixel value into the second scene depth estimation model so that the second scene depth estimation model outputs a second scene depth, wherein the second scene depth estimation model is as follows:
$$d_2(x) = \min_{y \in \Omega_r(x)} \left( \theta_0 + \theta_1\, m(y) + \theta_2\, v(y) \right)$$

where $m$ is the maximum pixel value over the blue and green channels in the RGB color space, $v$ is the red channel pixel value in the RGB color space, and $\theta_0$, $\theta_1$, $\theta_2$ are weight constants whose values are determined from the parameters proposed by the Song method, namely 0.53214829, 0.51309827 and -0.91066194, respectively; the $\min$ function represents optimization with minimum filtering, and $\Omega_r(x)$ is the $r$-neighborhood of $x$, with $r$ being 5;
constructing a linear weighted fusion model so that the linear weighted fusion model fuses the first scene depth and the second scene depth to obtain scene depth;
performing pixel screening on a scene depth map corresponding to the scene depth, calculating a back scattering component predicted value according to channel values corresponding to the screened pixels, and calculating a back scattering component value according to the back scattering component predicted value, the scene depth and the background light value;
acquiring a median value of the standardized residual energy ratio in the original underwater image, taking the median value as an attenuation parameter of the direct component transmittance, and inputting the attenuation parameter and the scene depth into a preset transmittance formula to obtain the direct component transmittance;
And constructing a physical imaging model, substituting the background light value, the back scattering component value and the direct component transmittance into the physical imaging model, and performing inversion degradation on the physical imaging model to obtain a first underwater recovery image.
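For readers tracing claim 1, the sketch below strings together the two depth estimates and their fusion. Two points are assumptions rather than claim text: the absolute-difference form of the first model (the claim fixes only the variables involved), and the equal fusion weight `w`. The second model uses the Song parameter values quoted in the claim; whether r = 5 denotes a radius or a window side is likewise an assumption (a radius is used here).

```python
import numpy as np
from scipy.ndimage import minimum_filter

# Weight constants quoted in claim 1 (attributed to the Song method).
THETA = (0.53214829, 0.51309827, -0.91066194)

def first_scene_depth(luminance, saturation):
    """First model: depth from |L - S| in HSI space (reconstructed form)."""
    return np.abs(luminance - saturation)

def second_scene_depth(rgb, r=5):
    """Second model: weighted blue-green maximum and red channel, min-filtered."""
    m = rgb[..., 1:3].max(axis=2)               # max pixel value over G and B
    v = rgb[..., 0]                             # red channel pixel value
    raw = THETA[0] + THETA[1] * m + THETA[2] * v
    return minimum_filter(raw, size=2 * r + 1)  # min filtering over the r-neighborhood

def fused_scene_depth(d1, d2, w=0.5):
    """Linear weighted fusion of the two estimates; w is a hypothetical weight."""
    return w * d1 + (1.0 - w) * d2
```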
2. The underwater image restoration method as claimed in claim 1, wherein the pixel point screening is performed on the scene depth map corresponding to the scene depth, and the back scattering component predicted value is calculated according to the channel value corresponding to each screened pixel point, specifically:
performing dimension conversion on the scene depth map corresponding to the scene depth to obtain a one-dimensional vector of the scene depth map, and sorting the scene depth values in the one-dimensional vector in a preset order to obtain an ordered scene depth sequence;
and dividing the ordered scene depth sequence into subintervals, screening the corresponding pixel points in each subinterval, and obtaining the channel values corresponding to the screened pixel points to obtain the back scattering component predicted value.
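One possible reading of claim 2 is sketched below. The screening rule, which the claim leaves open, is filled in with a common choice: keeping the darkest pixels of each depth subinterval as backscatter-dominated samples. `n_bins` and `frac` are hypothetical parameters.

```python
import numpy as np

def backscatter_candidates(depth_map, img, n_bins=10, frac=0.01):
    """Screen pixels per scene-depth subinterval (claim 2, sketched)."""
    flat_img = img.reshape(-1, 3)
    order = np.argsort(depth_map.ravel())        # 1-D vector, sorted by depth
    picked = []
    for chunk in np.array_split(order, n_bins):  # subinterval division
        k = max(1, int(len(chunk) * frac))
        # Assumed screening rule: keep the darkest pixels of the subinterval
        # and collect their channel values as backscatter samples.
        darkest = chunk[np.argsort(flat_img[chunk].sum(axis=1))[:k]]
        picked.append(flat_img[darkest])
    return np.concatenate(picked)                # channel values of screened pixels
```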
3. The method for restoring an underwater image according to claim 1, wherein the block division is performed on the original underwater image, and the background light value of the original underwater image is obtained based on the pixel value of the divided block, specifically:
Dividing the original underwater image into a first preset number of image blocks, calculating the pixel mean value of each image block, selecting a first image block with the largest pixel mean value in all image blocks, carrying out iterative division on the first image block until the size of a divided second image block meets the preset size, calculating the pixel mean value of the second image block, and taking the mean value of the second image block as the background light value of the original underwater image.
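Claim 3 describes a hierarchical search for the background light. The sketch below implements one plausible version; the 2 x 2 split reused at every iteration and the 16-pixel stopping side length are assumed presets, since the claim only requires a first preset number of blocks and a preset final size.

```python
import numpy as np

def background_light(img, splits=2, min_side=16):
    """Iteratively keep the sub-block with the largest pixel mean (claim 3 sketch)."""
    block = img.astype(np.float64)
    while min(block.shape[:2]) // splits >= min_side:
        h, w = block.shape[:2]
        hs, ws = h // splits, w // splits
        subs = [block[i * hs:(i + 1) * hs, j * ws:(j + 1) * ws]
                for i in range(splits) for j in range(splits)]
        block = max(subs, key=lambda s: s.mean())  # block with the largest pixel mean
    return block.reshape(-1, block.shape[2]).mean(axis=0)  # per-channel background light
```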
4. The method for recovering an underwater image according to claim 1, wherein after said obtaining the first underwater recovered image, further comprises:
dividing the first underwater restoration image into a plurality of local subareas, and carrying out contrast-limiting equalization processing on each local subarea so as to enable the histogram of each local subarea to meet a preset contrast threshold value, thereby obtaining a second underwater restoration image;
according to a white balance method, calculating an adjustment gain corresponding to each channel pixel value of RGB in the second underwater restoration image, and adjusting the pixel value of the second underwater restoration image based on the adjustment gain to obtain a final underwater restoration image.
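The post-processing of claim 4 maps naturally onto OpenCV's contrast-limited adaptive histogram equalization (CLAHE) followed by a per-channel gain. The CLAHE settings and the gray-world rule are stand-ins, since the claim fixes neither the contrast threshold nor the specific white balance method; an 8-bit BGR input is assumed.

```python
import cv2
import numpy as np

def post_process(bgr, clip_limit=2.0, tiles=(8, 8)):
    """Claim 4, sketched: per-subregion contrast limiting, then white balance."""
    # Contrast-limited equalization of the lightness channel over local subregions.
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tiles)
    out = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

    # Gray-world white balance: a gain per color channel pulls each channel
    # mean toward the global mean before clipping back to the 8-bit range.
    img = out.astype(np.float64)
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-6)
    return np.clip(img * gains, 0, 255).astype(np.uint8)
```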
5. An underwater image restoration apparatus, comprising: the device comprises a background light value acquisition module, a scene depth acquisition module, a back scattering component value acquisition module, a direct component transmittance acquisition module and an underwater restoration image output module;
The background light value acquisition module is used for acquiring an original underwater image, dividing the original underwater image into blocks, and obtaining a background light value of the original underwater image based on pixel values of the divided blocks;
the scene depth acquisition module is used for acquiring pixel values of all channels of the original underwater image in an RGB color space, and calculating and obtaining the brightness value and saturation of the original underwater image in an HSI color space according to a preset color space conversion formula; a first scene depth estimation model is constructed, the brightness value and the saturation are input into the first scene depth estimation model, so that the first scene depth estimation model outputs first scene depth, wherein the first scene depth estimation model is as follows:
$$d_1(x) = \left| L(x) - S(x) \right|$$

where $L(x)$ is the luminance value in the HSI color space, $S(x)$ is the saturation value in the HSI color space, and $\left|\cdot\right|$ denotes the absolute value;
constructing a second scene depth estimation model, and inputting the pixel value into the second scene depth estimation model so that the second scene depth estimation model outputs a second scene depth, wherein the second scene depth estimation model is as follows:
$$d_2(x) = \min_{y \in \Omega_r(x)} \left( \theta_0 + \theta_1\, m(y) + \theta_2\, v(y) \right)$$

where $m$ is the maximum pixel value over the blue and green channels in the RGB color space, $v$ is the red channel pixel value in the RGB color space, and $\theta_0$, $\theta_1$, $\theta_2$ are weight constants whose values are determined from the parameters proposed by the Song method, namely 0.53214829, 0.51309827 and -0.91066194, respectively; the $\min$ function represents optimization with minimum filtering, and $\Omega_r(x)$ is the $r$-neighborhood of $x$, with $r$ being 5;
constructing a linear weighted fusion model so that the linear weighted fusion model fuses the first scene depth and the second scene depth to obtain scene depth;
the back scattering component value acquisition module is used for screening pixels of a scene depth map corresponding to the scene depth, calculating a back scattering component predicted value according to the channel value corresponding to each screened pixel, and calculating a back scattering component value according to the back scattering component predicted value, the scene depth and the background light value;
the direct component transmittance obtaining module is used for obtaining a median value of the standardized residual energy ratio in the original underwater image, taking the median value as an attenuation parameter of the direct component transmittance, and inputting the attenuation parameter and the scene depth into a preset transmittance formula to obtain the direct component transmittance;
The underwater recovery image output module is used for constructing a physical imaging model, substituting the background light value, the back scattering component value and the direct component transmittance into the physical imaging model, and carrying out inversion degradation on the physical imaging model to obtain a first underwater recovery image.
6. The underwater image restoration device as defined in claim 5, wherein the back scattering component value obtaining module is configured to perform pixel point screening on a scene depth map corresponding to the scene depth, and calculate a back scattering component predicted value according to a channel value corresponding to each screened pixel point, specifically:
performing dimension conversion on the scene depth map corresponding to the scene depth to obtain a one-dimensional vector of the scene depth map, and sorting the scene depth values in the one-dimensional vector in a preset order to obtain an ordered scene depth sequence;
and dividing the ordered scene depth sequence into subintervals, screening the corresponding pixel points in each subinterval, and obtaining the channel values corresponding to the screened pixel points to obtain the back scattering component predicted value.
7. A terminal device comprising a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the underwater image restoration method according to any of claims 1 to 4 when executing the computer program.
8. A computer readable storage medium, characterized in that the computer readable storage medium comprises a stored computer program, wherein the computer program, when run, controls a device in which the computer readable storage medium is located to perform the underwater image restoration method according to any of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210413325.7A | 2022-04-19 | 2022-04-19 | Underwater image restoration method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114926353A (en) | 2022-08-19 |
CN114926353B (en) | 2023-05-23 |
Family
ID=82807239
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210413325.7A | Underwater image restoration method, device, equipment and storage medium | 2022-04-19 | 2022-04-19 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114926353B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110335210A (en) * | 2019-06-11 | 2019-10-15 | 长江勘测规划设计研究有限责任公司 | Underwater image restoration method |
CN114119383A (en) * | 2021-09-10 | 2022-03-01 | 大连海事大学 | Underwater image restoration method based on multi-feature fusion |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105761227B (en) * | 2016-03-04 | 2019-02-22 | 天津大学 | Underwater picture Enhancement Method based on dark channel prior and white balance |
CN108596857A (en) * | 2018-05-09 | 2018-09-28 | 西安邮电大学 | Single image to the fog method for intelligent driving |
CN108921887B (en) * | 2018-06-07 | 2022-06-24 | 上海海洋大学 | Underwater scene depth map estimation method based on underwater light attenuation priori |
CN108876743B (en) * | 2018-06-26 | 2020-12-29 | 中山大学 | Image rapid defogging method, system, terminal and storage medium |
CN111833258B (en) * | 2019-04-19 | 2023-08-25 | 中国科学院沈阳自动化研究所 | Image color correction method based on double-transmissivity underwater imaging model |
CN113888420A (en) * | 2021-09-24 | 2022-01-04 | 同济大学 | Underwater image restoration method and device based on correction model and storage medium |
CN113989164B (en) * | 2021-11-24 | 2024-04-09 | 河海大学常州校区 | Underwater color image restoration method, system and storage medium |
- 2022-04-19: Application CN202210413325.7A filed in China; published as CN114926353B (en); status Active
Also Published As
Publication number | Publication date |
---|---|
CN114926353A (en) | 2022-08-19 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |