CN114926353A - Underwater image restoration method, device, equipment and storage medium

Underwater image restoration method, device, equipment and storage medium

Info

Publication number
CN114926353A
Authority
CN
China
Prior art keywords
scene depth
image
value
underwater
underwater image
Prior art date
Legal status
Granted
Application number
CN202210413325.7A
Other languages
Chinese (zh)
Other versions
CN114926353B (en)
Inventor
苗建明
张文睿
孙兴宇
邓侃侃
仝懿聪
王燕云
刘文超
郑若晗
龚喜
彭超
刘涛
Current Assignee
Sun Yat Sen University
Southern Marine Science and Engineering Guangdong Laboratory Zhuhai
Original Assignee
Sun Yat Sen University
Southern Marine Science and Engineering Guangdong Laboratory Zhuhai
Priority date
Filing date
Publication date
Application filed by Sun Yat Sen University, Southern Marine Science and Engineering Guangdong Laboratory Zhuhai filed Critical Sun Yat Sen University
Priority to CN202210413325.7A
Publication of CN114926353A
Application granted
Publication of CN114926353B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 5/92
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T 7/536 Depth or shape recovery from perspective effects, e.g. by using vanishing points
    • G06T 7/90 Image analysis; Determination of colour characteristics
    • Y02A 90/30 Technologies having an indirect contribution to adaptation to climate change; Assessment of water resources

Abstract

The invention discloses an underwater image restoration method, device, equipment and storage medium. A background light value is obtained from the pixel values of image blocks in an original underwater image; meanwhile, a scene depth estimation model is constructed to obtain the scene depth, and related parameters of the original underwater image, such as the backscattering component estimated value, the backscattering component value and the direct component transmittance, are estimated based on the scene depth; the obtained background light value, backscattering component value and direct component transmittance are then substituted into the constructed physical imaging model for inversion degradation, and the first underwater restored image is obtained. Compared with the prior art, the technical scheme of the invention restores the underwater image by acquiring its scene depth, estimating its related parameters, and performing inversion degradation on the constructed physical imaging model, thereby reducing the dependence on manual work and improving the efficiency and precision of underwater image restoration.

Description

Underwater image restoration method, device, equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular to an underwater image restoration method, device, equipment and storage medium.
Background
In underwater working scenes, the complexity of the water body causes refraction, scattering and other phenomena as light propagates through the water, so underwater images are prone to severe blurring. Because the original clear underwater image cannot be obtained and the blur function that degrades the underwater image cannot be measured accurately, the application of underwater image restoration technology has long been limited, which in turn indirectly restricts the development of underwater detection and related technologies.
Most existing image restoration techniques model the image according to an image degradation model and require a manually designed cost function to optimize the inverse problem, thereby estimating the original image and the blur function. For example, an underwater physical imaging model can be introduced into a variational energy model, data terms and smoothness terms based on underwater features are designed, and auxiliary variables together with norm constraints are used to rapidly solve the energy-minimization problem and recover clear underwater image content. Further research and exploration are therefore urgently needed into physical models suitable for underwater image restoration, with the aim of reducing the dependence of the restoration effect on manual design and improving the efficiency and accuracy of underwater image restoration.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an underwater image restoration method, device, equipment and storage medium that obtain the scene depth of the underwater image, estimate the related parameters of the underwater image, and perform inversion degradation on the constructed physical imaging model, thereby restoring the original underwater image, reducing the dependence on manual work, and improving the efficiency and precision of underwater image restoration.
In order to solve the above technical problem, the present invention provides an underwater image restoration method, including:
acquiring an original underwater image, dividing the original underwater image into image blocks, and obtaining a background light value of the original underwater image based on the pixel values of the divided image blocks;
constructing a scene depth estimation model to enable the scene depth estimation model to perform scene depth estimation on the original underwater image and output the scene depth;
performing pixel point screening on a scene depth map corresponding to the scene depth, calculating a backscattering component estimated value according to a channel value corresponding to each screened pixel point, and calculating a backscattering component value according to the backscattering component estimated value, the scene depth and the background light value;
acquiring a median of a normalized residual energy ratio in the original underwater image, taking the median as an attenuation parameter of the direct component transmittance, and inputting the attenuation parameter and the scene depth into a preset transmittance formula to obtain the direct component transmittance;
and constructing a physical imaging model, substituting the background light value, the backscattering component value and the direct component transmittance into the physical imaging model, and performing inversion degradation on the physical imaging model to obtain a first underwater restored image.
As a possible implementation manner, the constructing a scene depth estimation model so that the scene depth estimation model performs scene depth estimation on the original underwater image and outputs the scene depth specifically includes:
acquiring pixel values of the original underwater image in each channel in an RGB color space, and calculating and obtaining a brightness value and a saturation of the original underwater image in an HSI color space according to a preset color space conversion formula;
constructing a first scene depth estimation model, and inputting the brightness value and the saturation into the first scene depth estimation model so that the first scene depth estimation model outputs a first scene depth;
constructing a second scene depth estimation model, and inputting the pixel values into the second scene depth estimation model so that the second scene depth estimation model outputs a second scene depth;
and constructing a linear weighted fusion model so that the linear weighted fusion model fuses the first scene depth and the second scene depth to obtain the scene depth.
As a possible implementation manner, pixel point screening is performed on the scene depth map corresponding to the scene depth, and according to the channel value corresponding to each screened pixel point, a backscatter component pre-estimated value is calculated, specifically:
performing dimension conversion on a scene depth map corresponding to the scene depth to obtain a one-dimensional vector of the scene depth map, and sorting the scene depth values in the one-dimensional vector in a preset order to obtain a scene depth ordered sequence;
and carrying out subinterval division on the scene depth ordered sequence, screening corresponding pixel points in each subinterval, acquiring channel values corresponding to the screened pixel points, and obtaining a backscattering component estimated value.
As a possible implementation manner, the dividing the original underwater image into image blocks and obtaining the background light value of the original underwater image based on the pixel values of the divided image blocks specifically includes:
dividing the original underwater image into a first preset number of image blocks, calculating a pixel mean value of each image block, selecting a first image block with the largest pixel mean value in all the image blocks, performing iterative division on the first image block until the size of the divided second image block meets a preset size, calculating the pixel mean value of the second image block, and taking the mean value of the second image block as a background light value of the original underwater image.
As a possible implementation manner, after the obtaining the first underwater restoration image, the method further includes:
dividing the first underwater restoration image into a plurality of local sub-regions, and performing contrast-limiting equalization processing on each local sub-region so that the histogram of each local sub-region meets a preset contrast threshold, to obtain a second underwater restoration image;
and calculating adjustment gains corresponding to pixel values of RGB channels in the second underwater restoration image according to a white balance method, and adjusting the pixel values of the second underwater restoration image based on the adjustment gains to obtain a final underwater restoration image.
An embodiment of the present invention further provides an underwater image restoration apparatus, comprising: a background light value acquisition module, a scene depth acquisition module, a backscattering component value acquisition module, a direct component transmittance acquisition module and an underwater restored image output module;
the background light value acquisition module is used for acquiring an original underwater image, performing block division on the original underwater image, and obtaining a background light value of the original underwater image based on pixel values of the divided blocks;
the scene depth acquisition module is used for constructing a scene depth estimation model so as to enable the scene depth estimation model to carry out scene depth estimation on the original underwater image and output scene depth;
the backscattering component value acquisition module is used for screening pixel points of a scene depth map corresponding to the scene depth, calculating a backscattering component estimated value according to channel values corresponding to the screened pixel points, and calculating a backscattering component value according to the backscattering component estimated value, the scene depth and the background light value;
the direct component transmittance acquisition module is used for acquiring a median of a normalized residual energy ratio in the original underwater image, taking the median as an attenuation parameter of the direct component transmittance, and inputting the attenuation parameter and the scene depth into a preset transmittance formula to obtain the direct component transmittance;
the underwater restoration image output module is used for constructing a physical imaging model, substituting the background light value, the backscattering component value and the direct component transmittance into the physical imaging model, and performing inversion degradation on the physical imaging model to obtain a first underwater restoration image.
As a possible implementation manner, the scene depth obtaining module is configured to construct a scene depth estimation model, so that the scene depth estimation model performs scene depth estimation on the original underwater image, and outputs the scene depth, specifically:
acquiring pixel values of the original underwater image in each channel in an RGB color space, and calculating and obtaining a brightness value and a saturation of the original underwater image in an HSI color space according to a preset color space conversion formula;
constructing a first scene depth estimation model, and inputting the brightness value and the saturation into the first scene depth estimation model so that the first scene depth estimation model outputs a first scene depth;
constructing a second scene depth estimation model, and inputting the pixel values into the second scene depth estimation model so that the second scene depth estimation model outputs a second scene depth;
and constructing a linear weighted fusion model so that the linear weighted fusion model fuses the first scene depth and the second scene depth to obtain the scene depth.
As a possible implementation manner, the backscatter component value obtaining module is configured to perform pixel point screening on a scene depth map corresponding to the scene depth, and calculate a backscatter component estimated value according to a channel value corresponding to each screened pixel point, specifically:
performing dimension conversion on the scene depth map corresponding to the scene depth to obtain a one-dimensional vector of the scene depth map, and sorting the scene depth values in the one-dimensional vector in a preset order to obtain a scene depth ordered sequence;
and carrying out subinterval division on the scene depth ordered sequence, screening corresponding pixel points in each subinterval, acquiring channel values corresponding to the screened pixel points, and obtaining a backscattering component estimated value.
As a possible implementation manner, the background light value obtaining module is configured to divide the original underwater image into image blocks and obtain the background light value of the original underwater image based on the pixel values of the divided image blocks, specifically:
dividing the original underwater image into a first preset number of image blocks, calculating a pixel mean value of each image block, selecting a first image block with the largest pixel mean value in all the image blocks, performing iterative division on the first image block until the size of the divided second image block meets a preset size, calculating the pixel mean value of the second image block, and taking the mean value of the second image block as a background light value of the original underwater image.
As a possible implementation manner, the underwater image restoration device provided in an embodiment of the present invention further includes: an underwater restored image processing module;
the underwater restored image processing module is configured to divide the first underwater restoration image into a plurality of local sub-regions, and perform contrast-limiting equalization processing on each local sub-region so that the histogram of each local sub-region meets a preset contrast threshold, to obtain a second underwater restoration image;
and the underwater restored image processing module is configured to calculate, according to a white balance method, the adjustment gain corresponding to the pixel values of each RGB channel in the second underwater restoration image, and to adjust the pixel values of the second underwater restoration image based on the adjustment gains to obtain a final underwater restoration image.
An embodiment of the present invention further provides a terminal device, which includes a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, where the processor implements the underwater image restoration method according to any one of the above items when executing the computer program.
The embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium includes a stored computer program, where when the computer program runs, the apparatus where the computer-readable storage medium is located is controlled to execute the underwater image restoration method according to any one of the above mentioned methods.
Compared with the prior art, the underwater image restoration method, device, equipment and storage medium provided by the embodiments of the invention have the following beneficial effects:
the method comprises the steps of obtaining a background light value of an original underwater image, constructing a scene depth estimation model to obtain the scene depth of the original underwater image, estimating relevant parameters such as a backscattering component estimated value, a backscattering component value and direct component transmissivity of the original underwater image based on the scene depth, substituting the obtained background light value, backscattering component value and direct component transmissivity into the constructed physical imaging model to perform inversion degradation, and obtaining a first underwater restored image. Compared with the prior art, the technical scheme provided by the invention has the advantages that the scene depth with higher accuracy is obtained, the backscattering component pre-estimated value, the backscattering component value, the direct component transmittance and other related parameters of the original underwater image are estimated based on the scene depth, the parameter estimation process is not required to be manually designed with constraints, punishment items and the like, the operation process is simple, meanwhile, the constructed physical imaging model is directly subjected to model inversion degradation based on the obtained related parameters of the original underwater image, the calculated amount is small, and the obtaining efficiency and the precision of the underwater restored image are improved.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of an underwater image restoration method provided by the present invention;
FIG. 2 is a schematic structural diagram of an embodiment of an underwater image restoration device provided by the present invention;
FIG. 3 is a schematic diagram of a first scene depth estimated by the first scene depth estimation model S-I without an artificial light source according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a first scene depth estimation based on a first scene depth estimation model S-I under an artificial light source according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a second scene depth estimated by the second scene depth estimation model BG-R according to an embodiment of the present invention;
fig. 6 is a schematic view of scene depth fusion obtained based on fusion of a linear weighted fusion model according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
Example 1
Referring to fig. 1, fig. 1 is a schematic flowchart of an embodiment of an underwater image restoration method provided by the present invention, as shown in fig. 1, the method includes steps 101 to 105, specifically as follows:
step 101: acquiring an original underwater image, carrying out image block division on the original underwater image, and obtaining a backlight value of the original underwater image based on pixel values of the divided image blocks.
In one embodiment, the background light of the original underwater image is calculated using a quadtree hierarchical search method. The method comprises the steps of uniformly dividing an original underwater image into a first preset number of image blocks, calculating a pixel mean value of each image block, selecting a first image block with the largest pixel mean value in all the image blocks, carrying out iterative division on the first image block until the size of the divided second image block meets a preset size, calculating the pixel mean value of the second image block, and taking the mean value of the second image block as a background light value of the original underwater image.
As an example of this embodiment, the original underwater image is uniformly divided into four image blocks, the mean pixel value of the RGB three channels of each image block is calculated, and the image block with the largest mean pixel value among the four is selected as the first image block; the first image block is then uniformly divided into four image blocks, and the above steps are applied iteratively until the divided image block satisfies the set threshold, at which point it is marked as the second image block and the division search ends. The finally obtained mean pixel value of the RGB three channels of the second image block is taken as the background light value B_c^∞ of the original underwater image, wherein the set threshold is 50.
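As a minimal illustrative sketch of this quadtree hierarchical search (the function name, the NumPy representation and the stopping rule on block side length are assumptions, not the patent's exact implementation), the brightest-quadrant selection can be iterated as follows:

```python
import numpy as np

def estimate_background_light(img, min_size=50):
    # img: H x W x 3 float RGB array; returns the per-channel
    # background light value as a 3-vector.
    block = img
    while min(block.shape[0], block.shape[1]) > min_size:
        h, w = block.shape[0] // 2, block.shape[1] // 2
        # Split the current block into four quadrants.
        quads = [block[:h, :w], block[:h, w:], block[h:, :w], block[h:, w:]]
        # Keep the quadrant with the largest mean pixel value.
        block = max(quads, key=lambda q: q.mean())
    # The RGB mean of the final block is the background light value.
    return block.reshape(-1, 3).mean(axis=0)
```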
Step 102: constructing a scene depth estimation model so that the scene depth estimation model performs scene depth estimation on the original underwater image and outputs the scene depth.
In one embodiment, the pixel values of the original underwater image in each channel of the RGB color space are obtained, and the brightness value and the saturation of the original underwater image in the HSI color space are calculated according to a preset color space conversion formula. Specifically, the image of the original underwater image in the RGB color space is converted into an image in the HSI color space, where HSI refers to a digital image model in which color is perceived through three basic feature quantities: hue H, saturation S and intensity (brightness) L. The conversion formulas for the brightness L and the saturation S are as follows:

R, G, B ∈ [0, 255];
(R′, G′, B′) = (R, G, B)/255.0;
L = (R′ + G′ + B′)/3;
S = 1 − 3·min(R′, G′, B′)/(R′ + G′ + B′);

In the formulas, R, G and B are the pixel values of each channel in the RGB color space, and R′, G′, B′ are the normalized values of R, G, B. L denotes the brightness value in the HSI color space, and S denotes the saturation in the HSI color space.
In one embodiment, a first scene depth estimation model S-I is constructed, and the brightness value and the saturation are input into the first scene depth estimation model so that it outputs a first scene depth D_1. The constructed first scene depth estimation model S-I is:

D_1(x) = |L(x) − S(x)|.
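A minimal sketch of the S-I model under the conversion formulas above (the function name and the small epsilon guard against division by zero are added assumptions):

```python
import numpy as np

def scene_depth_si(img):
    # img: H x W x 3 uint8 RGB image; returns the H x W map D_1 = |L - S|.
    rgb = img.astype(np.float64) / 255.0
    total = rgb.sum(axis=2) + 1e-6            # guard against division by zero
    L = total / 3.0                           # HSI brightness
    S = 1.0 - 3.0 * rgb.min(axis=2) / total   # HSI saturation
    return np.abs(L - S)
```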
in one embodiment, the acquired original underwater image may be an original underwater image under an artificial light source or an original underwater image without an artificial light source. Referring to fig. 3 and 4, fig. 3 is a schematic diagram of a first scene depth estimation based on a first scene depth estimation model S-I estimation without artificial light source; FIG. 4 is a schematic diagram of a first scene depth estimation based on a first scene depth estimation model S-I under an artificial light source. As shown in fig. 3, it can be seen that, in the absence of a light source, the first scene depth map estimated based on the first scene depth estimation model better conforms to the scene depth perception under human vision, an area closer to the camera in the real scene corresponds to the estimated low pixel value image block at the bottom in the first scene depth map, and an area farther from the camera corresponds to the estimated high pixel value image block at the top in the first scene depth map. Under the condition of artificial light source irradiation, as shown in fig. 4, the first scene depth schematic diagram estimated based on the first scene depth estimation model can inhibit the influence of light source irradiation to a certain extent, and the accuracy of scene depth estimation is improved.
In the embodiment, the first scene depth estimation model S-I is constructed based on the saturation and brightness of the original underwater image and the scene depth change rule, so that the accuracy and the universality of scene depth estimation are improved.
In one embodiment, a second scene depth estimation model BG-R is constructed based on the image of the original underwater image in the RGB color space; the pixel values are input into the second scene depth estimation model, and the second scene depth estimation model outputs a second scene depth D_2. The constructed second scene depth estimation model BG-R is:

D_2(x) = min_{y∈Ω_r(x)} [ c_1 + c_2·M_BG(y) + c_3·V_R(y) ];

In the formula, M_BG is the maximum pixel value of the blue and green channels in the RGB color space, V_R is the red channel pixel value in the RGB color space, and c_1, c_2, c_3 are constant weights whose values follow the parameters proposed by the Song method, namely 0.53214829, 0.51309827 and −0.91066194 respectively. min_{y∈Ω_r(x)} denotes optimization by minimum-value filtering, where Ω_r(x) is an r × r neighborhood centered at x and r takes the value 5.
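A sketch of the BG-R model, assuming the minimum-value filter is applied to the raw linear response (one consistent reading of the formula above) and an RGB channel order:

```python
import numpy as np
from scipy.ndimage import minimum_filter

# Weights quoted from the Song method in the text above.
C1, C2, C3 = 0.53214829, 0.51309827, -0.91066194

def scene_depth_bgr(img, r=5):
    # img: H x W x 3 uint8 image in RGB channel order.
    rgb = img.astype(np.float64) / 255.0
    m_bg = np.maximum(rgb[..., 1], rgb[..., 2])  # max of green and blue
    v_r = rgb[..., 0]                            # red channel
    raw = C1 + C2 * m_bg + C3 * v_r
    # Minimum-value filtering over an r x r neighborhood.
    return minimum_filter(raw, size=r)
```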
Referring to fig. 5, fig. 5 is a schematic diagram of the second scene depth estimated by the second scene depth estimation model BG-R. As shown in fig. 5, for an underwater image with an artificial light source, the BG-R model effectively reduces the influence of the light source illumination and improves the accuracy of the second scene depth estimation. In close-range areas strongly affected by the light source, the red channel of the corresponding pixels in the RGB color space is compensated, so the red pixel values are not obviously attenuated relative to the blue-green pixel values; that is, the difference between the red component and the blue-green component is small, which better reflects the second scene depth under close-range artificial illumination. In distant areas weakly affected by the light source, visible light attenuates normally with wavelength: red light, having the longest wavelength, attenuates most severely, while the blue-green channels attenuate relatively slowly, so the difference between the maximum blue-green pixel value and the red pixel value grows with the depth of field, enabling accurate estimation of the far-scene depth.
In an embodiment, a linear weighted fusion model is constructed, so that the linear weighted fusion model fuses the first scene depth and the second scene depth to obtain the scene depth. Wherein, the fusion formula is as follows:
D(x, y) = α·D_1(x, y) + β·D_2(x, y);

In the formula, α and β are weights satisfying α + β = 1.
In this embodiment, the first scene depth estimation model S-I and the second scene depth estimation model BG-R are fused using the linear weighted sum model, which further improves the accuracy and generality of scene depth estimation; moreover, since the proposed models, including the fusion, are all linear, the computational efficiency of scene depth estimation is improved.
In one embodiment, the scene depth obtained by fusing the first scene depth and the second scene depth is optimized by guided filtering, so that the depth of the underwater scene is reflected more accurately. The formula for guided filtering is as follows:

D_final(x, y) = Guidedfilter(I_gray, p, r, eps);

In the formula, I_gray is the grayscale image of the original underwater image, p is the fused scene depth D, r is the local window radius, which takes the value 17, and eps is the regularization parameter, which takes the value 0.001.
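A sketch of the fusion and refinement steps; the equal fusion weights are an assumption (the text only requires α + β = 1), and cv2.ximgproc.guidedFilter requires the opencv-contrib-python package:

```python
import cv2
import numpy as np

def fuse_and_refine(img_bgr, d1, d2, alpha=0.5, r=17, eps=1e-3):
    # Linear weighted fusion D = alpha*D1 + beta*D2 with beta = 1 - alpha,
    # then guided filtering with the grayscale image as the guide.
    beta = 1.0 - alpha
    d = (alpha * d1 + beta * d2).astype(np.float32)
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    return cv2.ximgproc.guidedFilter(guide=gray, src=d, radius=r, eps=eps)
```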
Referring to fig. 6, fig. 6 is a schematic diagram of the scene depth obtained by fusion with the linear weighted fusion model. As shown in fig. 6, the scene depth obtained by fusing the first scene depth estimated by the model S-I with the second scene depth estimated by the model BG-R represents the scene well, both in close-range regions strongly influenced by the light source and in distant regions weakly influenced by it.
Step 103: performing pixel point screening on the scene depth map corresponding to the scene depth, calculating a backscattering component estimated value according to the channel values corresponding to the screened pixel points, and calculating a backscattering component value according to the backscattering component estimated value, the scene depth and the background light value.
In one embodiment, dimension conversion is performed on the scene depth map corresponding to the scene depth to obtain a one-dimensional vector of the scene depth map, and the scene depth values in the one-dimensional vector are sorted in a preset order to obtain a scene depth ordered sequence. Specifically, since the scene depth acquired in step 102 is two-dimensional (w, h), it is converted into a one-dimensional (w × h, 1) vector and arranged in descending order of scene depth value.
In one embodiment, the scene depth ordered sequence is divided into subintervals, the corresponding pixel points in each subinterval are screened, and the channel values of the screened pixel points are acquired to obtain the backscattering component estimated value. Specifically, the obtained scene depth ordered sequence is uniformly divided into 10 subintervals; the pixel points of the RGB three channels corresponding to each subinterval are obtained and screened, the first 1% of pixel points of the RGB three channels are selected in each subinterval, and each channel value of the screened pixel points is taken as a backscattering component estimated value.
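A sketch of this screening step; interpreting "the first 1%" as the darkest 1% of pixels in each depth subinterval is an assumption commonly made for backscatter estimation:

```python
import numpy as np

def screen_pixels(depth, img, n_bins=10, frac=0.01):
    # Partition pixels into 10 depth subintervals (descending depth)
    # and keep the darkest 1% of pixels in each subinterval.
    # Returns the scene depths and RGB values of the screened pixels.
    d = depth.ravel()
    rgb = img.reshape(-1, 3).astype(np.float64)
    order = np.argsort(-d)                       # descending depth
    d_pts, v_pts = [], []
    for chunk in np.array_split(order, n_bins):  # 10 subintervals
        k = max(1, int(len(chunk) * frac))
        darkest = chunk[np.argsort(rgb[chunk].sum(axis=1))[:k]]
        d_pts.append(d[darkest])
        v_pts.append(rgb[darkest])
    return np.concatenate(d_pts), np.concatenate(v_pts)
```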
in one embodiment, the estimation is based on the obtained backscatter component estimate, the backlight value obtained in step 101, and the backlight value obtained in step 102Accurately estimating the backscattering component B by using a nonlinear least square fitting method according to the acquired scene depth c Because the backscattering component has uncertainty, an error term is added when the backscattering component value is calculated by using the nonlinear least square fitting method in the embodiment
Figure BDA0003603153750000131
Wherein the nonlinear least squares fitting function is as follows:
Figure BDA0003603153750000132
in the formula, beta b 、β d 、J ′c Are unknown scalar parameters.
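A sketch of the per-channel fit using SciPy's nonlinear least squares; the initial guess and the iteration cap are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_backscatter(d_pts, b_pts, b_inf):
    # d_pts, b_pts: depths and channel values of the screened pixels;
    # b_inf: this channel's background light value from step 101.
    def model(d, beta_b, j_prime, beta_d):
        return b_inf * (1.0 - np.exp(-beta_b * d)) \
               + j_prime * np.exp(-beta_d * d)

    p0 = [1.0, 0.1, 1.0]  # initial guess for beta_b, J'_c, beta_d
    params, _ = curve_fit(model, d_pts, b_pts, p0=p0, maxfev=5000)
    return lambda d: model(d, *params)  # fitted backscatter B_c(d)
```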
In this embodiment, nonlinear least squares fitting is performed using the backscattering component estimated value, the accurately estimated scene depth and the background light, so that the backscattering component value is obtained more accurately; this indirectly guarantees the accuracy of the direct component in the subsequent physical imaging model and further improves the clarity of the restored image obtained by inverting that model.
Step 104: acquiring a median of the normalized residual energy ratio in the original underwater image, taking the median as an attenuation parameter of the direct component transmittance, and inputting the attenuation parameter and the scene depth into a preset transmittance formula to obtain the direct component transmittance.
In one embodiment, the wavelength values of light in the RGB three channels of the original underwater image are obtained; based on the normalized residual energy ratio Nrer(λ) defined in the Type I ocean water quality standard, the corresponding normalized residual energy ratio is selected according to the wavelength of the light, and the median of Nrer(λ) in each RGB channel is taken as the attenuation parameter of the direct component transmittance. The normalized residual energy ratio Nrer(λ) defined in the Type I ocean water quality standard is as follows:

Nrer(λ) = 0.80–0.85 for 600 nm < λ ≤ 700 nm (red); 0.93–0.97 for 490 nm < λ ≤ 600 nm (green); 0.95–0.99 for 400 nm < λ ≤ 490 nm (blue).
in an embodiment, the obtained attenuation parameter of the direct component transmittance and the scene depth obtained in step 102 are introduced into a transmittance formula to obtain a transmittance map, and the direct component transmittance is obtained based on the transmittance map, and meanwhile, optimization processing is performed by using the direct component transmittance obtained by the guided filtering, where the transmittance formula is as follows:
t c (x)=Nrer(λ) d(x) =e -β(λ)d(x)
in the formula, t c (x) Is a transmittance map.
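A sketch of the per-channel transmittance computation; the per-channel medians are assumptions derived from the ranges quoted above:

```python
import numpy as np

# Assumed median Nrer per channel for Type I ocean water.
NRER = {"R": 0.825, "G": 0.95, "B": 0.97}

def direct_transmittance(depth, channel):
    # t_c(x) = Nrer(lambda)^d(x), evaluated per pixel.
    return np.power(NRER[channel], depth)
```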
Step 105: constructing a physical imaging model, substituting the background light value, the backscattering component value and the direct component transmittance into the physical imaging model, and performing inversion degradation on the physical imaging model to obtain a first underwater restored image.
In one embodiment, a physical imaging model is constructed, the background light value, the backscattering component value and the direct component transmittance are substituted into the physical imaging model, and the physical imaging model is subjected to inversion degradation. The formula of the physical imaging model is as follows:

I_c(x) = D_c(x) + B_c(x), c ∈ {R, G, B};

Wherein c denotes the color channel, I_c is the original underwater image, D_c is the direct attenuation component, and B_c is the backscattering component.
In one embodiment, the direct attenuation component D_c can be expressed as:

D_c(x) = J_c(x)·t_c^d(x);

In the formula, J_c(x) is the restored clear underwater image without color cast, and t_c^d(x) is the transmittance of the direct component.
In one embodiment, the light intensity expression of the backscattering component B_c can be written as:

B_c(x) = B_c^∞·(1 − t_c^b(x));

In the formula, B_c^∞ is the background light value, and t_c^b(x) is the transmittance of the backscattering component.
In one embodiment, the transmittance t_c^b(x) of the backscattering component and the transmittance t_c^d(x) of the direct component are defined according to the Beer-Lambert law as follows:

t_c(x) = Nrer(λ)^{d(x)} = e^{−β(λ)·d(x)};

In the formula, t_c(x) is the transmittance map; Nrer(λ) is the normalized residual energy ratio, whose magnitude is determined by the wavelength of the light and which represents the attenuation of light energy per unit distance traveled through water; β(λ) is the attenuation coefficient, covering attenuation caused by both absorption and scattering effects; and d(x) is the scene depth, i.e., the distance from the light reflected by the target object to the camera.
In this embodiment, in order to construct a more realistic underwater physical imaging model, different transmittances are used for the direct component and the backscattering component.
In one embodiment, the final expression of the constructed physical imaging model is obtained as follows:

I_c(x) = J_c(x)·e^{−β_d(λ)·d(x)} + B_c^∞·(1 − e^{−β_b(λ)·d(x)});

In the formula, β_d(λ) is the attenuation coefficient of the direct component and β_b(λ) is the attenuation coefficient of the backscattering component.

In one embodiment, inversion degradation of the constructed physical imaging model gives the first underwater restored image J_c as follows:

J_c(x) = (I_c(x) − B_c(x)) / e^{−β_d(λ)·d(x)}.
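A sketch of the per-channel inversion; clamping the transmittance from below and clipping the output to [0, 1] are added stabilization assumptions:

```python
import numpy as np

def restore_channel(i_c, b_c, t_d, t_min=0.1):
    # J_c = (I_c - B_c) / t_d, where t_d = e^{-beta_d * d} is the
    # direct component transmittance for this channel.
    t = np.maximum(t_d, t_min)  # avoid amplifying noise at small t
    return np.clip((i_c - b_c) / t, 0.0, 1.0)
```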
in an embodiment, after the first underwater recovery image is obtained, the contrast of the first underwater recovery image is further improved by using a histogram equalization with limited contrast.
In an embodiment, the first underwater restoration image is divided into a plurality of local sub-regions by using a contrast-limited histogram equalization method, and contrast-limited equalization processing is performed on each local sub-region, so that the histogram of each local sub-region meets a preset contrast threshold, and a second underwater restoration image is obtained. Specifically, the image is divided into 8 non-overlapping local sub-regions, wherein the size of the divided local sub-regions is M × N; and respectively carrying out contrast limiting equalization processing on each non-overlapping local sub-region, wherein the contrast limiting equalization processing comprises the following steps: generating a histogram according to the pixel value corresponding to each local subregion, counting the histograms of each local subregion, and setting a contrast threshold for each local subregion, wherein the set contrast threshold is 2; and cutting the histogram of each local subregion according to the contrast threshold, counting the pixel values exceeding the contrast threshold in the local subregions, redistributing the pixel values exceeding the threshold in the histogram of each local subregion, performing iterative processing on the steps until the finally redistributed histogram meets the threshold condition, acquiring the central pixel point in the finally redistributed histogram, and performing gray value reconstruction on the central pixel point obtained by each local subregion by using a bilinear interpolation method to obtain a second underwater restored image.
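OpenCV's CLAHE implements the clip-redistribute-interpolate procedure described above; applying it to the lightness channel of the Lab color space, as in this sketch, is an implementation choice rather than the patent's exact recipe:

```python
import cv2

def clahe_enhance(img_bgr, clip=2.0, tiles=8):
    # Contrast-limited adaptive histogram equalization on the
    # lightness channel, with clip limit 2 and an 8 x 8 tile grid.
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(tiles, tiles))
    lab[..., 0] = clahe.apply(lab[..., 0])
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```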
In an embodiment, after the second underwater restoration image is obtained, the image color of the second underwater restoration image is further optimized according to a white balance method.
In an embodiment, according to the white balance method, the adjustment gains corresponding to the pixel values of the RGB channels in the second underwater restored image are calculated, and the pixel values of the second underwater restored image are adjusted based on the adjustment gains to obtain the final underwater restored image. Specifically, the image of the second underwater restored image in the RGB color space is converted into an image in the YCrCb color space, where Y represents the luminance signal, C_r represents the red chrominance component and C_b represents the blue chrominance component. After the color space conversion is completed, the means M_r, M_b and variances D_r, D_b corresponding to the red chrominance component C_r and the blue chrominance component C_b are calculated with respect to the reference white point. The variance formulas are as follows:

D_r = Σ_{i,j} |C_r(i,j) − M_r| / N;
D_b = Σ_{i,j} |C_b(i,j) − M_b| / N;

Wherein N is the total number of pixels of the image J_c.
In one embodiment, based on the calculated means M_r, M_b and variances D_r, D_b of the red chrominance component C_r and the blue chrominance component C_b, the corresponding near-white region is calculated. The near-white region conditions for the red chrominance component C_r and the blue chrominance component C_b are expressed as follows:

|C_r(i,j) − (1.5·M_r + D_r·sign(M_r))| < 1.5·D_r;
|C_b(i,j) − (M_b + D_b·sign(M_b))| < 1.5·D_b.
in one embodiment, a brightness matrix RL based on a reference white light spot and a pixel point distinguishing condition are set, the pixel points in a near white region are distinguished, if the pixel points existing in the near white region accord with the pixel point distinguishing condition, all the pixel points meeting the pixel point distinguishing condition are set as the reference white light spot, the brightness components Y of all the pixel points meeting the pixel point distinguishing condition are obtained, and the brightness components Y are substituted into the corresponding positions of the brightness matrix RL; if the existing pixel points in the near white area do not accord with the pixel point distinguishing condition, the brightness component of the corresponding position of the pixel point which does not accord with the pixel point distinguishing condition is set to be 0 in the brightness matrix RL.
In an embodiment, after the setting of the brightness matrix RL is completed, the brightness values in the brightness matrix RL are sorted from large to small, the first 10% of the brightness values in the brightness matrix RL are selected, and the minimum brightness value of the selected first 10% of the brightness values is defined as L min Readjusting the brightness matrix RL if RL (i, j)<L min RL (i, j) ═ 0; otherwise RL (i, j) is 1; and obtains the adjusted luminance matrix.
In an embodiment, the RGB three-channel pixel values R, G, and B corresponding to the second underwater recovery image are multiplied by the adjusted luminance matrix, respectively, to obtain the new RGB three-channel pixel value R corresponding to the second underwater recovery image 2 ,G 2 ,B 2 And separately calculate R 2 ,G 2 ,B 2 Corresponding average value R mean ,G mean ,B mean While calculating the maximum luminance component Y max
In one embodiment, based on the maximum luminance component and the channel means R_mean, G_mean, B_mean, the adjustment gains R_gain, G_gain, B_gain of the RGB three channels of the second underwater restored image are calculated with the following formulas:

R_gain = Y_max / R_mean;
G_gain = Y_max / G_mean;
B_gain = Y_max / B_mean.

In one embodiment, based on the adjustment gains R_gain, G_gain, B_gain of the RGB three channels, the second underwater restored image is adjusted to obtain the final underwater restored image. The adjustment formulas are as follows:

R_final = R × R_gain;
G_final = G × G_gain;
B_final = B × B_gain;

In the formulas, R_final, G_final and B_final are the pixel values of each channel in the RGB color space after the white balance processing of the image.
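A sketch of the whole white-balance step; it assumes OpenCV's YCrCb representation (chrominance offset by 128) and follows the near-white-region and top-10% rules above:

```python
import cv2
import numpy as np

def white_balance(img_bgr):
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float64)
    y, cr, cb = ycrcb[..., 0], ycrcb[..., 1], ycrcb[..., 2]
    mr, mb = cr.mean(), cb.mean()
    dr = np.abs(cr - mr).mean()  # mean absolute deviation, as in D_r
    db = np.abs(cb - mb).mean()
    # Near-white region per the two conditions above.
    near_white = (np.abs(cr - (1.5 * mr + dr * np.sign(mr))) < 1.5 * dr) \
               & (np.abs(cb - (mb + db * np.sign(mb))) < 1.5 * db)
    # Keep the top 10% brightest near-white pixels as reference whites.
    l_min = np.percentile(y[near_white], 90)
    ref = near_white & (y >= l_min)
    out = img_bgr.astype(np.float64)
    y_max = y.max()
    for ch in range(3):  # per-channel gain Y_max / channel mean
        gain = y_max / max(out[..., ch][ref].mean(), 1e-6)
        out[..., ch] = np.clip(out[..., ch] * gain, 0, 255)
    return out.astype(np.uint8)
```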
In summary, in the underwater image restoration method provided by this embodiment, the background light is calculated using a quadtree hierarchical search method; a linear weighted model is constructed to fuse the scene depths obtained by the two scene depth estimation models, and guided filtering is used for optimization to obtain an accurate scene depth, a process that is simple and fast to compute and yields an accurate estimate; nonlinear least squares fitting that simultaneously uses the backscattering component estimated value, the accurately estimated scene depth and the background light value is applied to fit an accurate backscattering component value, greatly improving the clarity of the restored image; the direct component transmittance is calculated using the scene depth map and the normalized residual energy ratio; based on the obtained parameters, inversion degradation is performed on the constructed physical imaging model to obtain the first underwater restored image; finally, white balance and contrast-limited histogram equalization are used to further improve the image brightness and optimize the image color, so that the obtained final underwater restored image is clearer.
Example 2
Referring to fig. 2, fig. 2 is a schematic structural diagram of an embodiment of an underwater image restoration apparatus provided by the present invention. As shown in fig. 2, the apparatus includes a background light value obtaining module 201, a scene depth obtaining module 202, a backscattering component value obtaining module 203, a direct component transmittance obtaining module 204, and an underwater restored image output module 205, as follows:
The background light value obtaining module 201 is configured to obtain an original underwater image, divide the original underwater image into image blocks, and obtain a background light value of the original underwater image based on the pixel values of the divided image blocks.
The scene depth obtaining module 202 is configured to construct a scene depth estimation model, so that the scene depth estimation model performs scene depth estimation on the original underwater image, and outputs the scene depth.
The backscattering component value obtaining module 203 is configured to perform pixel point screening on the scene depth map corresponding to the scene depth, calculate a backscattering component estimated value according to the channel values corresponding to the screened pixel points, and calculate a backscattering component value according to the backscattering component estimated value, the scene depth and the background light value.
The direct component transmittance obtaining module 204 is configured to obtain a median of the normalized residual energy ratio in the original underwater image, use the median as an attenuation parameter of the direct component transmittance, and input the attenuation parameter and the scene depth into a preset transmittance formula to obtain the direct component transmittance.
The underwater restoration image output module 205 is configured to construct a physical imaging model, substitute the background light value, the backscattering component value, and the direct component transmittance into the physical imaging model, and perform inversion degradation on the physical imaging model to obtain a first underwater restoration image.
In an embodiment, the scene depth obtaining module 202 is configured to construct a scene depth estimation model so that the scene depth estimation model performs scene depth estimation on the original underwater image and outputs the scene depth. Specifically, the pixel values of the original underwater image in each channel of the RGB color space are obtained, and the brightness value and the saturation of the original underwater image in the HSI color space are calculated according to a preset color space conversion formula; a first scene depth estimation model is constructed, and the brightness value and the saturation are input into the first scene depth estimation model so that it outputs a first scene depth; a second scene depth estimation model is constructed, and the pixel values are input into the second scene depth estimation model so that it outputs a second scene depth; and a linear weighted fusion model is constructed so that the linear weighted fusion model fuses the first scene depth and the second scene depth to obtain the scene depth.
In an embodiment, the backscattering component value obtaining module 203 is configured to perform pixel point screening on the scene depth map corresponding to the scene depth and calculate a backscattering component estimated value according to the channel values corresponding to the screened pixel points. Specifically, dimension conversion is performed on the scene depth map corresponding to the scene depth to obtain a one-dimensional vector of the scene depth map, and the scene depth values in the one-dimensional vector are sorted in a preset order to obtain a scene depth ordered sequence; the scene depth ordered sequence is divided into subintervals, the corresponding pixel points in each subinterval are screened, and the channel values corresponding to the screened pixel points are acquired to obtain the backscattering component estimated value.
In an embodiment, the background light value obtaining module 201 is configured to divide the original underwater image into image blocks and obtain the background light value of the original underwater image based on the pixel values of the divided image blocks. Specifically, the original underwater image is divided into a first preset number of image blocks, the pixel mean value of each image block is calculated, the first image block with the largest pixel mean value among all the image blocks is selected, and the first image block is iteratively divided until the size of the divided second image block meets a preset size; the pixel mean value of the second image block is then calculated and taken as the background light value of the original underwater image.
In one embodiment, the underwater image restoration device further comprises an underwater restored image processing module. The underwater restored image processing module is configured to divide the first underwater restored image into a plurality of local sub-regions and perform contrast-limiting equalization processing on each local sub-region so that the histogram of each local sub-region meets a preset contrast threshold, obtaining a second underwater restored image; and to calculate, according to a white balance method, the adjustment gain corresponding to the pixel values of each RGB channel in the second underwater restored image, and adjust the pixel values of the second underwater restored image based on the adjustment gains to obtain the final underwater restored image.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
It should be noted that the above embodiment of the underwater image restoration device is only schematic, where the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical units, that is, may be located in one place, or may also be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
On the basis of the foregoing embodiment of the underwater image restoration method, another embodiment of the present invention provides an underwater image restoration terminal device, which includes a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, and when the processor executes the computer program, the underwater image restoration method according to any one of the embodiments of the present invention is implemented.
Illustratively, the computer program may be partitioned in this embodiment into one or more modules that are stored in the memory and executed by the processor to implement the invention. The one or more modules may be a series of instruction segments of a computer program capable of performing specific functions, the instruction segments being used for describing the execution process of the computer program in the underwater image restoration terminal device.
The underwater image restoration terminal device can be a desktop computer, a notebook computer, a palm computer, a cloud server and other computing devices. The underwater image restoration terminal device can comprise, but is not limited to, a processor and a memory.
The Processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor is the control center of the underwater image restoration terminal device and uses various interfaces and lines to connect all parts of the whole device.
The memory may be used to store the computer program and/or modules, and the processor implements the various functions of the underwater image restoration terminal device by running or executing the computer program and/or modules stored in the memory and calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function, and the like, and the data storage area may store data created according to the use of the device, and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
On the basis of the foregoing embodiment of the underwater image restoration method, another embodiment of the present invention provides a storage medium, where the storage medium includes a stored computer program, and when the computer program runs, an apparatus on which the storage medium is located is controlled to execute the underwater image restoration method according to any one of the embodiments of the present invention.
In this embodiment, the storage medium is a computer-readable storage medium, and the computer program includes computer program code, which may be in source code form, object code form, an executable file or some intermediate form, and so on. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, etc. It should be noted that the computer-readable medium may contain suitable additions or subtractions depending on the requirements of legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer-readable media may not include electrical carrier signals or telecommunication signals in accordance with legislation and patent practice.
In summary, the underwater image restoration method, device, equipment and storage medium of the present invention obtain the background light value from the pixel values of image blocks divided from the original underwater image. Meanwhile, a scene depth estimation model is constructed to obtain the scene depth, and relevant parameters of the original underwater image, such as the backscattering component estimated value, the backscattering component value and the direct component transmittance, are estimated based on the scene depth. The obtained background light value, backscattering component value and direct component transmittance are then substituted into the constructed physical imaging model for inversion degradation, yielding the first underwater restored image. Compared with the prior art, the technical solution of the present invention restores underwater images by acquiring the scene depth of the underwater image, estimating the related parameters, and performing inversion degradation on the constructed physical imaging model, thereby reducing dependence on manual work and improving the efficiency and accuracy of underwater image restoration.
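For illustration only, the inversion degradation step summarized above can be sketched numerically as follows, assuming the commonly used revised underwater imaging model I_c = J_c * t_c + B_c with backscatter B_c = B_inf,c * (1 - t_c). Every name below (invert_imaging_model, t_floor, the synthetic beta and background_light values) is a hypothetical placeholder, not a value or interface disclosed by the patent:

    import numpy as np

    def invert_imaging_model(image, backscatter, transmittance, t_floor=0.1):
        # Revised underwater model: I = J * t + B, so the restored scene
        # radiance is J = (I - B) / max(t, t_floor); the floor keeps the
        # division from amplifying noise where transmittance is near zero.
        t = np.maximum(transmittance, t_floor)
        restored = (image - backscatter) / t
        return np.clip(restored, 0.0, 1.0)

    # Toy usage with synthetic inputs (H x W x 3 arrays in [0, 1]).
    h, w = 4, 4
    image = np.random.rand(h, w, 3)
    depth = np.random.rand(h, w)                  # relative scene depth
    beta = np.array([0.8, 0.4, 0.2])              # per-channel attenuation (R, G, B)
    background_light = np.array([0.1, 0.5, 0.7])  # estimated background light
    t = np.exp(-beta * depth[..., None])          # direct component transmittance
    backscatter = background_light * (1.0 - t)    # B = B_inf * (1 - t)
    restored = invert_imaging_model(image, backscatter, t)

The transmittance floor is a common numerical safeguard rather than part of the claimed method: where t approaches zero, dividing by it would amplify sensor noise instead of recovering scene radiance.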
The above description is only a preferred embodiment of the present invention. It should be noted that, for those skilled in the art, several modifications and substitutions can be made without departing from the technical principle of the present invention, and such modifications and substitutions should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. An underwater image restoration method, comprising:
acquiring an original underwater image, performing image block division on the original underwater image, and obtaining a background light value of the original underwater image based on pixel values of the divided image blocks;
constructing a scene depth estimation model, so that the scene depth estimation model performs scene depth estimation on the original underwater image and outputs the scene depth;
performing pixel point screening on a scene depth map corresponding to the scene depth, calculating a backscattering component estimated value according to a channel value corresponding to each screened pixel point, and calculating a backscattering component value according to the backscattering component estimated value, the scene depth and the background light value;
acquiring a median of a normalized residual energy ratio in the original underwater image, taking the median as an attenuation parameter of the direct component transmittance, and inputting the attenuation parameter and the scene depth into a preset transmittance formula to obtain the direct component transmittance;
and constructing a physical imaging model, substituting the background light value, the backscattering component value and the direct component transmittance into the physical imaging model, and performing inversion degradation on the physical imaging model to obtain a first underwater restored image.
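As a hedged illustration of the transmittance step in claim 1, the sketch below assumes the common formulation t_c(x) = NRER_c^d(x), in which the normalized residual energy ratio acts as the attenuation parameter. The per-channel constants are typical literature values for ocean water, not the image-derived median that the claim specifies:

    import numpy as np

    # Typical per-channel normalized residual energy ratios (R, G, B);
    # red light attenuates fastest underwater. The patent instead takes
    # the median NRER measured from the image itself.
    NRER = np.array([0.83, 0.95, 0.97])

    def direct_transmittance(depth):
        # t_c(x) = NRER_c ** d(x): the residual energy after one unit of
        # depth, raised to the scene depth at each pixel.
        return NRER[None, None, :] ** depth[..., None]

    t = direct_transmittance(np.random.rand(8, 8))   # 8 x 8 x 3 transmittance map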
2. The underwater image restoration method according to claim 1, wherein the scene depth estimation model is constructed so that the scene depth estimation model performs scene depth estimation on the original underwater image and outputs the scene depth, specifically:
acquiring pixel values of the original underwater image in each channel in an RGB color space, and calculating and obtaining a brightness value and a saturation of the original underwater image in an HSI color space according to a preset color space conversion formula;
constructing a first scene depth estimation model, and inputting the brightness value and the saturation into the first scene depth estimation model, so that the first scene depth estimation model outputs a first scene depth;
constructing a second scene depth estimation model, and inputting the pixel values into the second scene depth estimation model, so that the second scene depth estimation model outputs a second scene depth;
and constructing a linear weighted fusion model so that the linear weighted fusion model fuses the first scene depth and the second scene depth to obtain the scene depth.
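A minimal sketch of the two-branch depth estimation and linear fusion in claim 2, assuming an underwater-light-attenuation-prior style linear model on brightness and saturation for the first branch and a red-versus-blue/green intensity gap for the second. The coefficients mu and the fusion weight w1 are illustrative placeholders, not values disclosed in the patent:

    import numpy as np

    def estimate_scene_depth(rgb, mu=(0.5, 0.35, -0.95), w1=0.6):
        # rgb: H x W x 3 array in [0, 1].
        # Branch 1: linear model on HSI intensity and saturation
        # (d1 = mu0 + mu1 * I + mu2 * S; coefficients are placeholders).
        intensity = rgb.mean(axis=2)
        saturation = 1.0 - rgb.min(axis=2) / (intensity + 1e-6)
        d1 = mu[0] + mu[1] * intensity + mu[2] * saturation

        # Branch 2: red light attenuates fastest underwater, so a large gap
        # between the blue/green maximum and the red channel suggests a
        # larger scene depth (an illustrative prior, not the claimed formula).
        d2 = np.maximum(rgb[..., 1], rgb[..., 2]) - rgb[..., 0]

        def normalize(d):
            return (d - d.min()) / (d.max() - d.min() + 1e-6)

        # Linear weighted fusion of the two depth estimates.
        return w1 * normalize(d1) + (1.0 - w1) * normalize(d2)

    depth = estimate_scene_depth(np.random.rand(6, 8, 3))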
3. The underwater image restoration method according to claim 1, wherein pixel point screening is performed on the scene depth map corresponding to the scene depth, and the backscattering component estimated value is calculated according to the channel value corresponding to each screened pixel point, specifically:
performing dimension conversion on the scene depth map corresponding to the scene depth to obtain a one-dimensional vector of the scene depth map, and sorting the scene depth values in the one-dimensional vector in a preset order to obtain a scene depth ordered sequence;
and carrying out subinterval division on the scene depth ordered sequence, screening corresponding pixel points in each subinterval, acquiring channel values corresponding to the screened pixel points, and obtaining a backscattering component estimated value.
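One plausible reading of claim 3, sketched below: within each sub-interval of the sorted depth sequence, the darkest pixels are screened as backscatter samples, on the assumption that little direct signal survives in them. The bin count, the screening fraction and all function names are assumptions made for illustration:

    import numpy as np

    def estimate_backscatter_points(image, depth, n_bins=10, frac=0.01):
        # Flatten the depth map to a one-dimensional vector and sort it
        # (the "scene depth ordered sequence"), then split into sub-intervals.
        flat_depth = depth.reshape(-1)
        flat_rgb = image.reshape(-1, 3)
        order = np.argsort(flat_depth)
        bins = np.array_split(order, n_bins)

        samples = []
        for idx in bins:
            # Screen the darkest pixels in each depth sub-interval; their
            # channel values approximate pure backscatter.
            brightness = flat_rgb[idx].sum(axis=1)
            k = max(1, int(frac * idx.size))
            darkest = idx[np.argsort(brightness)[:k]]
            samples.append(flat_rgb[darkest].mean(axis=0))
        return np.array(samples)   # n_bins x 3 backscatter estimates

    points = estimate_backscatter_points(np.random.rand(32, 32, 3),
                                         np.random.rand(32, 32))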
4. The underwater image restoration method according to claim 1, wherein image block division is performed on the original underwater image, and the background light value of the original underwater image is obtained based on pixel values of the divided image blocks, specifically:
dividing the original underwater image into a first preset number of image blocks, calculating the pixel mean value of each image block, selecting the first image block with the largest pixel mean value among all the image blocks, and iteratively dividing the first image block until the size of the divided second image block meets a preset size; and calculating the pixel mean value of the second image block, and taking the pixel mean value of the second image block as the background light value of the original underwater image.
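A minimal sketch of the iterative block division in claim 4, assuming a quadtree-style split in which the block with the largest pixel mean is kept at each level until the preset size is reached; n_split and min_size are illustrative parameters, not claimed values:

    import numpy as np

    def estimate_background_light(image, n_split=2, min_size=8):
        # Repeatedly split the current block into an n_split x n_split grid,
        # keep the sub-block with the largest pixel mean, and stop once the
        # block is no larger than the preset size.
        block = image
        while min(block.shape[0], block.shape[1]) > min_size:
            h, w = block.shape[:2]
            best, best_mean = block, -1.0
            for i in range(n_split):
                for j in range(n_split):
                    sub = block[i * h // n_split:(i + 1) * h // n_split,
                                j * w // n_split:(j + 1) * w // n_split]
                    if sub.size and sub.mean() > best_mean:
                        best, best_mean = sub, sub.mean()
            block = best
        # Mean of the final block serves as the per-channel background light.
        return block.reshape(-1, 3).mean(axis=0)

    background_light = estimate_background_light(np.random.rand(64, 64, 3))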
5. The underwater image restoration method according to claim 1, wherein after the first underwater restored image is obtained, the method further comprises:
dividing the first underwater restored image into a plurality of local sub-areas, and performing contrast-limited equalization processing on each local sub-area, so that the histogram of each local sub-area meets a preset contrast threshold, to obtain a second underwater restored image;
and calculating, according to a white balance method, adjustment gains corresponding to the pixel values of the RGB channels in the second underwater restored image, and adjusting the pixel values of the second underwater restored image based on the adjustment gains to obtain a final underwater restored image.
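A hedged sketch of the post-processing in claim 5, using OpenCV's contrast-limited adaptive histogram equalization over local tiles as a stand-in for the claimed contrast-limited equalization, and gray-world gains as a stand-in for the claimed white-balance adjustment; the clip limit and tile grid are illustrative settings:

    import cv2
    import numpy as np

    def postprocess(restored_bgr):
        # Contrast-limited equalization over local sub-areas, applied to the
        # lightness channel so that colors are not equalized independently.
        lab = cv2.cvtColor(restored_bgr, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        out = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)),
                           cv2.COLOR_LAB2BGR).astype(np.float64)

        # Gray-world white balance: per-channel gain = mean gray / channel mean.
        means = out.reshape(-1, 3).mean(axis=0)
        gains = means.mean() / (means + 1e-6)
        return np.clip(out * gains, 0, 255).astype(np.uint8)

    image = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
    final = postprocess(image)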
6. An underwater image restoration device, comprising: the device comprises a background light value acquisition module, a scene depth acquisition module, a backscattering component value acquisition module, a direct component transmittance acquisition module and an underwater restored image output module;
the background light value acquisition module is used for acquiring an original underwater image, performing image block division on the original underwater image, and obtaining a background light value of the original underwater image based on pixel values of the divided image blocks;
the scene depth acquisition module is used for constructing a scene depth estimation model so as to enable the scene depth estimation model to estimate the scene depth of the original underwater image and output the scene depth;
the backscattering component value acquisition module is used for screening pixel points of a scene depth map corresponding to the scene depth, calculating a backscattering component estimated value according to channel values corresponding to the screened pixel points, and calculating a backscattering component value according to the backscattering component estimated value, the scene depth and the background light value;
the direct component transmittance acquisition module is used for acquiring a median of a normalized residual energy ratio in the original underwater image, taking the median as an attenuation parameter of the direct component transmittance, and inputting the attenuation parameter and the scene depth into a preset transmittance formula to obtain the direct component transmittance;
the underwater restored image output module is used for constructing a physical imaging model, substituting the background light value, the backscattering component value and the direct component transmittance into the physical imaging model, and performing inversion degradation on the physical imaging model to obtain a first underwater restored image.
7. The underwater image restoration device according to claim 6, wherein the scene depth acquisition module is configured to construct a scene depth estimation model, so that the scene depth estimation model performs scene depth estimation on the original underwater image and outputs the scene depth, specifically:
acquiring pixel values of the original underwater image in each channel in an RGB color space, and calculating and obtaining a brightness value and a saturation of the original underwater image in an HSI color space according to a preset color space conversion formula;
constructing a first scene depth estimation model, and inputting the brightness value and the saturation into the first scene depth estimation model so that the first scene depth estimation model outputs a first scene depth;
constructing a second scene depth estimation model, and inputting the pixel values into the second scene depth estimation model, so that the second scene depth estimation model outputs a second scene depth;
and constructing a linear weighted fusion model so that the linear weighted fusion model fuses the first scene depth and the second scene depth to obtain the scene depth.
8. The underwater image restoration device according to claim 6, wherein the backscatter component value acquisition module is configured to perform pixel point screening on a scene depth map corresponding to the scene depth, and calculate a backscatter component estimated value according to a channel value corresponding to each screened pixel point, specifically:
performing dimension conversion on the scene depth map corresponding to the scene depth to obtain a one-dimensional vector of the scene depth map, and sorting the scene depth values in the one-dimensional vector in a preset order to obtain a scene depth ordered sequence;
and carrying out subinterval division on the scene depth ordered sequence, screening corresponding pixel points in each subinterval, acquiring channel values corresponding to the screened pixel points, and obtaining a backscattering component estimated value.
9. A terminal device, comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor implements the underwater image restoration method according to any one of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, comprising a stored computer program, wherein, when the computer program runs, a device on which the computer-readable storage medium is located is controlled to perform the underwater image restoration method according to any one of claims 1 to 5.
CN202210413325.7A 2022-04-19 2022-04-19 Underwater image restoration method, device, equipment and storage medium Active CN114926353B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210413325.7A CN114926353B (en) 2022-04-19 2022-04-19 Underwater image restoration method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114926353A true CN114926353A (en) 2022-08-19
CN114926353B CN114926353B (en) 2023-05-23

Family

ID=82807239

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210413325.7A Active CN114926353B (en) 2022-04-19 2022-04-19 Underwater image restoration method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114926353B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105761227A (en) * 2016-03-04 2016-07-13 天津大学 Underwater image enhancement method based on dark channel prior algorithm and white balance
CN108596857A (en) * 2018-05-09 2018-09-28 西安邮电大学 Single image to the fog method for intelligent driving
CN108876743A (en) * 2018-06-26 2018-11-23 中山大学 A kind of image rapid defogging method, system, terminal and storage medium
CN108921887A (en) * 2018-06-07 2018-11-30 上海海洋大学 Underwater scene depth map estimation method based on underwater light attenuation apriority
CN110335210A (en) * 2019-06-11 2019-10-15 长江勘测规划设计研究有限责任公司 Underwater image restoration method
CN111833258A (en) * 2019-04-19 2020-10-27 中国科学院沈阳自动化研究所 Image color correction method based on double-transmittance underwater imaging model
CN113888420A (en) * 2021-09-24 2022-01-04 同济大学 Underwater image restoration method and device based on correction model and storage medium
CN113989164A (en) * 2021-11-24 2022-01-28 河海大学常州校区 Underwater color image restoration method, system and storage medium
CN114119383A (en) * 2021-09-10 2022-03-01 大连海事大学 Underwater image restoration method based on multi-feature fusion

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WEI SONG: "A Rapid Scene Depth Estimation Model Based on Underwater Light Attenuation Prior for Underwater Image Restoration", Lecture Notes in Computer Science *
ZHAO LIN: "Computer Vision Technology in the Marine Environment", 31 October 2015, National Defense Industry Press *
GUO WEI: "Fast deep-sea image restoration algorithm for underwater robots", Acta Optica Sinica *

Also Published As

Publication number Publication date
CN114926353B (en) 2023-05-23

Similar Documents

Publication Publication Date Title
Zhou et al. Underwater image restoration via backscatter pixel prior and color compensation
Bui et al. Single image dehazing using color ellipsoid prior
CN109145922B (en) Automatic cutout system
Zhou et al. Underwater camera: Improving visual perception via adaptive dark pixel prior and color correction
US8818082B2 (en) Classifying blur state of digital image pixels
Fan et al. Two-layer Gaussian process regression with example selection for image dehazing
CN114240989A (en) Image segmentation method and device, electronic equipment and computer storage medium
Zhou et al. Underwater image restoration via depth map and illumination estimation based on a single image
CN110378848B (en) Image defogging method based on derivative map fusion strategy
Yang et al. Underwater image enhancement using scene depth-based adaptive background light estimation and dark channel prior algorithms
CN110675334A (en) Image enhancement method and device
Barros et al. Single-shot underwater image restoration: A visual quality-aware method based on light propagation model
CN109214996A (en) A kind of image processing method and device
Zhou et al. Underwater image enhancement via two-level wavelet decomposition maximum brightness color restoration and edge refinement histogram stretching
Wang et al. An efficient method for image dehazing
Fayaz et al. Efficient underwater image restoration utilizing modified dark channel prior
Mishra et al. Underwater image enhancement using multiscale decomposition and gamma correction
Li et al. Underwater image filtering: methods, datasets and evaluation
CN114187515A (en) Image segmentation method and image segmentation device
Chang et al. A self-adaptive single underwater image restoration algorithm for improving graphic quality
Singh et al. A systematic review of the methodologies for the processing and enhancement of the underwater images
Huang et al. Image dehazing based on robust sparse representation
CN114926353B (en) Underwater image restoration method, device, equipment and storage medium
Chen et al. Candidate region acquisition optimization algorithm based on multi-granularity data enhancement
CN114255193A (en) Board card image enhancement method, device, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant