CN115908178A - Underwater image restoration method based on dark channel prior

Info

Publication number
CN115908178A
Authority
CN
China
Prior art keywords: image, pixel, point, dark channel, super
Prior art date
Legal status: Pending
Application number
CN202211436297.7A
Other languages
Chinese (zh)
Inventor
毕胜
杨梦杰
付先平
刘晓凯
金国华
Current Assignee
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date
Filing date
Publication date
Application filed by Dalian Maritime University filed Critical Dalian Maritime University
Priority to CN202211436297.7A
Publication of CN115908178A

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/30: Assessment of water resources

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides an underwater image restoration method based on dark channel prior, which mainly comprises the following steps: segmenting the original image with the SLIC superpixel segmentation algorithm and replacing the fixed square filtering window of the dark channel with the obtained superpixel blocks; adaptively selecting the background light candidate points in the dark channel of the underwater image by using the clustered distribution of the candidate points and the smoothness of their surrounding neighborhoods; calculating an adaptive transmittance from the input image and the background light value; solving a preliminary restored image based on the underwater image restoration model, the final background light value and the adaptive transmission map; and performing color compensation on the preliminarily restored image in the Lab color space, then adjusting the contrast and saturation of the color-compensated image with a Gamma correction model. The method resolves color distortion, improves the restoration quality of underwater images, and adapts well to different scenes.

Description

Underwater image restoration method based on dark channel prior
Technical Field
The invention relates to the technical field of image processing, in particular to an underwater image restoration method based on dark channel prior.
Background
More than two thirds of the Earth's surface is ocean, and the ocean holds exceptionally rich resources, including marine power, mineral, biological and space resources. With the development of the economy, science and the military, and with the growing shortage of natural resources on land, marine output accounts for a rising share of gross domestic product year by year. Reasonable development of ocean resources can to some extent make up for China's resource shortage, which is very important for the country's economic and social development.
However, unlike ordinary images acquired on land, images captured in the complex and special underwater environment usually show low contrast, color distortion and loss of detail, so high-quality underwater images are difficult to obtain. Existing underwater image clarification techniques fall roughly into three categories: image enhancement methods, model-based image restoration methods and deep learning-based methods.
Deep learning-based methods commonly require large sample sets and long training times and lack an explicit physical mechanism. Image enhancement methods are simple and fast but often over-enhance or under-enhance the image. By comparison, model-based image restoration is theoretically more involved and requires several parameters to be solved, but its imaging model reflects the objective degradation mechanism, handles underwater images shot in different environments better, and yields results closer to the real scene, so it has better application prospects.
At present, most restoration methods based on physical models estimate the transmittance and background light inaccurately, cannot be applied to special environments, lack robustness, and cannot adapt when restoring different types of degraded images.
Disclosure of Invention
In view of the defects of the prior art, the invention provides an underwater image restoration method based on dark channel prior. The method determines the background light value by using the clustered distribution of the background light candidate points and the smoothness of their surrounding neighborhoods; it combines the characteristics and respective advantages of the traditional dark channel prior and the corrected red channel prior transmittance expressions and, through an equivalent transformation, obtains a transmittance applicable to a wider range of scenes; it preliminarily restores the underwater image, then performs color compensation in the Lab color space, and finally adjusts image contrast and saturation with a corrected Gamma model to obtain the final restored image.
An underwater image restoration method based on dark channel prior comprises the following steps:
S1: acquiring an original image, and constructing an underwater image restoration model based on the original image;
S2: segmenting the original image by using the SLIC superpixel segmentation algorithm and replacing the fixed square filtering window of the dark channel with the obtained superpixel blocks to obtain the dark channel image, wherein the SLIC superpixel segmentation algorithm locally clusters image pixels according to a distance measure;
S3: adaptively selecting the background light candidate points in the dark channel of the underwater image by using the clustered distribution of the candidate points and the smoothness of their surrounding neighborhoods, and finally determining the background light value;
S4: calculating the adaptive transmittance from the input image and the background light value;
S5: solving a preliminary restored image based on the underwater image restoration model, the final background light value and the adaptive transmission map;
S6: performing color compensation on the a and b color channel components of the preliminary restored image in the Lab color space, and adjusting the contrast and saturation of the whole color-compensated image with a Gamma correction model to obtain the final restored image.
Further, the underwater image restoration model in S1 is:
I_λ(x) = J_λ(x)·t_λ(x) + B_λ(x)·(1 − t_λ(x))
wherein I_λ(x) is the original input image, J_λ(x) is the restored output image, t_λ(x) denotes the transmittance, B_λ(x) denotes the global background light, and λ ∈ {R, G, B}.
Further, in S2, segmenting the original image by using the SLIC superpixel segmentation algorithm includes:
S21: initializing seed points, and uniformly distributing the seed points in the image according to the set number of superpixels;
S22: reselecting the seed points within an n × n neighborhood of each seed point by calculating the gradient values of all pixel points in the neighborhood and moving the seed point to the position of minimum gradient within the neighborhood;
S23: distributing a class label for each pixel point in the neighborhood around each seed point;
S24: for each searched pixel point, calculating the decision distance between the pixel point and each seed point, and taking the seed point with the minimum decision distance among the surrounding seed points as the clustering center of that pixel point, wherein the distance comprises a color component distance and a spatial component distance, calculated as follows:
d_c(k, i) = sqrt((l_k − l_i)² + (a_k − a_i)² + (b_k − b_i)²)
d_s(k, i) = sqrt((x_k − x_i)² + (y_k − y_i)²)
D(k, i) = sqrt((d_c(k, i)/N_c)² + (d_s(k, i)/N_s)²)
wherein d_s(k, i) is the spatial component distance between the k-th seed point (x_k, y_k) and image pixel (x_i, y_i); d_c(k, i) is the color component distance in the CIE LAB color space between seed point (x_k, y_k) and image pixel (x_i, y_i); (x_i, y_i) ∈ δ, where δ is the neighborhood of radius S centered on (x_k, y_k); N_s is the maximum spatial distance within a class, defined as N_s = sqrt(N/K); N_c is the maximum color distance; and D(k, i) is the decision distance between pixel (x_i, y_i) and seed point (x_k, y_k), the seed point giving the minimum value being taken as the clustering center of the pixel point;
Step S25: iterative optimization is performed by repeatedly performing S22 to S24 until the error converges.
Further, replacing the fixed square filtering window of the dark channel with the obtained superpixel blocks in S2 includes obtaining the dark channel map according to the following formula:
J_D(x) = min_{y∈Ω(x)} ( min_{λ∈{R,G,B}} J_λ(y) )
wherein J_λ denotes the R, G and B channels of image J, λ ∈ {R, G, B}, Ω(x) is the superpixel region, and J_D(x) is the dark channel of image J.
Further, in S3, the background light candidate points in the dark channel of the underwater image are adaptively selected by using the clustered distribution of the candidate points and the smoothness of their surrounding neighborhoods, and the background light value is finally determined, comprising the following steps:
step S31: selecting the brightest 1% of pixel points in the dark channel image, the corresponding points in the original image being the candidate points;
step S32: selecting the candidate point set according to the distribution of the candidate points over the superpixel blocks of the original image and the smoothness of their neighborhoods, specifically: the numbers of candidate points contained in each pair of superpixel blocks are compared; if one block contains more candidate points than the other and the difference is greater than half the count of the block with fewer points, the points in the block with more candidate points form the candidate set; otherwise, the points in the block with the smaller average gradient of the two are taken as the candidate set; all superpixel blocks in the original image are traversed in this way to obtain the final candidate point set;
step S33: among the original-image pixels corresponding to the selected candidate point set, the pixel satisfying max(max(V_G, V_B) − V_R) gives the background light value, wherein V_G, V_B and V_R are respectively the pixel values of the G, B and R color channels of the points in the candidate set.
Further, in S4, combining the characteristics of the transmittance expression corresponding to the dark channel prior and the corrected red channel prior, performing an equivalent transformation to obtain an adaptive transmittance expression, including calculating an adaptive transmittance according to the following formula:
(The adaptive transmittance expression t_ap(x) is given as an equation image in the original publication.)
wherein I_λ(y) is the original input image, t_ap(x) denotes the new adaptive transmittance, B_λ denotes the global background light, and λ ∈ {R, G, B}.
Further, in S5, solving the preliminary restored image based on the underwater image restoration model, the final background light value and the adaptive transmission map comprises obtaining the preliminary restored image according to the following formula:
J_λ(x) = (I_λ(x) − B_λ) / max(t_ap(x), t_0) + B_λ
wherein I_λ(x) is the original input image, J_λ(x) is the restored output image, t_ap(x) denotes the adaptive transmittance, t_0 represents the transmittance threshold, and B_λ represents the global background light.
Further, the color compensation of the color channel a, b components of the preliminary restored image in the Lab color space in S6 includes compensating the preliminary restored image according to the following formula:
(The a and b channel color compensation formulas are given as equation images in the original publication.)
wherein a_new is the compensated a channel, b_new is the compensated b channel, a and b are the color channels of the preliminary restored image, and ā and b̄ are the mean values of the a and b channels respectively.
Further, in S6, the step of adjusting the contrast and saturation of the image by using the Gamma correction model for the whole preliminary restored image includes adjusting the preliminary restored image according to the following formula:
(The Gamma correction model expression is given as an equation image in the original publication.)
wherein O(x) represents the output pixel value, I(x) represents the input pixel value, I_low is the minimum intensity value of the input image interval, I_high is the maximum intensity value of the input image interval, O_low denotes the desired stretching minimum, and O_high denotes the desired stretching maximum.
Compared with the prior art, the invention has the following advantages:
1. To address the misjudgment of the background light of an underwater image caused by bright pixels or white objects in the foreground, the method introduces a superpixel segmentation algorithm on top of the dark channel prior to separate objects at different depths of field from the background, reducing the negative influence of abrupt depth-of-field changes on dark channel acquisition and yielding a more accurate dark channel image; the background light candidate points in the dark channel are then selected adaptively, using their clustered distribution and the smoothness of their surrounding neighborhoods, to determine the final background light value.
2. By combining the characteristics of the transmittance expressions corresponding to the dark channel prior and the corrected red channel prior, the invention performs an equivalent transformation to obtain an adaptive transmittance expression applicable to a wider range of scenes, so the transmittance is estimated more accurately and the details of the preliminarily restored image are greatly improved.
3. Color compensation of the a and b color channel components in the Lab color space, together with contrast and saturation adjustment of the whole image by a Gamma correction model, greatly improves the sharpness, contrast and color saturation of the final restored image.
For the above reasons, the present invention can be widely applied to the fields of image processing and the like.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flow chart of an underwater image restoration method based on dark channel prior in the present invention.
Fig. 2 is a graph showing the effect of restoration of a coral image by the restoration method of the present invention compared with other restoration methods, wherein (a) shows an initial image before restoration, (b) shows a result of processing by the UDCP method, (c) shows a result of processing by the MSRCR method, (d) shows a result of processing by the Fusion method, (e) shows a result of processing by the present invention, and (f) shows an original image attached to a data set.
Fig. 3 is a graph showing a comparison between the restoration effect of the restoration method according to the present invention and the restoration effect of another restoration method on an image of a diver, where (a) shows an initial image before restoration, (b) shows a result of processing using the UDCP method, (c) shows a result of processing using the MSRCR method, (d) shows a result of processing using the Fusion method, (e) shows a result of processing using the present invention, and (f) shows an original image attached to a data set.
Fig. 4 is a graph showing a comparison of the restoration effect of the restoration method according to the present invention on a fish-school image with that of other restoration methods, in which (a) shows the initial image before restoration, (b) shows the result of processing with the UDCP method, (c) shows the result of processing with the MSRCR method, (d) shows the result of processing with the Fusion method, (e) shows the result of processing with the present invention, and (f) shows the original image attached to the data set.
Detailed Description
In order to make those skilled in the art better understand the technical solutions of the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, the present invention provides an underwater image restoration method based on dark channel prior, which includes the following steps:
s1: and acquiring an original image, and creating an underwater image restoration model for the original image.
The underwater image restoration model is:
I_λ(x) = J_λ(x)·t_λ(x) + B_λ(x)·(1 − t_λ(x))
wherein I_λ(x) is the original input image, J_λ(x) is the restored output image, t_λ(x) denotes the transmittance, B_λ(x) denotes the global background light, and λ ∈ {R, G, B}.
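For readers who want to experiment with the degradation model, it can be written directly in NumPy. The sketch below is illustrative only; the function name and array conventions are assumptions, not part of the patent.

```python
import numpy as np

def synthesize_underwater(J, t, B):
    """Forward model I = J * t + B * (1 - t).

    J : (H, W, 3) clean scene radiance in [0, 1]
    t : (H, W, 3) per-channel transmittance in [0, 1]
    B : length-3 global background light, one value per channel
    """
    B = np.asarray(B, dtype=float).reshape(1, 1, 3)
    return J * t + B * (1.0 - t)
```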
Step S2: the original image is segmented by using the SLIC superpixel segmentation algorithm, and the fixed square filtering window of the dark channel is replaced with the obtained superpixel blocks to obtain the dark channel image; the SLIC superpixel segmentation algorithm locally clusters image pixels according to a distance measure.
In S2, the original image is segmented by using the SLIC superpixel segmentation algorithm, and the segmentation comprises the following steps:
S21: initializing seed points. The seed points are distributed uniformly in the image according to the set number of superpixels. Assuming the picture has N pixel points in total and is pre-divided into K superpixels of equal size, each superpixel covers N/K pixels and the distance (step length) between adjacent seed points is approximately S = sqrt(N/K).
S22: reselecting the seed point within an n × n neighborhood of the original seed point (generally n = 3). The specific method is to calculate the gradient values of all pixel points in the neighborhood and move the seed point to the position of minimum gradient within the neighborhood. This prevents seed points from falling on contour boundaries with large gradients, which would affect the subsequent clustering.
S23: each pixel point in the neighborhood around each seed point is assigned a class label (i.e., which seed point it belongs to). The search range is limited to 2S × 2S, which speeds up the convergence of the algorithm.
S24: the distance measure includes a color distance and a spatial distance. For each searched pixel point, its distance to the seed point is calculated. Because each pixel point may be searched by several seed points, it has a distance to each of these surrounding seed points, and the seed point giving the minimum distance is taken as the clustering center of the pixel point. The specific calculation is as follows:
d_c(k, i) = sqrt((l_k − l_i)² + (a_k − a_i)² + (b_k − b_i)²)
d_s(k, i) = sqrt((x_k − x_i)² + (y_k − y_i)²)
D(k, i) = sqrt((d_c(k, i)/N_c)² + (d_s(k, i)/N_s)²)
wherein d_s(k, i) is the spatial distance between the k-th seed point (x_k, y_k) and image pixel (x_i, y_i); d_c(k, i) is the color component distance in the CIE LAB color space between seed point (x_k, y_k) and image pixel (x_i, y_i); (x_i, y_i) ∈ δ, where δ is the neighborhood of radius S centered on (x_k, y_k); N_s is the maximum spatial distance within a class, defined as N_s = sqrt(N/K); N_c is the maximum color distance, which varies from picture to picture and is therefore usually replaced by a fixed constant m, m ∈ [1, 40]; and D(k, i) is the overall decision distance between pixel (x_i, y_i) and seed point (x_k, y_k). The seed point giving the minimum value is taken as the clustering center of the pixel point.
Step S25: iterative optimization. In theory, S22 to S24 are repeated until the error converges; in practice, about 10 iterations give a satisfactory result for most pictures, so 10 iterations are generally used.
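The procedure of S21–S25 is the standard SLIC algorithm of Achanta et al., so in practice an off-the-shelf implementation can stand in for the hand-written iteration. The sketch below, using scikit-image, is an assumption about tooling rather than the patent's own code; parameter values are illustrative.

```python
from skimage.segmentation import slic

def superpixel_labels(image, n_segments=500, compactness=10.0):
    """Partition an RGB image (float in [0, 1]) into SLIC superpixels.

    Returns an (H, W) integer label map; each label marks one superpixel
    block that later replaces the fixed square filtering window.
    """
    # slic() clusters pixels jointly in CIELAB color and (x, y) position,
    # i.e. the combined distance D(k, i) described above; `compactness`
    # plays the role of the constant m weighting color against space.
    return slic(image, n_segments=n_segments, compactness=compactness)
```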
The fixed square filtering window of the dark channel is then replaced with the obtained superpixel blocks, and the dark channel image is acquired according to the following formula:
J_D(x) = min_{y∈Ω(x)} ( min_{λ∈{R,G,B}} J_λ(y) )
wherein J_λ denotes the R, G and B channels of image J, λ ∈ {R, G, B}, Ω(x) is the superpixel block region containing x, and J_D(x) is the dark channel of image J.
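A minimal sketch of the superpixel dark channel, assuming an RGB image in [0, 1] and the label map from the previous step; function and variable names are illustrative.

```python
import numpy as np

def dark_channel_superpixel(image, labels):
    """Dark channel J_D with superpixel blocks as the filtering window.

    Every pixel receives the minimum RGB value taken over its whole
    superpixel block instead of over a fixed square window.
    """
    per_pixel_min = image.min(axis=2)           # min over R, G, B at each pixel
    dark = np.empty_like(per_pixel_min)
    for label in np.unique(labels):
        mask = labels == label
        dark[mask] = per_pixel_min[mask].min()  # min over the whole block
    return dark
```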
Step S3: the background light candidate points in the dark channel of the underwater image are adaptively selected by using the clustered distribution of the candidate points and the smoothness of their surrounding neighborhoods, and the background light value is finally determined.
The step S3 of determining the final background light based on the characteristics of the background light candidate points comprises the following steps:
step S31: and selecting the brightest first 1% of pixel points in the dark channel image, wherein the corresponding points in the original image are points to be selected.
Step S32: the candidate point set is selected according to the distribution of the candidate points over the superpixel blocks of the original image and the smoothness of their neighborhoods. Specifically, the numbers of candidate points contained in each pair of superpixel blocks are compared; if one block contains more candidate points than the other and the difference is greater than half the count of the block with fewer points, the points in the block with more candidate points form the candidate set; otherwise, the points in the block with the smaller average gradient of the two are taken as the candidate set; all superpixel blocks in the original image are traversed in this way to obtain the final candidate point set. As an example, compare the numbers of candidate points in superpixel blocks A and B: if block A contains more candidate points than block B and the difference is greater than half the count in B, the points in A form the candidate set; if this condition is not met, the points in whichever of A and B has the smaller average gradient are taken as the candidate set. Following this rule, the superpixel blocks in the original image are compared in turn and the final candidate point set is selected.
Step S33: among the original-image pixels corresponding to the selected candidate point set, the pixel satisfying max(max(V_G, V_B) − V_R) gives the background light value, wherein V_G, V_B and V_R are respectively the pixel values of the G, B and R color channels of the points in the candidate set.
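As a simplified sketch of steps S31–S33 (the pairwise superpixel screening of S32 is omitted here, and the channel ordering is assumed to be R, G, B):

```python
import numpy as np

def estimate_background_light(image, dark):
    """Pick a background light value from the brightest 1% of the dark channel.

    Among the candidate pixels, the one maximizing max(V_G, V_B) - V_R is
    returned; the per-superpixel count/gradient screening of step S32 is
    omitted in this sketch.
    """
    flat = dark.ravel()
    k = max(1, int(0.01 * flat.size))               # brightest 1% of dark channel
    candidate_idx = np.argpartition(flat, -k)[-k:]
    candidates = image.reshape(-1, 3)[candidate_idx]            # (k, 3) as R, G, B
    score = np.maximum(candidates[:, 1], candidates[:, 2]) - candidates[:, 0]
    return candidates[np.argmax(score)]              # background light (R, G, B)
```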
Step S4: the transmittance expression is equivalently converted into an adaptive transmittance expression applicable to a wider range of scenes by combining the dark channel prior and the transmittance expression corresponding to the corrected red channel prior. Specifically, the adaptive transmittance is calculated from the input image and the background light value:
(The adaptive transmittance expression t_ap(x) is given as an equation image in the original publication.)
wherein I_λ(y) is the original input image, t_ap(x) denotes the new adaptive transmittance, B_λ denotes the global background light, and λ ∈ {R, G, B}.
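Since the adaptive expression itself is available only as an image in the published text, the sketch below shows the conventional dark-channel-prior transmittance that the description says it builds on. This is a baseline for orientation under assumed parameters (omega, superpixel windows), not the patented formula.

```python
import numpy as np

def dcp_transmittance(image, B, labels, omega=0.95):
    """Conventional DCP transmittance over superpixel windows (baseline only).

    t(x) = 1 - omega * min over the superpixel of min_c ( I_c(y) / B_c );
    the patent's adaptive expression additionally draws on the corrected
    red channel prior and is not reproduced here.
    """
    B = np.asarray(B, dtype=float).reshape(1, 1, 3)
    per_pixel_min = (image / np.maximum(B, 1e-6)).min(axis=2)
    t = np.empty(per_pixel_min.shape)
    for label in np.unique(labels):
        mask = labels == label
        t[mask] = 1.0 - omega * per_pixel_min[mask].min()
    return np.clip(t, 0.0, 1.0)
```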
Step S5: the preliminary restored image is solved by using the underwater image restoration model, the background light value obtained in step S3 and the adaptive transmission map from step S4.
The preliminary restored image is obtained according to the following formula:
J_λ(x) = (I_λ(x) − B_λ) / max(t_ap(x), t_0) + B_λ
wherein I_λ(x) is the original input image, J_λ(x) is the restored output image, t_ap(x) denotes the adaptive transmittance, t_0 is a lower bound (set to 0.1) that prevents an excessively low transmittance from making the restored image too bright, and B_λ represents the global background light.
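A sketch of this inversion, assuming an RGB image in [0, 1], the background light B from step S3 and the transmittance map t from step S4; names are illustrative.

```python
import numpy as np

def restore(image, B, t, t0=0.1):
    """Invert the imaging model: J = (I - B) / max(t, t0) + B.

    The lower bound t0 keeps a very small transmittance from over-brightening
    the restored image.
    """
    B = np.asarray(B, dtype=float).reshape(1, 1, 3)
    t = np.maximum(t, t0)[..., np.newaxis]        # broadcast over the 3 channels
    return np.clip((image - B) / t + B, 0.0, 1.0)
```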
Step S6: and performing color compensation on the components a and b of the color channels in the Lab color space, and adjusting the contrast and saturation of the image by using a Gamma correction model for the whole image to obtain a final restored image.
In step S6, the color compensation formula and the Gamma correction model expression for the a and b channels in the Lab color space are as follows:
step S61: color compensation is carried out on the a and b color channels in the Lab color space:
(The a and b channel color compensation formulas are given as equation images in the original publication.)
wherein a_new is the compensated a channel, b_new is the compensated b channel, a and b are the color channels of the preliminary restored image, and ā and b̄ are the mean values of the a and b channels respectively. When a = 0 and b = 0, the color channels take a neutral gray value.
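The exact compensation formula is published only as an equation image; a common gray-world style realization, shifting a and b by their means so they center on the neutral value 0, is sketched below as an assumption, not as the patent's own formula.

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def lab_color_compensation(rgb):
    """Shift the a and b channels toward 0 (neutral gray) in Lab space.

    Assumed gray-world style compensation; rgb is a float array in [0, 1].
    """
    lab = rgb2lab(rgb)
    lab[..., 1] -= lab[..., 1].mean()   # a channel
    lab[..., 2] -= lab[..., 2].mean()   # b channel
    return np.clip(lab2rgb(lab), 0.0, 1.0)
```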
Step S62: the expression using the Gamma correction model on the basis of the cumulative histogram is:
(The Gamma correction model expression is given as an equation image in the original publication.)
wherein O(x) represents the output pixel value, I(x) represents the input pixel value, I_low is the minimum intensity value of the input image interval, I_high is the maximum intensity value of the input image interval, O_low denotes the desired stretching minimum, and O_high denotes the desired stretching maximum.
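The Gamma expression is likewise given only as an image; as an assumption, a cumulative-histogram style stretch with a gamma exponent can be sketched as follows (percentile choices and the gamma value are illustrative).

```python
import numpy as np

def gamma_stretch(image, gamma=1.0, o_low=0.0, o_high=1.0,
                  low_pct=1.0, high_pct=99.0):
    """Map intensities from [I_low, I_high] onto [o_low, o_high] with gamma.

    I_low and I_high are taken as percentiles of the image histogram, i.e.
    fixed points of the cumulative histogram.
    """
    i_low, i_high = np.percentile(image, [low_pct, high_pct])
    x = np.clip((image - i_low) / max(i_high - i_low, 1e-6), 0.0, 1.0)
    return o_low + (o_high - o_low) * np.power(x, gamma)
```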
To verify the effectiveness of the underwater image restoration, underwater images of different scenes are selected as a test set and compared, both qualitatively and quantitatively, with the experimental results of the UDCP, MSRCR and Fusion methods. In fig. 2 and fig. 3, the data set is UIEB (Underwater Image Enhancement Benchmark Dataset), a paired data set open-sourced by Li et al.; its original images come from real underwater photographs and its ground truth is produced by a number of traditional methods. The data set used in fig. 4 is SUID (Synthetic Underwater Image Dataset), which is open source on IEEE; its ground truth is a real image taken on land and the original image is synthesized by an algorithm.
As shown in fig. 2, fig. 3 and fig. 4, which compare the underwater image restoration effect of the present invention with the other methods, the UDCP method cannot fully remove the color cast and its restored image quality is not high; the MSRCR and Fusion methods can remove the color cast but introduce a certain degree of color distortion and new color casts, and their recovery of distant details is poor, with low sharpness.
In the embodiment, the experimental results of the different methods are compared according to three objective indexes: UCIQE, UIQM and image Entropy. As can be seen from the data in Table 1, Table 2 and Table 3, the UCIQE, UIQM and Entropy of the results of the UDCP, MSRCR and Fusion methods and of the method of the present invention are all larger than those of the original image, i.e., all improve to a certain extent. The UDCP method gives only a small improvement in each index; the MSRCR and Fusion methods improve the UCIQE and UIQM indexes considerably, but the increase in image information entropy is small, so the average information content of the restored image is low. This indicates that the UDCP, MSRCR and Fusion methods improve the image quality to some extent, but the overall visual effect and the gain in image information entropy are limited. The data show that the present method greatly improves the UCIQE, UIQM and image information entropy of the original underwater image and outperforms the other underwater image restoration methods.
TABLE 1 UCIQE comparison of results of the invention and other methods
(table data given as an image in the original publication)
TABLE 2 UIQM comparison of results of the invention and other methods
(table data given as an image in the original publication)
TABLE 3 Entropy comparison of results of the invention and other methods
(table data given as an image in the original publication)
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and these modifications or substitutions do not depart from the spirit of the corresponding technical solutions of the embodiments of the present invention.

Claims (9)

1. An underwater image restoration method based on dark channel prior is characterized by comprising the following steps:
S1: acquiring an original image, and constructing an underwater image restoration model based on the original image;
S2: segmenting the original image by using the SLIC superpixel segmentation algorithm and replacing the fixed square filtering window of the dark channel with the obtained superpixel blocks to obtain the dark channel image, wherein the SLIC superpixel segmentation algorithm locally clusters image pixels according to a distance measure;
S3: adaptively selecting the background light candidate points in the dark channel of the underwater image by using the clustered distribution of the candidate points and the smoothness of their surrounding neighborhoods, and finally determining the background light value;
S4: calculating the adaptive transmittance from the input image and the background light value;
S5: solving a preliminary restored image based on the underwater image restoration model, the final background light value and the adaptive transmission map;
S6: performing color compensation on the a and b color channel components of the preliminary restored image in the Lab color space, and adjusting the contrast and saturation of the whole color-compensated image with a Gamma correction model to obtain the final restored image.
2. The method for restoring the underwater image based on the dark channel prior as claimed in claim 1, wherein the underwater image restoration model in S1 is:
I_λ(x) = J_λ(x)·t_λ(x) + B_λ(x)·(1 − t_λ(x))
wherein I_λ(x) is the original input image, J_λ(x) is the restored output image, t_λ(x) denotes the transmittance, B_λ(x) denotes the global background light, and λ ∈ {R, G, B}.
3. The method for underwater image restoration based on dark channel prior as claimed in claim 1, wherein the segmentation of the original image in S2 using the SLIC superpixel segmentation algorithm comprises:
S21: initializing seed points, and uniformly distributing the seed points in the image according to the set number of superpixels;
S22: reselecting the seed points within an n × n neighborhood of each seed point by calculating the gradient values of all pixel points in the neighborhood and moving the seed point to the position of minimum gradient within the neighborhood;
S23: distributing a class label for each pixel point in the neighborhood around each seed point;
S24: for each searched pixel point, calculating the decision distance between the pixel point and each seed point, and taking the seed point with the minimum decision distance among the surrounding seed points as the clustering center of that pixel point, wherein the distance comprises a color component distance and a spatial component distance, calculated as follows:
d_c(k, i) = sqrt((l_k − l_i)² + (a_k − a_i)² + (b_k − b_i)²)
d_s(k, i) = sqrt((x_k − x_i)² + (y_k − y_i)²)
D(k, i) = sqrt((d_c(k, i)/N_c)² + (d_s(k, i)/N_s)²)
wherein d_s(k, i) is the spatial component distance between the k-th seed point (x_k, y_k) and image pixel (x_i, y_i); d_c(k, i) is the color component distance in the CIE LAB color space between seed point (x_k, y_k) and image pixel (x_i, y_i); (x_i, y_i) ∈ δ, where δ is the neighborhood of radius S centered on (x_k, y_k); N_s is the maximum spatial distance within a class, defined as N_s = sqrt(N/K); N_c is the maximum color distance; and D(k, i) is the decision distance between pixel (x_i, y_i) and seed point (x_k, y_k), the seed point giving the minimum value being taken as the clustering center of the pixel point;
Step S25: iterative optimization is performed by repeatedly performing S22 to S24 until the error converges.
4. The method for restoring the underwater image based on the dark channel prior as claimed in claim 1, wherein replacing the fixed square filtering window of the dark channel with the obtained superpixel blocks in S2 comprises obtaining the dark channel map according to the following formula:
J_D(x) = min_{y∈Ω(x)} ( min_{λ∈{R,G,B}} J_λ(y) )
wherein J_λ denotes the R, G and B channels of image J, λ ∈ {R, G, B}, Ω(x) is the superpixel region, and J_D(x) is the dark channel of image J.
5. The method for restoring the underwater image based on the dark channel prior as claimed in claim 1, wherein in S3, the background light candidate points in the dark channel of the underwater image are adaptively selected by using the clustered distribution of the candidate points and the smoothness of their surrounding neighborhoods, and the background light value is finally determined, comprising the following steps:
step S31: selecting the brightest 1% of pixel points in the dark channel image, the corresponding points in the original image being the candidate points;
step S32: selecting the candidate point set according to the distribution of the candidate points over the superpixel blocks of the original image and the smoothness of their neighborhoods, specifically: the numbers of candidate points contained in each pair of superpixel blocks are compared; if one block contains more candidate points than the other and the difference is greater than half the count of the block with fewer points, the points in the block with more candidate points form the candidate set; otherwise, the points in the block with the smaller average gradient of the two are taken as the candidate set; all superpixel blocks in the original image are traversed in this way to obtain the final candidate point set;
step S33: among the original-image pixels corresponding to the selected candidate point set, the pixel satisfying max(max(V_G, V_B) − V_R) gives the background light value, wherein V_G, V_B and V_R are respectively the pixel values of the G, B and R color channels of the points in the candidate set.
6. The underwater image restoration method based on the dark channel prior as claimed in claim 1, wherein in S4, the characteristics of the transmittance expression corresponding to the dark channel prior and the corrected red channel prior are combined, and an equivalent transformation is performed to obtain an adaptive transmittance expression, which includes calculating the adaptive transmittance according to the following formula:
(The adaptive transmittance expression t_ap(x) is given as an equation image in the original publication.)
wherein I_λ(y) is the original input image, t_ap(x) denotes the new adaptive transmittance, B_λ denotes the global background light, and λ ∈ {R, G, B}.
7. The method for restoring the underwater image based on the dark channel prior as claimed in claim 1, wherein in S5, solving the preliminary restored image based on the underwater image restoration model, the final background light value and the adaptive transmission map comprises obtaining the preliminary restored image according to the following formula:
J_λ(x) = (I_λ(x) − B_λ) / max(t_ap(x), t_0) + B_λ
wherein I_λ(x) is the original input image, J_λ(x) is the restored output image, t_ap(x) denotes the adaptive transmittance, t_0 represents the transmittance threshold, and B_λ represents the global background light.
8. The method of claim 1, wherein the color compensation of the color channel a, b components of the preliminary restored image in the Lab color space in S6 comprises compensating the preliminary restored image according to the following formula:
(The a and b channel color compensation formulas are given as equation images in the original publication.)
wherein a_new is the compensated a channel, b_new is the compensated b channel, a and b are the color channels of the preliminary restored image, and ā and b̄ are the mean values of the a and b channels respectively.
9. The method of claim 8, wherein the step of adjusting the contrast and saturation of the image by using the Gamma correction model for the entire preliminary restored image in S6 comprises adjusting the preliminary restored image according to the following formula:
(The Gamma correction model expression is given as an equation image in the original publication.)
wherein O(x) represents the output pixel value, I(x) represents the input pixel value, I_low is the minimum intensity value of the input image interval, I_high is the maximum intensity value of the input image interval, O_low denotes the desired stretching minimum, and O_high denotes the desired stretching maximum.
CN202211436297.7A 2022-11-16 2022-11-16 Underwater image restoration method based on dark channel prior Pending CN115908178A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211436297.7A CN115908178A (en) 2022-11-16 2022-11-16 Underwater image restoration method based on dark channel prior

Publications (1)

Publication Number Publication Date
CN115908178A true CN115908178A (en) 2023-04-04

Family

ID=86476172

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211436297.7A Pending CN115908178A (en) 2022-11-16 2022-11-16 Underwater image restoration method based on dark channel prior

Country Status (1)

Country Link
CN (1) CN115908178A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118195980A (en) * 2024-05-11 2024-06-14 中国科学院长春光学精密机械与物理研究所 Dark part detail enhancement method based on gray level transformation


Similar Documents

Publication Publication Date Title
CN108986050B (en) Image and video enhancement method based on multi-branch convolutional neural network
CN109859171B (en) Automatic floor defect detection method based on computer vision and deep learning
CN104899845B (en) A kind of more exposure image fusion methods based on the migration of l α β spatial scenes
CN109872285B (en) Retinex low-illumination color image enhancement method based on variational constraint
CN107301623B (en) Traffic image defogging method and system based on dark channel and image segmentation
CN109447917B (en) Remote sensing image haze eliminating method based on content, characteristics and multi-scale model
CN109118446B (en) Underwater image restoration and denoising method
Wang et al. Variational single nighttime image haze removal with a gray haze-line prior
CN105046653B (en) A kind of video raindrop minimizing technology and system
CN110428371A (en) Image defogging method, system, storage medium and electronic equipment based on super-pixel segmentation
CN108133462B (en) Single image restoration method based on gradient field region segmentation
Zhou et al. Multicolor light attenuation modeling for underwater image restoration
CN115731146B (en) Multi-exposure image fusion method based on color gradient histogram feature optical flow estimation
CN112561899A (en) Electric power inspection image identification method
CN108711160B (en) Target segmentation method based on HSI (high speed input/output) enhanced model
Zhang et al. Hierarchical attention aggregation with multi-resolution feature learning for GAN-based underwater image enhancement
CN110807406B (en) Foggy day detection method and device
CN115908178A (en) Underwater image restoration method based on dark channel prior
CN115457551A (en) Leaf damage identification method suitable for small sample condition
CN110717960B (en) Method for generating building rubbish remote sensing image sample
CN116433525A (en) Underwater image defogging method based on edge detection function variation model
CN116563133A (en) Low-illumination color image enhancement method based on simulated exposure and multi-scale fusion
CN114581339A (en) Metal industrial product surface defect data enhancement method
CN114120061A (en) Small target defect detection method and system for power inspection scene
Zhao et al. Single image dehazing based on enhanced generative adversarial network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination