CN107169942B - Underwater image enhancement method based on fish retina mechanism - Google Patents
- Publication number
- CN107169942B (application CN201710573257.XA)
- Authority
- CN
- China
- Prior art keywords
- channel
- value
- receptive field
- calculating
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/646—Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
Abstract
The invention discloses an underwater image enhancement method based on a fish retina mechanism, which simulates the feedback relationship between horizontal cells and cone cells in the fish retina to remove the color cast of an underwater image, and simulates the center-surround antagonism of fish retinal bipolar cells to remove its blur. Throughout the simulation, the lateral inhibition exerted by horizontal cells on bipolar cells in the fish retina is modeled to design a difference-of-Gaussians filter for the bipolar-cell receptive field; meanwhile, a sigmoid curve simulates the activity of interplexiform cells, which continuously release dopamine in the dark to regulate horizontal cells, so that the processed image better conforms to the visual mechanism of fish; finally, gamma conversion simulates the nonlinear processing of brightness information by amacrine cells, forming the central input of the color bipolar cells.
Description
Technical Field
The invention belongs to the technical field of image processing, relates to a color image enhancement technology, and particularly relates to an underwater image enhancement method based on a fish retina mechanism.
Background
With the continuous growth of human exploration capability, underwater images are being captured and used ever more widely. However, back scattering and forward scattering by suspended particles in the water blur the image, and because light waves of different wavelengths attenuate at different rates after light enters water, underwater images exhibit a blue-green color cast. Blurring and color cast together leave the final underwater image insufficiently clear. How to remove their influence and obtain a high-contrast underwater image has therefore become an important problem.
Existing image deblurring methods are mainly based on the dark channel prior assumption and are generally built on the atmospheric scattering physical model. A representative method was proposed by Chiang et al. in 2012; reference: Chiang J Y, Chen Y C. Underwater image enhancement by wavelength compensation and dehazing [J]. IEEE Transactions on Image Processing, 2012, 21(4): 1756-1769. All such methods need the dark channel prior to be satisfied to achieve a good deblurring effect.
Color constancy methods, whether learning-based or static, restore the true color of an object mainly by estimating the color of the scene light source. However, they are chiefly designed for terrestrial scenes and ignore some characteristics of underwater images. Learning-based methods are difficult to apply to underwater image processing because no underwater image database with a standard light source currently exists. Most static methods rest on some gray-world-type assumption, but in a typical underwater image the red band is markedly weaker than the other bands, so the assumption does not hold and images corrected by static methods mostly turn reddish; even some improved static methods work only when the dark channel prior model is satisfied. The fish retina can solve color cast and blur simultaneously, yet no method has so far simulated the fish retina to handle both problems at once.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides an underwater image enhancement method based on a fish retina mechanism.
The technical scheme of the invention is as follows: an underwater image enhancement method based on a fish retina mechanism comprises the following steps:
S1, extracting the color components and the brightness component: extract the red component IR, green component IG and blue component IB at each pixel of the underwater image, and calculate the average luminance component I:
I=(IR+IG+IB)/3
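As an illustrative sketch of step S1 (not part of the claims), the channel extraction and average luminance can be written with NumPy; the two pixels below are the embodiment's example points:

```python
import numpy as np

# A 1x2 RGB image holding the embodiment's two example points.
img = np.array([[[0.659, 0.718, 0.463],
                 [0.275, 0.373, 0.212]]])

I_R, I_G, I_B = img[..., 0], img[..., 1], img[..., 2]  # color components
I = (I_R + I_G + I_B) / 3.0                            # average luminance component
```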
S2, calculating the adjusted means of the RGB three channels: calculate the mean Mr of the brightest red-channel pixels whose values exceed a first threshold and take it as the adjusted mean of the red channel; calculate the ordinary means Mg and Mb of the green and blue channels;
S3, correcting the color cast of the image: divide every pixel of the R, G, B channels by the corresponding adjusted mean to obtain the updated channel values I'R, I'G, I'B, with the specific calculation formula I'λ = Iλ / Mλ, λ ∈ {R, G, B};
and then stretch the updated values back to the brightness of the original image, with the specific calculation formula I''λ = I'λ · mean(I) / mean(I'), λ ∈ {R, G, B},
wherein I' represents the image composed of I'R, I'G, I'B, and mean represents averaging over the image.
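A minimal NumPy sketch of step S3, assuming the adjusted means from step S2 are already known (the values below are the embodiment's); note that with only two pixels the stretch factor differs from the full-image embodiment:

```python
import numpy as np

img = np.array([[[0.659, 0.718, 0.463],
                 [0.275, 0.373, 0.212]]])
M = np.array([0.4231, 0.5407, 0.3367])        # adjusted channel means from step S2

I_prime = img / M                             # divide each channel by its adjusted mean
I = img.mean(axis=2)                          # original average luminance
I_dd = I_prime * (I.mean() / I_prime.mean())  # stretch back to the original brightness
```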
S4, calculating color channel and brightness channel, and sensing field input: obtaining R, G, B three channels updated value (I') for the luminance component I obtained in step S1 and the luminance component obtained in step S3R、I″G、I″B) Respectively filtering to obtain peripheral input f of the receptive fields of the four channelssI、fsR、fsG、fsB;
S5, calculating the center input of the receptive field of the brightness channel:
calculate the mean M of the luminance channel I obtained in step S1; if M is smaller than a second threshold, the center input fcI of the brightness-channel receptive field is adjusted with a sigmoid function, and the I''R, I''G, I''B obtained in step S3 are updated again with the same sigmoid; otherwise, set fcI = I and leave I''R, I''G, I''B unchanged;
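The patent's sigmoid appears only as an equation image in this text; a logistic curve with gain 10 and midpoint 0.5 is an assumption, though it reproduces the embodiment's example values (0.7559 and 0.1053) and illustrates step S5:

```python
import numpy as np

def sigmoid_adjust(x, gain=10.0, mid=0.5):
    """Logistic curve boosting dark values; gain and mid are assumed parameters."""
    return 1.0 / (1.0 + np.exp(-gain * (x - mid)))

I = np.array([0.613, 0.286])       # luminances of the two example points
M = I.mean()                       # 0.4495, below the second threshold of 0.5
f_cI = sigmoid_adjust(I) if M < 0.5 else I
```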
S6, calculating the weight of the receptive-field surround for the color and brightness channels: k represents the surround weight shared by the RGB and brightness channels, and the calculation formula is as follows:
where λ denotes one of the R, G, B channels, Aλ is the maximum value of the corresponding channel, I''λ(x, y) is the pixel value at position (x, y) of the I''R, I''G, I''B processed in step S5, and kMAX is the upper limit of k.
S7, calculating the brightness-channel receptive-field response: substitute the center and surround inputs fcI and fsI calculated in steps S5 and S4 into the difference-of-Gaussians function to compute the receptive-field response of the brightness channel, with the specific calculation formula rodBp(x, y) = fcI(x, y) ⊗ g(m, n; σc) − k · fsI(x, y) ⊗ g(m, n; σs),
wherein ⊗ represents convolution, fcI(x, y) and fsI(x, y) denote the receptive-field center and surround inputs at point (x, y) of the image, g(m, n; σc) and g(m, n; σs) represent two-dimensional Gaussian functions of size m × n, and rodBp is the receptive-field output of the brightness channel.
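The difference-of-Gaussians filter is shown only as an equation image in this text; the sketch below implements the standard center-minus-weighted-surround reading of the description (the exact combination with k is an assumption), using the embodiment's σc = 0.5, σs = 1.5 and 9 × 9 window:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gaussian_kernel(size, sigma):
    """Normalized 2-D Gaussian kernel of shape (size, size)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

def conv2d(img, ker):
    """'Same'-size filtering with edge replication (boundary handling assumed)."""
    pad = ker.shape[0] // 2
    padded = np.pad(img, pad, mode='edge')
    windows = sliding_window_view(padded, ker.shape)
    return np.einsum('ijkl,kl->ij', windows, ker)

def dog_response(f_c, f_s, k, sigma_c=0.5, sigma_s=1.5, size=9):
    """Center input blurred by the narrow Gaussian minus the k-weighted
    surround input blurred by the wide Gaussian."""
    return (conv2d(f_c, gaussian_kernel(size, sigma_c))
            - k * conv2d(f_s, gaussian_kernel(size, sigma_s)))

# On constant inputs the response is simply center - k * surround.
resp = dog_response(np.full((12, 12), 0.7559), np.full((12, 12), 0.7839), k=0.2151)
```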
S8, calculating the center inputs of the RGB three-channel receptive fields: apply gamma conversion to the brightness-channel receptive-field output rodBp obtained in step S7 to obtain rodBp^γ, and multiply it with the I''R, I''G, I''B processed in step S5 to jointly form the center inputs fc of the R, G, B receptive fields, with the specific calculation formulas as follows:
fcR = I''R * rodBp^γ
fcG = I''G * rodBp^γ
fcB = I''B * rodBp^γ
where * denotes element-wise multiplication;
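Step S8 for example point 1 can be checked numerically (γ = 0.5 and the values from the embodiment):

```python
import numpy as np

gamma = 0.5
rodBp = 0.4745                             # luminance-channel response from S7
I_dd = np.array([0.8183, 0.6332, 0.6777])  # I''R, I''G, I''B after step S5
f_c = I_dd * rodBp**gamma                  # (fcR, fcG, fcB) center inputs
```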
S9, calculating and outputting the RGB three-channel receptive-field responses: as in step S7, substitute the center inputs fcR, fcG, fcB obtained in step S8 and the surround inputs fsR, fsG, fsB obtained in step S4 into the difference-of-Gaussians function to compute the receptive-field responses of the R, G, B channels, with the specific calculation formula as follows:
The receptive-field responses BpR, BpG, BpB of the R, G, B channels are the enhanced, defogged channel images; they are recombined into an RGB image as the final output.
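A sketch of the final recombination in step S9; clipping to [0, 1] for display is an assumption, since the patent only states that the three responses are recombined:

```python
import numpy as np

# Example channel responses from the embodiment's two example points.
BpR = np.array([[1.0, 0.022]])
BpG = np.array([[0.778, 0.0184]])
BpB = np.array([[0.865, 0.0376]])
out = np.clip(np.stack([BpR, BpG, BpB], axis=-1), 0.0, 1.0)  # H x W x 3 RGB image
```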
Further, the first threshold value in step S2 is 0.1, and the second threshold value in step S5 is 0.5.
Further, the brightest-part pixels in step S2 are specifically the brightest 50% of the pixels.
Further, the filtering in step S4 is specifically mean filtering.
Further, in step S2, to prevent the adjusted red-channel mean from being too high: if the adjusted red-channel mean Mr is greater than the green-channel mean Mg, the green-channel mean Mg is used as the final adjusted mean of the red channel, namely:
Mr = min(Mr, Mg).
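Step S2 together with this red-channel cap can be sketched as follows; the threshold 0.1 and the brightest-50% fraction are the preferred values stated above, and the demo image values are illustrative only:

```python
import numpy as np

def adjusted_channel_means(img, thresh=0.1, frac=0.5):
    """Red mean over the brightest `frac` of red pixels above `thresh`,
    capped at the green mean; green and blue use ordinary averages."""
    R, G, B = img[..., 0].ravel(), img[..., 1].ravel(), img[..., 2].ravel()
    bright = np.sort(R[R > thresh])[::-1]              # brightest first
    M_r = bright[:max(1, int(len(bright) * frac))].mean()
    M_g, M_b = G.mean(), B.mean()
    return min(M_r, M_g), M_g, M_b

# Tiny synthetic image: red ramp, constant green and blue.
demo = np.zeros((1, 4, 3))
demo[..., 0] = [0.2, 0.4, 0.6, 0.8]
demo[..., 1] = 0.5
demo[..., 2] = 0.3
M_r, M_g, M_b = adjusted_channel_means(demo)
```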
Further, in step S5 the sigmoid adjustment of the brightness-channel receptive-field center input fcI is specifically:
fcI = 1 / (1 + e^(−10(I − 0.5))).
Further, the window width of the mean filter in step S4 is any size greater than 3 × 3 and less than 15 × 15, such as 7 × 7 or 9 × 9.
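A mean filter in this range can be sketched with NumPy sliding windows (edge replication at the borders is an assumption; the patent does not specify boundary handling):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def mean_filter(img, size=9):
    """Box (mean) filter with edge replication; the embodiment's 9x9 window
    lies in the allowed range (above 3x3, below 15x15)."""
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    return sliding_window_view(padded, (size, size)).mean(axis=(2, 3))

smoothed = mean_filter(np.ones((6, 6)), size=3)   # a constant image stays constant
```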
Further, the Gaussian functions of the receptive-field center and surround in step S7 and step S9 are specifically:
wherein σc takes values in the range 0.2-0.8, σs takes the value 3σc, and m and n are integers in the range 5-15.
Further, in the step S8, the value range of γ is specifically 0.4 to 0.6.
The beneficial effects of the invention are as follows: the proposed method simulates the feedback relationship between horizontal cells and cone cells of the fish retina to remove the color cast of an underwater image, and simulates the center-surround antagonism of fish retinal bipolar cells to remove its blur. Throughout the simulation, the lateral inhibition exerted by horizontal cells on bipolar cells in the fish retina is modeled to design a difference-of-Gaussians filter for the bipolar-cell receptive field; meanwhile, a sigmoid curve simulates the activity of interplexiform cells, which continuously release dopamine in the dark to regulate horizontal cells, so that the processed image better conforms to the visual mechanism of fish; finally, gamma conversion simulates the nonlinear processing of brightness information by amacrine cells, forming the central input of the color bipolar cells. The algorithm of the invention can be embedded in a camera as an underwater mode to handle the color cast and blur problems of underwater images.
Drawings
Fig. 1 is a flowchart of underwater image processing according to an embodiment of the present invention.
Fig. 2 is an original image taken underwater with color shift and blur problems.
Fig. 3 shows the corresponding result of the original image after color cast removal.
Fig. 4 is a corresponding image of an original image after two updates.
Fig. 5 is a response image of the luminance channel receptive field.
Fig. 6 is a final output image with color cast and blur removed.
Detailed Description
The embodiments of the present invention will be further described with reference to the accompanying drawings.
Fish adapt strongly to the color cast and blur present underwater, and studying how the fish visual system processes images helps solve the color cast and blur problems in underwater images captured by cameras. On this basis, the invention provides an underwater image enhancement method based on a fish retina mechanism, as shown in fig. 1, comprising the following steps:
for an underwater image with an image size of 768 × 1024 and color cast and blurring problems (as shown in fig. 2), the detailed procedure of the present invention is as follows:
S1, extracting the color components and the brightness component: extract the red component IR, green component IG and blue component IB at each pixel of the underwater image, and calculate the average luminance component I.
Take example point 1 with pixel values (0.659, 0.718, 0.463) and example point 2 with pixel values (0.275, 0.373, 0.212) in the original input image (fig. 2). Their average luminance components I are (0.659+0.718+0.463)/3 = 0.613 and (0.275+0.373+0.212)/3 = 0.286, respectively.
S2, calculating the adjusted means of the RGB channels: calculate the mean of each channel, the R channel using the brightest 50% of pixels with values greater than 0.1. In the original image (fig. 2), the mean of the brightest 50% of R-channel pixels with values greater than 0.1 is 0.4231, so Mr = 0.4231; the G-channel mean is 0.5407, so Mg = 0.5407; the B-channel mean is 0.3367, so Mb = 0.3367. Since the R-channel mean is smaller than the G-channel mean, Mr is unchanged and remains 0.4231.
It should be noted that, unlike the R channel, the means of the G and B channels are calculated in the ordinary way.
S3, correcting color cast of the image: dividing each pixel point of R, G, B three channels by the corresponding mean value thereof, and obtaining the updated value I 'of each channel after the processing is finished'R、I′G、I′BBy updating, the color cast of the image is removed.
Dividing the channel values of the two example points by the corresponding means gives the updated pixel values I'R, I'G, I'B: (0.659/0.4231, 0.718/0.5407, 0.463/0.3367) = (1.5576, 1.3279, 1.3751) and (0.275/0.4231, 0.373/0.5407, 0.212/0.3367) = (0.6500, 0.6898, 0.6296), respectively.
At this point, the mean of the image I' composed of I'R, I'G, I'B is mean(I') = 0.9985, while the mean of the original image luminance is mean(I) = 0.4170; after stretching to the original image brightness, example point 1's I''R, I''G, I''B become (0.6505, 0.5546, 0.5743).
Similarly, example point 2's values stretched to the original image brightness, I''R, I''G, I''B, are (0.2714, 0.2881, 0.2630). Through this update the color cast of the original image is removed; fig. 3 shows the image after color cast removal, where the green cast has been effectively eliminated.
S4, calculating the receptive-field surround inputs of the color and brightness channels: apply mean filtering to the luminance channel I obtained in S1 and the updated three-channel values I''R, I''G, I''B obtained in S3, respectively, to obtain the surround inputs fsI, fsR, fsG, fsB of the four channels' receptive fields.
In this embodiment, a mean filter with a 9 × 9 window is used as an example. Mean-filtering the luminance map I obtained in S1 gives fsI = 0.7839 and 0.1327 at the positions of the two example points. Mean-filtering the updated RGB three-channel image I''R, I''G, I''B obtained in S3 gives (fsR, fsG, fsB) = (0.6597, 0.7313, 0.7448) and (0.2930, 0.1926, 0.1304) at the two example points.
S5, calculating the center input of the brightness-channel receptive field: calculate the mean of the luminance channel; the mean M of the luminance channel I of the original input image (fig. 2) is 0.4170, and since this is less than 0.5, a sigmoid function is used:
to calculate the center input fcI of the brightness-channel receptive field. Substituting the luminances I = 0.613 and 0.286 of the two example points calculated in S1 into the formula gives fcI = 0.7559 and 0.1053, respectively.
At the same time, the I''R, I''G, I''B values calculated in S3, e.g. (0.6505, 0.5546, 0.5743) for example point 1, are processed with the same function; substitution gives updated I''R, I''G, I''B values of (0.8183, 0.6332, 0.6777) and (0.0924, 0.1073, 0.0855) for the two example points.
FIG. 4 shows the image corresponding to I''R, I''G, I''B after this second update.
S6, calculating the weight occupied by the color channel and the brightness channel in the field experience period: k represents the RGB channel and brightness channel sensing field week weight, and the calculation formula is as follows:
Here, to avoid over-enhancing the image, k should be given a reasonable upper limit, set to 0.4 in this embodiment, i.e. kMAX = 0.4; λ denotes the R, G, B channels and Aλ is the maximum value of the corresponding channel; I''λ(x, y) is the pixel value at (x, y) of the I''R, I''G, I''B processed in step S5. Here the maxima Aλ of the RGB channels are 0.9465, 0.8778 and 0.9333 respectively, and the corresponding per-channel results are 0.2103, 0.2066 and 0.1432, giving 0.2151; since 0.2151 < 0.4, the value of k is 0.2151.
S7, calculating the brightness-channel receptive-field response: substitute the center and surround inputs fcI and fsI of the brightness channel calculated in steps S5 and S4 into the difference-of-Gaussians function, with the calculation formula as follows:
in the present embodiment, the value is expressed by σc=0.5,σsWhere the field response values rodB of the luminance channels for the two exemplary points are calculated as 1.5, and m-n-9 for the examplep0.4745 and 0.2608, respectively.
Fig. 5 shows a response plot of the luminance channel receptive field.
S8, calculating the central input of RGB three-channel receptive field: outputs rodB to the brightness channel receptive field obtained in S7pGamma conversion is carried out, and I' obtained by the processing of step S5 is obtainedR、I″G、I″BCentral input f of receptive field forming R, G, B three channelsc。
In the gamma conversion of this embodiment, γ = 0.5, and the center inputs fcR, fcG, fcB of the RGB three-channel receptive fields are calculated. The calculation process and results for example point 1 are:
fcR = I''R * rodBp^γ = 0.8183 * 0.4745^0.5 = 0.5637
fcG = I''G * rodBp^γ = 0.6332 * 0.4745^0.5 = 0.4363
fcB = I''B * rodBp^γ = 0.6777 * 0.4745^0.5 = 0.4669
Similarly, the center inputs fcR, fcG, fcB of the receptive field for example point 2 are (0.0637, 0.0739, 0.0589).
S9, calculating RGB three-channel receptive field response and outputting: in the same step S7, the dual gaussian difference function is used to calculate R, G, B three-channel receptive field response in this embodiment:
Substitute the fcR, fcG, fcB calculated in S8 and the k calculated in S6 into the formula; σc and σs take the same values as in step S7, σc = 0.5, σs = 1.5, with m = n = 9, giving the three-channel receptive-field responses BpR, BpG, BpB. At the positions of the two example points, BpR, BpG, BpB are (1, 0.778, 0.865) and (0.022, 0.0184, 0.0376), respectively. Finally, the calculated result is output.
Fig. 6 shows the final output image, with color shift and blur of the image effectively removed compared to the original image (fig. 2).
The simple example above illustrates the method mainly through individual pixel values; the actual computation is performed over all pixels of the whole image. This walk-through shows the complete process by which the invention simulates the fish retina's removal of color cast and blur.
The embodiments described herein are intended to assist the reader in understanding the principles of the invention and it is to be understood that the scope of the invention is not limited to such specific statements and embodiments. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from the spirit of the invention, and these changes and combinations are within the scope of the invention.
Claims (9)
1. An underwater image enhancement method based on a fish retina mechanism comprises the following steps:
S1, extracting the color components and the luminance component: extract the red component IR, green component IG and blue component IB at each pixel of the underwater image, and calculate the average luminance component I:
I=(IR+IG+IB)/3
S2, calculating the adjusted means of the RGB three channels: calculate the mean Mr of the brightest red-channel pixels whose values exceed a first threshold as the adjusted mean of the red channel, and calculate the means Mg and Mb of the green and blue channels;
S3, correcting the color cast of the image: divide every pixel of the R, G, B channels by the corresponding mean to obtain the updated channel values I'R, I'G, I'B, with the specific calculation formula I'λ = Iλ / Mλ, λ ∈ {R, G, B};
and then stretch the updated values to the brightness of the original image, with the specific calculation formula I''λ = I'λ · mean(I) / mean(I'), λ ∈ {R, G, B},
wherein I' represents the image composed of I'R, I'G, I'B, and mean represents averaging over the image;
s4, calculating color channel and brightness channel, and sensing field input: stretching the luminance component I obtained in step S1 and the R, G, B three-channel updated value obtained in step S3 to the original image luminance I ″R、I″G、I″BRespectively filtering to obtain peripheral input f of the receptive fields of the four channelssI、fsR、fsG、fsB;
S5, calculating the center input of the receptive field of the brightness channel:
calculate the mean M of the average luminance component I obtained in step S1; if M is smaller than a second threshold, the center input fcI of the brightness-channel receptive field is adjusted with a sigmoid function, and the I''R, I''G, I''B obtained in step S3 are updated again with the same sigmoid; otherwise, set fcI = I and leave I''R, I''G, I''B unchanged;
S6, calculating the weight of the receptive-field surround for the color and brightness channels: k represents the surround weight shared by the RGB and brightness channels, with the calculation formula as follows:
wherein λ denotes one of the R, G, B channels, Aλ is the maximum value of the corresponding channel, I''λ(x, y) is the pixel value at position (x, y) of the I''R, I''G, I''B processed in step S5, and kMAX is the upper limit of k;
S7, calculating the brightness-channel receptive-field response: substitute the center and surround inputs fcI and fsI calculated in steps S5 and S4 into the difference-of-Gaussians function to compute the receptive-field response of the brightness channel, with the specific calculation formula rodBp(x, y) = fcI(x, y) ⊗ g(m, n; σc) − k · fsI(x, y) ⊗ g(m, n; σs),
wherein ⊗ represents convolution, fcI(x, y) and fsI(x, y) denote the receptive-field center and surround inputs at point (x, y) of the image, g(m, n; σc) and g(m, n; σs) represent two-dimensional Gaussian functions of size m × n, and rodBp is the receptive-field output of the brightness channel;
S8, calculating the center inputs of the RGB three-channel receptive fields: apply gamma conversion to the brightness-channel receptive-field output rodBp obtained in step S7 to obtain rodBp^γ, and multiply it with the I''R, I''G, I''B processed in step S5 to jointly form the center inputs fc of the R, G, B receptive fields, with the specific calculation formulas as follows:
fcR = I''R * rodBp^γ
fcG = I''G * rodBp^γ
fcB = I''B * rodBp^γ
where * denotes element-wise multiplication;
S9, calculating and outputting the RGB three-channel receptive-field responses: as in step S7, substitute the center inputs fcR, fcG, fcB obtained in step S8 and the surround inputs fsR, fsG, fsB obtained in step S4 into the difference-of-Gaussians function to compute the receptive-field responses of the R, G, B channels, with the specific calculation formula as follows:
The receptive-field responses BpR, BpG, BpB of the R, G, B channels are the enhanced, defogged channel images; they are recombined into an RGB image as the final output.
2. The underwater image enhancement method based on fish retina mechanism of claim 1, wherein said first threshold of step S2 is 0.1, and said second threshold of step S5 is 0.5.
3. The underwater image enhancement method based on the fish retina mechanism of claim 1, wherein the brightest portion of the pixels in step S2 is specifically the brightest 50% of the pixels.
4. The underwater image enhancement method based on the fish retina mechanism of claim 1, wherein the filtering of step S4 is mean filtering.
5. The underwater image enhancement method based on a fish retina mechanism of claim 1, wherein in step S2, to prevent the adjusted red-channel mean from being too high: if the adjusted red-channel mean Mr is greater than the green-channel mean Mg, the green-channel mean Mg is used as the final adjusted mean of the red channel, namely:
Mr = min(Mr, Mg).
7. the underwater image enhancement method based on the fish retina mechanism of claim 4, wherein the window width of the mean filter of step S4 is any size greater than 3 × 3 and less than 15 × 15.
8. The underwater image enhancement method based on a fish retina mechanism of claim 1, wherein the Gaussian functions of the receptive-field center and surround in steps S7 and S9 are specifically:
wherein σc takes values in the range 0.2-0.8, σs takes the value 3σc, and m and n are integers in the range 5-15.
9. The underwater image enhancement method based on the fish retina mechanism according to claim 1, wherein a value range of γ in step S8 is specifically 0.4-0.6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710573257.XA CN107169942B (en) | 2017-07-10 | 2017-07-10 | Underwater image enhancement method based on fish retina mechanism |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710573257.XA CN107169942B (en) | 2017-07-10 | 2017-07-10 | Underwater image enhancement method based on fish retina mechanism |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107169942A CN107169942A (en) | 2017-09-15 |
CN107169942B true CN107169942B (en) | 2020-07-07 |
Family
ID=59818650
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710573257.XA Active CN107169942B (en) | 2017-07-10 | 2017-07-10 | Underwater image enhancement method based on fish retina mechanism |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107169942B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107909617B (en) * | 2017-11-13 | 2020-03-17 | 四川大学 | Light source color estimation method based on nonlinear contrast weighting |
CN108537852B (en) * | 2018-04-17 | 2020-07-07 | 四川大学 | Self-adaptive color constancy method based on image local contrast |
CN109919873B (en) * | 2019-03-07 | 2020-12-29 | 电子科技大学 | Fundus image enhancement method based on image decomposition |
CN111639588A (en) * | 2020-05-28 | 2020-09-08 | 深圳壹账通智能科技有限公司 | Image effect adjusting method, device, computer system and readable storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103955900A (en) * | 2014-05-07 | 2014-07-30 | 电子科技大学 | Image defogging method based on biological vision mechanism |
CN105825483A (en) * | 2016-03-21 | 2016-08-03 | 电子科技大学 | Haze and dust removing method for image |
CN106127823A (en) * | 2016-06-24 | 2016-11-16 | 电子科技大学 | A kind of coloured image dynamic range compression method |
CN106600547A (en) * | 2016-11-17 | 2017-04-26 | 天津大学 | Underwater image restoration method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150063718A1 (en) * | 2013-08-30 | 2015-03-05 | Qualcomm Incorporated | Techniques for enhancing low-light images |
2017
- 2017-07-10: CN application CN201710573257.XA (patent CN107169942B), status: Active
Non-Patent Citations (2)
Title |
---|
"Underwater image enhancement algorithm based on Contourlet transform and multi-scale Retinex"; Shi Dan; Li Qingwu; Fan Xinnan; Huo Guanying; Laser & Optoelectronics Progress; 2010-04-10; Vol. 47, No. 4; pp. 41-45 *
Chen-Jui Chung; Wei-Yao Chou; Chia-Wen Lin; "Under-exposed image enhancement using exposure compensation"; 2013 13th International Conference on ITS Telecommunications (ITST); 2013 *
Also Published As
Publication number | Publication date |
---|---|
CN107169942A (en) | 2017-09-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107169942B (en) | Underwater image enhancement method based on fish retina mechanism | |
CN109658341B (en) | Method and device for enhancing image contrast | |
CN105046663B (en) | Adaptive low-illumination image enhancement method simulating human visual perception | |
US11625815B2 (en) | Image processor and method | |
CN110197463B (en) | High dynamic range image tone mapping method and system based on deep learning | |
JP2003008988A (en) | Method and apparatus for the removal of flash artifacts | |
CN103295194A (en) | Brightness-controllable and detail-preserving tone mapping method | |
CN108022223B (en) | Tone mapping method based on block processing and fusion of logarithmic mapping functions | |
CN108416745A (en) | Adaptive image defogging enhancement method with color constancy | |
TWI520101B (en) | Method for making up skin tone of a human body in an image, device for making up skin tone of a human body in an image, method for adjusting skin tone luminance of a human body in an image, and device for adjusting skin tone luminance of a human body in | |
CN109274948B (en) | Image color correction method, device, storage medium and computer equipment | |
CN105744118B (en) | Video frame-adaptive video enhancement method and system | |
CN110675351B (en) | Marine image processing method based on global brightness adaptive equalization | |
CN104091307A (en) | Foggy-day image rapid restoration method based on feedback mean filtering | |
CN106485674A (en) | Low-light image enhancement method based on integration technology | |
CN104021531A (en) | Improved method for enhancing dark-environment images based on single-scale Retinex | |
CN110009574B (en) | Method for reversely generating high dynamic range image from low dynamic range image | |
Wang et al. | End-to-end exposure fusion using convolutional neural network | |
Liu et al. | Color enhancement using global parameters and local features learning | |
CN110766622A (en) | Underwater image enhancement method based on brightness discrimination and Gamma smoothing | |
CN107358578B (en) | Method and device for processing unevenly lit ("yin-yang") faces | |
Chiang et al. | Color image enhancement with saturation adjustment method | |
CN111563854A (en) | Particle swarm optimization method for underwater image enhancement processing | |
CN106327439A (en) | Rapid fog and haze image sharpening method | |
Chiang et al. | Saturation adjustment method based on human vision with YCbCr color model characteristics and luminance changes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||