CN109961415A - A kind of adaptive gain underwater picture Enhancement Method based on HSI space optics imaging model - Google Patents
- Publication number: CN109961415A (application number CN201910232365.XA)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G06T5/00 Image enhancement or restoration → G06T5/70 Denoising; smoothing
- G06T2207/10 Image acquisition modality → G06T2207/10024 Color image
- G06T2207/20 Special algorithmic details → G06T2207/20024 Filtering details
- G06T2207/20 Special algorithmic details → G06T2207/20172 Image enhancement details; G06T2207/20192 Edge enhancement; edge preservation
Abstract
The invention discloses an adaptive-gain underwater image enhancement method based on an HSI-space optical imaging model, comprising: estimating the global background illumination vector; estimating the background illumination covering-layer vector; computing the denoised restored image corresponding to the original observed image; converting the original color underwater image into a grayscale image; converting that grayscale image into an edge-preserving grayscale image and extracting the corresponding gradient image; fusing the denoised restored image with the edge-preserving grayscale image; converting the fused color image to another color space so as to separate the luminance information of the image from its color information; operating on the luminance information to obtain an enhanced image whose gradient information is enriched relative to the original image; converting the enhanced image back; and quantitatively evaluating the enhanced image. The 4-direction Sobel edge detector used in the invention makes full use of the rich gradient information of the image itself to realize the enhancement, so that the processed image has improved visual quality and rich texture information.
Description
Technical field
The invention belongs to the field of image information processing, and in particular relates to an adaptive-gain underwater image enhancement method based on an HSI-space optical imaging model.
Background technique
Underwater target-detection images often suffer from special conditions such as non-uniform brightness, low signal-to-noise ratio and low contrast, so underwater images degrade severely. Common enhancement algorithms for underwater target-detection images fall into two broad classes: modifying the illumination of the underwater image, and suppressing image contrast to preserve image edges; both inevitably reduce the visual quality of the detection image. Traditional contrast-based enhancement algorithms have significant limitations: while image contrast is improved, information loss may be overlooked, color distortion may be introduced, and even new noise may appear. Owing to the optical characteristics of the complex underwater environment and the influence of suspended particles, plankton and water flow, research that directly transplants atmospheric-scattering target imaging models to underwater detection-image enhancement still falls clearly short.
Summary of the invention
To solve the above problems, the invention proposes an adaptive-gain underwater image enhancement method based on an HSI-space optical imaging model, comprising the following steps:
Step 1: input the degraded image and estimate the global background illumination vector according to the underwater-target optical imaging model;
Step 2: estimate the background illumination covering-layer vector by applying a mixed median filtering method;
Step 3: according to the underwater-target imaging physics model, compute the denoised restored image corresponding to the original observed image;
Step 4: convert the degraded image input in step 1 into a grayscale image by a linear transformation;
Step 5: apply a local Laplacian filter to the grayscale image of step 4 to convert it into an edge-preserving grayscale image, and apply a 4-direction Sobel edge detector to extract the corresponding image with rich gradient information;
Step 6: fuse the denoised restored image of step 3 with the edge-preserving grayscale image to obtain the fused color image;
Step 7: convert the fused color image from the RGB color space to the HSI color space, separating the luminance information of the image from its color information, and obtain the luminance component of the converted image;
Step 8: apply the generalized bounded logarithmic multiplication based on gradient-domain adaptive gain to the luminance information of the converted image of step 7, obtaining an enhanced image whose gradient information is enriched relative to the original image;
Step 9: convert the enhanced image of step 8 back from the HSI color space to the RGB color space;
Step 10: output the RGB enhanced image.
Further, the underwater-target optical imaging model in step 1 is:
I(i, j) = t(i, j)J(i, j) + (1 − t(i, j))A
where I(i, j) and J(i, j) are respectively the original observed image and the denoised restored image of the underwater target at pixel position (i, j); A is the global background illumination vector; t(i, j) ∈ [0, 1] is the medium transmission; t(i, j)J(i, j) is the direct attenuation term, and (1 − t(i, j))A is the background illumination covering layer, denoted V(i, j); the original observed image I(i, j) is thus expressed as:
I(i, j) = t(i, j)J(i, j) + V(i, j)
The steps for estimating the global background illumination vector A are:
(1) divide the original observed image evenly into four rectangular sub-image blocks in equal proportion to the original image size;
(2) compute, for each sub-image block, the difference between the pixel mean and the standard deviation;
(3) select the sub-image block with the largest difference and subdivide it again into four rectangular sub-image blocks;
(4) repeat steps (2) and (3) until the size of the sub-image block is smaller than a preset threshold;
(5) in the finally selected sub-image block, select the mean brightness vector corresponding to the minimum distance as the global background illumination estimate A.
Further, step 2 is specifically:
Let W be the minimum color component of the original observed image I(i, j), expressed as W(i, j) = min_c(I(i, j)); the mixed median-filtered image is M(i, j) = ξ(W(i, j)), where ξ denotes the filtering operation;
the background illumination covering-layer vector V(i, j) is expressed as:
V(i, j) = ρ max(min(O, W(i, j)), 0)
where O = M(i, j) − ξ(|W(i, j) − M(i, j)|), and ρ ∈ [0, 1] is the visibility recovery strength scale factor.
Further, the specific formula for computing, in step 3, the denoised restored image corresponding to the original observed image is:
Further, the specific conversion formula of step 4 is:
where I_R is the R component of the original image I(i, j), I_G its G component, and I_B its B component.
Further, step 5 is specifically:
The edge-preserving grayscale image is expressed as:
where f_LLF denotes the local Laplacian filter, and the parameters τ and ν denote the image smoothing factor and the edge amplitude, respectively;
the specific method for obtaining the image with rich gradient information is to regulate the mean of the gradient-domain adaptive gain function λ(i, j) so that it lies in a suitable range; the formula for λ(i, j) is:
where (i, j) is the pixel p, and a and b are adjustable positive parameters; g_n(i, j) is the normalized gradient image, expressed as:
where δ_1 and δ_2 are small perturbations ensuring g_n(i, j) ∈ (0, 1);
the gradient image g(i, j) of pixel (i, j) is expressed as:
where G_k(i, j) are the gradient vectors of pixel (i, j) in the 4 directions, given by:
where z(i, j) is the gray value of pixel (i, j), and Z(i, j) is expressed as:
and S_k (k = 1, 2, 3, 4) are the Sobel edge-detector masks in the 4 directions, defined as:
S1 = (−1 0 1; −2 0 2; −1 0 1)
S2 = (0 1 2; −1 0 1; −2 −1 0)
S3 = (1 2 1; 0 0 0; −1 −2 −1)
S4 = (2 1 0; 1 0 −1; 0 −1 −2).
Further, in step 6 the specific formula for fusing the denoised restored image with the edge-preserving grayscale image to obtain the fused color image is:
Q_c(i, j) = f_Blending(G, I_c(i, j))
where Q_c is the fused color image and f_Blending denotes the blending fusion operation.
Further, the specific formulas of step 7 for separating the luminance information of the image from its color information are:
where I_n, S_n and H_n are the normalized I, S and H values in the HSI space, and Q_Rn, Q_Gn and Q_Bn are the normalized R, G and B components of the RGB image Q.
Further, the generalized bounded multiplication of step 8 is specifically expressed as:
where I_n(i, j) is the luminance component obtained in step 7, and I′_n(i, j) is the enhanced luminance-component image with gradient information enriched relative to the original image, obtained after the generalized bounded logarithmic multiplication based on gradient-domain adaptive gain.
Further, a step 9A is included between step 9 and step 10, specifically:
quantitatively evaluate the RGB enhanced image in terms of mean, contrast, information entropy and color scale; the index functions of the quantitative evaluation are:
Mean: where μ_R, μ_G and μ_B are respectively the means of the RGB three-channel color components;
Contrast: where P(i, j; d, θ_k) is the gray-level co-occurrence matrix; θ_k is the angle between pixels, θ_k = (k − 1) × 45°, k = 1, 2, 3, 4;
Information entropy:
Color scale: where α = R − G, β = (R + G)/2 − B; μ_α, μ_β and σ_α, σ_β are respectively the means and standard deviations of α and β;
adjust a and b in step 5 according to the evaluation results, and repeat steps 5 to 10 until the relevant indices of the quantitative evaluation are satisfied.
The beneficial effects achieved by the invention are as follows:
The method of the invention can use only the information of a single underwater target-detection image with non-uniform brightness, low signal-to-noise ratio and low contrast: it performs denoising enhancement based on the target imaging model in the HSI color space, then uses the edge-preserving grayscale image to extract rich gradient information for adaptive gain, and finally evaluates the adaptive-gradient-gain enhanced image based on the target imaging model comprehensively and quantitatively with the mean, contrast, information-entropy and color-scale indices. The 4-direction Sobel edge detector used in the invention makes full use of the rich gradient information of the image itself to realize the enhancement, so that the processed image has improved visual quality and rich texture information.
Description of the drawings
Fig. 1 is the flow chart of the invention.
Specific embodiments
The invention is further described below with reference to the accompanying drawings. The following embodiments are only used to clearly illustrate the technical solution of the invention, and are not intended to limit its protection scope.
Referring to Fig. 1, the invention is an adaptive-gain underwater image enhancement method based on an HSI-space optical imaging model; the overall flow is shown in Fig. 1, and the specific implementation steps are as follows:
Step 1: input the degraded image and estimate the global background illumination vector according to the underwater-target optical imaging model.
The underwater-target optical imaging model can be stated as:
I(i, j) = t(i, j)J(i, j) + (1 − t(i, j))A
where I(i, j) and J(i, j) are respectively the original observed image and the restored image of the underwater target at pixel position (i, j); A is the global background illumination vector; t(i, j) ∈ [0, 1] is the medium transmission, indicating the fraction of the incident light flux that is transmitted. The transmission t(i, j) = e^{−θd(p)} depends on two factors: the distance d(p) between the target and the camera lens, and the attenuation coefficient θ.
In formula (1), t(i, j)J(i, j) is the direct attenuation term, and (1 − t(i, j))A is the background illumination covering layer, denoted V(i, j). The purpose of image denoising is precisely to compute the restored image J(i, j) from the original observed image I(i, j). To compute the restored image J(i, j), the global background illumination vector A and the background illumination covering layer V(i, j) must first be computed. The original observed image I(i, j) can be expressed as:
I(i, j) = t(i, j)J(i, j) + V(i, j)
The steps for estimating the global background illumination vector A are:
(1) divide the original observed image evenly into four rectangular sub-image blocks in equal proportion to the original image size;
(2) compute, for each sub-image block, the difference between the pixel mean and the standard deviation;
(3) select the sub-image block with the largest difference and subdivide it again into four rectangular sub-image blocks;
(4) repeat steps (2) and (3) until the size of the sub-image block is smaller than a preset threshold;
(5) in the finally selected sub-image block, select the mean brightness vector corresponding to the minimum distance as the global background illumination estimate A, where the brightness is taken as the average of the RGB three channels of the original observed image.
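The quadtree steps (1)-(5) above can be sketched as follows. The "difference between the pixel mean and the standard deviation" is read here as the score mean − std, and since the patent's final distance formula is not reproduced, the winning block's mean color is returned as A; both choices are assumptions for illustration:

```python
import numpy as np

def estimate_background_light(img, min_size=32):
    """Quadtree estimate of the global background illumination vector A.

    img: H x W x 3 float array in [0, 1].  Repeatedly split the current
    block into four quadrants, keep the quadrant with the largest
    (mean - std) score, and stop once the block is smaller than
    min_size; A is then the mean colour of the final block.
    """
    gray = img.mean(axis=2)  # per-pixel brightness: mean over RGB channels
    y0, y1, x0, x1 = 0, img.shape[0], 0, img.shape[1]
    while min(y1 - y0, x1 - x0) > min_size:
        ym, xm = (y0 + y1) // 2, (x0 + x1) // 2
        quads = [(y0, ym, x0, xm), (y0, ym, xm, x1),
                 (ym, y1, x0, xm), (ym, y1, xm, x1)]
        scores = [gray[a:b, c:d].mean() - gray[a:b, c:d].std()
                  for a, b, c, d in quads]
        y0, y1, x0, x1 = quads[int(np.argmax(scores))]
    return img[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
```

A bright, low-variance region (typically open water) wins every split, which is the intended behaviour of mean − std scoring.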
Step 2: estimate the background illumination covering-layer vector by applying the mixed median filtering method.
Let W be the minimum color component of the original observed image I(i, j), expressed as W(i, j) = min_c(I(i, j)); the mixed median-filtered image is M(i, j) = ξ(W(i, j)), where ξ denotes the filtering operation and c indexes the RGB color channels. The background illumination covering-layer vector V(i, j) can be expressed as:
V(i, j) = ρ max(min(O, W(i, j)), 0)
where O = M(i, j) − ξ(|W(i, j) − M(i, j)|), and ρ ∈ [0, 1] is the visibility recovery strength scale factor.
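The formulas of Step 2 can be sketched directly. The filter window size and the value of ρ are illustrative choices not fixed by the patent:

```python
import numpy as np
from scipy.ndimage import median_filter

def estimate_illumination_layer(img, size=15, rho=0.95):
    """Mixed-median-filter estimate of the background illumination
    covering layer V, following the formulas of Step 2.

    img: H x W x 3 float array in [0, 1].
    """
    W = img.min(axis=2)                               # W = min_c(I)
    M = median_filter(W, size=size)                   # M = xi(W)
    O = M - median_filter(np.abs(W - M), size=size)   # O = M - xi(|W - M|)
    return rho * np.maximum(np.minimum(O, W), 0.0)    # V = rho*max(min(O,W),0)
```

On smooth regions the local-deviation term vanishes and V follows ρ·W; around fine structure the subtraction keeps V below the structure, which is what makes this a robust background estimate.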
Step 3: according to the underwater-target imaging physics model, compute the denoised restored image J(i, j) corresponding to the original observed image I(i, j).
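The patent's own recovery formula is not printed in the text, but the model of Step 1 can be inverted algebraically: from V = (1 − t)A one gets t = 1 − V/A, and from I = tJ + V one gets J = (I − V)/t. The sketch below is one such consistent inversion (the scalar use of mean(A) and the floor t_min are assumptions to keep the division stable):

```python
import numpy as np

def recover_scene(img, A, V, t_min=0.1):
    """Invert the stated imaging model I = t*J + V with V = (1 - t)*A.

    img: H x W x 3 observed image in [0, 1]; A: length-3 background
    light vector; V: H x W covering-layer estimate from Step 2.
    """
    # t = 1 - V / A, using the mean of A as a scalar background level
    t = np.clip(1.0 - V / max(float(np.mean(A)), 1e-6), t_min, 1.0)
    # J = (I - V) / t, clipped back to the valid range
    return np.clip((img - V[..., None]) / t[..., None], 0.0, 1.0)
```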
Step 4: convert the degraded image input in step 1 into a grayscale image by a linear transformation:
where I_R is the R component of the original image I(i, j), I_G its G component, and I_B its B component.
Step 5: apply the local Laplacian filter to the grayscale image to convert it into an edge-preserving grayscale image, and apply the 4-direction Sobel edge detector to extract the corresponding gradient image.
The edge-preserving grayscale image can be expressed as:
where f_LLF denotes the local Laplacian filter, and the parameters τ and ν denote the image smoothing factor and the edge amplitude, respectively.
The Sobel edge-detector masks in the four directions are defined as:
S1 = (−1 0 1; −2 0 2; −1 0 1)
S2 = (0 1 2; −1 0 1; −2 −1 0)
S3 = (1 2 1; 0 0 0; −1 −2 −1)
S4 = (2 1 0; 1 0 −1; 0 −1 −2)
Let Z(i, j) be defined as the 3 × 3 neighborhood of pixel (i, j); then Z(i, j) can be expressed as:
where z(i, j) is defined as the gray value of pixel (i, j).
The gradient vectors of pixel (i, j) in the 4 directions can be defined as:
The gradient image g(i, j) of pixel (i, j) can be defined as:
The normalized gradient image g_n(i, j) is:
where δ_1 and δ_2 are small perturbations ensuring g_n(i, j) ∈ (0, 1).
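The four directional masks and the normalisation above can be sketched as follows. The text does not print how the four responses are combined into g(i, j), so the magnitude sum used here is an assumption, as are the concrete values of δ1 and δ2:

```python
import numpy as np
from scipy.ndimage import convolve

# Sobel masks for the four directions (0, 45, 90, 135 degrees), as listed.
S = [np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]),
     np.array([[ 0, 1, 2], [-1, 0, 1], [-2, -1, 0]]),
     np.array([[ 1, 2, 1], [ 0, 0, 0], [-1, -2, -1]]),
     np.array([[ 2, 1, 0], [ 1, 0, -1], [ 0, -1, -2]])]

def gradient_image(gray, d1=1e-6, d2=2e-6):
    """4-direction Sobel gradient image g and its normalisation g_n.

    gray: 2-D float array.  g sums the absolute responses of the four
    masks; g_n rescales g into the open interval (0, 1) using the small
    perturbations d1 < d2.
    """
    g = sum(np.abs(convolve(gray, s, mode='nearest')) for s in S)
    gn = (g - g.min() + d1) / (g.max() - g.min() + d2)
    return g, gn
```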
To obtain the image with rich gradient information, the adaptive gain function λ(i, j) at pixel (i, j) can be stated as:
where a and b are adjustable positive parameters, ensuring that the mean of the adaptive gain function λ(i, j) lies in a suitable range.
Step 6: fuse the denoised restored image with the edge-preserving grayscale image to obtain the fused color image Q_c:
Q_c(i, j) = f_Blending(G, I_c(i, j))
where f_Blending denotes the blending fusion operation, I_c denotes the denoised restored image in the RGB color space, and Q_c the image obtained by pixel-wise fusion.
Step 7: convert the fused color image Q_c from the RGB color space to the HSI color space, separating the luminance information of the image from its color information:
where Q_Rn, Q_Gn and Q_Bn are the normalized R, G and B components of the RGB image Q, and I_n, S_n and H_n are the normalized I, S and H values in the HSI space.
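The conversion formulas themselves are not printed in the text; the sketch below uses the textbook RGB → HSI conversion, which matches the later remark that the I component is the mean of the three channels:

```python
import numpy as np

def rgb_to_hsi(rgb, eps=1e-9):
    """Standard RGB -> HSI conversion (an assumption: the patent's own
    formulas are not reproduced).

    rgb: H x W x 3 float array in [0, 1]; returns (Hn, Sn, In), each
    normalised to [0, 1].
    """
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    I = (R + G + B) / 3.0                               # intensity = channel mean
    S = 1.0 - np.minimum(np.minimum(R, G), B) / (I + eps)
    num = 0.5 * ((R - G) + (R - B))
    den = np.sqrt((R - G) ** 2 + (R - B) * (G - B)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    H = np.where(B > G, 2.0 * np.pi - theta, theta) / (2.0 * np.pi)
    return H, S, I
```

Processing only the I channel and leaving H and S untouched is what keeps the enhancement from shifting hue, which is the stated motivation for working in HSI space.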
Step 8: apply the generalized bounded logarithmic multiplication based on gradient-domain adaptive gain to the luminance information of the image, obtaining an enhanced image whose gradient information is enriched relative to the original image.
The luminance component I_n(i, j) obtained in step 7 undergoes the generalized bounded logarithmic multiplication based on adaptive gradient gain, yielding the enhanced luminance-component image I′_n(i, j) with gradient information enriched relative to the original image.
The generalized bounded multiplication can be expressed as:
Step 9: convert the enhanced image back from the HSI color space to the RGB color space, for display and image-quality analysis.
Step 10: quantitatively evaluate the enhanced image in terms of mean, contrast, information entropy and color scale.
The related quantitative evaluation index functions are expressed as:
Mean: where μ_R, μ_G and μ_B are respectively the means of the RGB three-channel color components.
Contrast: where P(i, j; d, θ_k) is the gray-level co-occurrence matrix; θ_k is the angle between pixels, θ_k = (k − 1) × 45°, k = 1, 2, 3, 4; d is the distance between pixels (d = 1 here), likewise below.
Information entropy:
Color scale: where α = R − G, β = (R + G)/2 − B; μ_α, μ_β and σ_α, σ_β are respectively the means and standard deviations of α and β, and R, G, B denote the values of the RGB color channels.
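Three of the four indices can be sketched from the quantities named above. The exact combination formulas are not printed, so the 256-bin entropy and the Hasler-Süsstrunk-style colourfulness combination are assumptions; the GLCM contrast term is omitted from this sketch:

```python
import numpy as np

def image_metrics(rgb):
    """Mean, information entropy and colour-scale metrics of an RGB image.

    rgb: H x W x 3 float array in [0, 1].  Uses alpha = R - G and
    beta = (R + G)/2 - B as defined in the text.
    """
    mean = rgb.reshape(-1, 3).mean(axis=0)          # (mu_R, mu_G, mu_B)
    gray = np.clip(rgb.mean(axis=2) * 255, 0, 255).astype(np.uint8)
    p = np.bincount(gray.ravel(), minlength=256) / gray.size
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # Shannon entropy, bits
    alpha = rgb[..., 0] - rgb[..., 1]
    beta = (rgb[..., 0] + rgb[..., 1]) / 2.0 - rgb[..., 2]
    colorfulness = (np.hypot(alpha.std(), beta.std())
                    + 0.3 * np.hypot(alpha.mean(), beta.mean()))
    return mean, entropy, colorfulness
```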
Adjust a and b in step 5 according to the evaluation results, and repeat steps 5 to 10 until the relevant indices of the quantitative evaluation are satisfied.
Remarks on the adaptive-gain underwater image enhancement method based on the HSI-space optical imaging model:
(1) The luminance component in the HSI color space is the mean of the three color channels of the RGB color space, and is therefore insensitive to noise signals.
(2) Applying the local Laplacian filter to the grayscale image controls its smoothness and edge amplitude, fully preserving the local edges of the grayscale image.
(3) Fusing the denoised restored image with the edge-preserving grayscale image yields a color image that is on the one hand rich in image detail features, and on the other hand more faithful to the original image.
(4) In practical applications, the parameters a and b in the adaptive gain function λ(i, j) can be adjusted according to the contrast-enhancement requirement, producing enhanced images of different contrast.
(5) When enhancing contrast, factors such as image information entropy and color scale should also be considered comprehensively, so as to improve the overall visual effect of the image.
The above are merely preferred embodiments of the invention and are not intended to limit it; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the invention shall be included in its protection scope.
Claims (10)
1. An adaptive-gain underwater image enhancement method based on an HSI-space optical imaging model, characterized by comprising the following steps:
Step 1: inputting the degraded image and estimating the global background illumination vector according to the underwater-target optical imaging model;
Step 2: estimating the background illumination covering-layer vector by applying a mixed median filtering method;
Step 3: computing, according to the underwater-target imaging physics model, the denoised restored image corresponding to the original observed image;
Step 4: converting the degraded image input in step 1 into a grayscale image by a linear transformation;
Step 5: applying a local Laplacian filter to the grayscale image of step 4 to convert it into an edge-preserving grayscale image, and applying a 4-direction Sobel edge detector to extract the corresponding image with rich gradient information;
Step 6: fusing the denoised restored image of step 3 with the edge-preserving grayscale image to obtain the fused color image;
Step 7: converting the fused color image from the RGB color space to the HSI color space, separating the luminance information of the image from its color information, and obtaining the luminance component of the converted image;
Step 8: applying the generalized bounded logarithmic multiplication based on gradient-domain adaptive gain to the luminance information of the converted image of step 7, obtaining an enhanced image whose gradient information is enriched relative to the original image;
Step 9: converting the enhanced image of step 8 back from the HSI color space to the RGB color space;
Step 10: outputting the RGB enhanced image.
2. The adaptive-gain underwater image enhancement method based on an HSI-space optical imaging model according to claim 1, characterized in that the underwater-target optical imaging model in step 1 is:
I(i, j) = t(i, j)J(i, j) + (1 − t(i, j))A
where I(i, j) and J(i, j) are respectively the original observed image and the denoised restored image of the underwater target at pixel position (i, j); A is the global background illumination vector; t(i, j) ∈ [0, 1] is the medium transmission; t(i, j)J(i, j) is the direct attenuation term, and (1 − t(i, j))A is the background illumination covering layer, denoted V(i, j); the original observed image I(i, j) is thus expressed as:
I(i, j) = t(i, j)J(i, j) + V(i, j)
and the steps for estimating the global background illumination vector A are:
(1) dividing the original observed image evenly into four rectangular sub-image blocks in equal proportion to the original image size;
(2) computing, for each sub-image block, the difference between the pixel mean and the standard deviation;
(3) selecting the sub-image block with the largest difference and subdividing it again into four rectangular sub-image blocks;
(4) repeating steps (2) and (3) until the size of the sub-image block is smaller than a preset threshold;
(5) in the finally selected sub-image block, selecting the mean brightness vector corresponding to the minimum distance as the global background illumination estimate A.
3. The adaptive-gain underwater image enhancement method based on an HSI-space optical imaging model according to claim 2, characterized in that step 2 is specifically:
letting W be the minimum color component of the original observed image I(i, j), expressed as W(i, j) = min_c(I(i, j)); the mixed median-filtered image is M(i, j) = ξ(W(i, j)), where ξ denotes the filtering operation;
the background illumination covering-layer vector V(i, j) is expressed as:
V(i, j) = ρ max(min(O, W(i, j)), 0)
where O = M(i, j) − ξ(|W(i, j) − M(i, j)|), and ρ ∈ [0, 1] is the visibility recovery strength scale factor.
4. The adaptive-gain underwater image enhancement method based on an HSI-space optical imaging model according to claim 3, characterized in that the specific formula for computing, in step 3, the denoised restored image corresponding to the original observed image is:
5. The adaptive-gain underwater image enhancement method based on an HSI-space optical imaging model according to claim 4, characterized in that the specific conversion formula of step 4 is:
where I_R is the R component of the original image I(i, j), I_G its G component, and I_B its B component.
6. The adaptive-gain underwater image enhancement method based on an HSI-space optical imaging model according to claim 5, characterized in that step 5 is specifically:
the edge-preserving grayscale image is expressed as:
where f_LLF denotes the local Laplacian filter, and the parameters τ and ν denote the image smoothing factor and the edge amplitude, respectively;
the specific method for obtaining the image with rich gradient information is to regulate the mean of the gradient-domain adaptive gain function λ(i, j) so that it lies in a suitable range; the formula for λ(i, j) is:
where (i, j) is the pixel p, and a and b are adjustable positive parameters; g_n(i, j) is the normalized gradient image, expressed as:
where δ_1 and δ_2 are small perturbations ensuring g_n(i, j) ∈ (0, 1);
the gradient image g(i, j) of pixel (i, j) is expressed as:
where G_k(i, j) are the gradient vectors of pixel (i, j) in the 4 directions, given by:
where z(i, j) is the gray value of pixel (i, j), and Z(i, j) is expressed as:
and S_k (k = 1, 2, 3, 4) are the Sobel edge-detector masks in the 4 directions, defined as:
S1 = (−1 0 1; −2 0 2; −1 0 1)
S2 = (0 1 2; −1 0 1; −2 −1 0)
S3 = (1 2 1; 0 0 0; −1 −2 −1)
S4 = (2 1 0; 1 0 −1; 0 −1 −2).
7. The adaptive-gain underwater image enhancement method based on an HSI-space optical imaging model according to claim 6, characterized in that in step 6 the specific formula for fusing the denoised restored image with the edge-preserving grayscale image to obtain the fused color image is:
Q_c(i, j) = f_Blending(G, I_c(i, j))
where Q_c is the fused color image and f_Blending denotes the blending fusion operation.
8. The adaptive-gain underwater image enhancement method based on an HSI-space optical imaging model according to claim 7, characterized in that the specific formulas of step 7 for separating the luminance information of the image from its color information are:
where I_n, S_n and H_n are the normalized I, S and H values in the HSI space, and Q_Rn, Q_Gn and Q_Bn are the normalized R, G and B components of the RGB image Q.
9. The adaptive-gain underwater image enhancement method based on an HSI-space optical imaging model according to claim 8, characterized in that the generalized bounded multiplication of step 8 is specifically expressed as:
where I_n(i, j) is the luminance component obtained in step 7, and I′_n(i, j) is the enhanced luminance-component image with gradient information enriched relative to the original image, obtained after the generalized bounded logarithmic multiplication based on gradient-domain adaptive gain.
10. The adaptive-gain underwater image enhancement method based on the HSI-space optical imaging model according to claim 9, characterized in that a step 9A is further included between step 9 and step 10, specifically:
The RGB enhanced image is evaluated quantitatively in terms of mean, contrast, information entropy and colorfulness; the relevant quantitative evaluation indices are expressed as:
Mean: wherein μR, μG and μB are the means of the R, G and B color components, respectively;
Contrast: wherein P(i, j; d, θk) is the gray-level co-occurrence matrix, and θk is the angle between pixels, θk = (k-1) × 45°, k = 1, 2, 3, 4;
Information entropy:
Colorfulness: wherein α = R - G, β = (R+G)/2 - B, and μα, μβ and σα, σβ are the means and standard deviations of α and β, respectively.
According to the evaluation results, adjust a and b in step 5 and repeat steps 5 to 10 until the relevant quantitative evaluation indices are satisfied.
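The index formulas were lost in extraction; only the α, β definitions survive. The sketch below uses the channel-average mean, the Shannon entropy of the gray image, and the Hasler-Süsstrunk colorfulness (which matches the surviving α = R-G, β = (R+G)/2 - B definitions) as stand-ins; the GLCM-based contrast is omitted for brevity.

```python
import numpy as np

def quality_metrics(rgb):
    """Stand-in versions of the claim-10 indices (the exact formulas are
    not visible in the extracted text): channel-average mean, Shannon
    entropy of the gray image, and Hasler-Susstrunk colorfulness."""
    r, g, b = (rgb[..., c].astype(float) for c in range(3))
    mean = (r.mean() + g.mean() + b.mean()) / 3.0
    # entropy of the 8-bit gray-level histogram
    gray = np.clip((r + g + b) / 3.0, 0, 255).astype(np.uint8)
    hist = np.bincount(gray.ravel(), minlength=256) / gray.size
    entropy = -np.sum(hist[hist > 0] * np.log2(hist[hist > 0]))
    # opponent-color components from the claim: alpha = R-G, beta = (R+G)/2 - B
    alpha, beta = r - g, (r + g) / 2.0 - b
    colorfulness = (np.hypot(alpha.std(), beta.std())
                    + 0.3 * np.hypot(alpha.mean(), beta.mean()))
    return mean, entropy, colorfulness
```

A uniform gray image scores zero on both entropy and colorfulness, which is why these indices reward enhancement that adds detail and restores color.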
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910232365.XA CN109961415A (en) | 2019-03-26 | 2019-03-26 | A kind of adaptive gain underwater picture Enhancement Method based on HSI space optics imaging model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109961415A true CN109961415A (en) | 2019-07-02 |
Family
ID=67024966
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910232365.XA Withdrawn CN109961415A (en) | 2019-03-26 | 2019-03-26 | A kind of adaptive gain underwater picture Enhancement Method based on HSI space optics imaging model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109961415A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110533614A (en) * | 2019-08-28 | 2019-12-03 | 哈尔滨工程大学 | A kind of underwater picture Enhancement Method of combination frequency domain and airspace |
CN111223060A (en) * | 2020-01-05 | 2020-06-02 | 西安电子科技大学 | Image processing method based on self-adaptive PLIP model |
CN111223060B (en) * | 2020-01-05 | 2021-01-05 | 西安电子科技大学 | Image processing method based on self-adaptive PLIP model |
CN114202491A (en) * | 2021-12-08 | 2022-03-18 | 深圳市研润科技有限公司 | Method and system for enhancing optical image |
CN114612340A (en) * | 2022-03-25 | 2022-06-10 | 郑骐骥 | Image data denoising method and system based on step-by-step contrast enhancement |
CN114972346A (en) * | 2022-07-29 | 2022-08-30 | 山东通达盛石材有限公司 | Stone identification method based on computer vision |
CN114972346B (en) * | 2022-07-29 | 2022-11-04 | 山东通达盛石材有限公司 | Stone identification method based on computer vision |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109961415A (en) | A kind of adaptive gain underwater picture Enhancement Method based on HSI space optics imaging model | |
Wang et al. | Underwater image restoration via maximum attenuation identification | |
Fattal | Single image dehazing | |
CN109949247A (en) | A kind of gradient field adaptive gain underwater picture Enhancement Method based on YIQ space optics imaging model | |
CN112288658A (en) | Underwater image enhancement method based on multi-residual joint learning | |
Tripathi et al. | Removal of fog from images: A review | |
US9514558B2 (en) | Method for preventing selected pixels in a background image from showing through corresponding pixels in a transparency layer | |
Singh et al. | Single image defogging by gain gradient image filter | |
Gao et al. | Detail preserved single image dehazing algorithm based on airlight refinement | |
CN107895357B (en) | A kind of real-time water surface thick fog scene image Enhancement Method based on FPGA | |
US9288462B2 (en) | Conversion of an image to a transparency retaining readability and clarity of detail while automatically maintaining color information of broad areas | |
CN106971379A (en) | A kind of underwater picture Enhancement Method merged based on stratified calculation | |
CN110135434A (en) | Underwater picture increased quality algorithm based on color line model | |
CN110675351B (en) | Marine image processing method based on global brightness adaptive equalization | |
Yang et al. | Underwater image enhancement using scene depth-based adaptive background light estimation and dark channel prior algorithms | |
CN113284061B (en) | Underwater image enhancement method based on gradient network | |
CN104637036A (en) | Chinese ancient painting enhancing method | |
Verma et al. | Systematic review and analysis on underwater image enhancement methods, datasets, and evaluation metrics | |
CN111667446B (en) | image processing method | |
Sheu et al. | FIBS-Unet: feature integration and block smoothing network for single image dehazing | |
Block et al. | Automatic underwater image enhancement using improved dark channel prior | |
Xiaoxu et al. | Image dehazing base on two-peak channel prior | |
CN113284058B (en) | Underwater image enhancement method based on migration theory | |
CN106846260A (en) | Video defogging method in a kind of computer | |
Powar et al. | A review: Underwater image enhancement using dark channel prior with gamma correction |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20190702 |