CN112529824A - Image fusion method based on self-adaptive structure decomposition - Google Patents
- Publication number
- CN112529824A (application CN202011305345.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- texture
- saturation
- decomposition
- fusion method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention provides an image fusion method based on self-adaptive structure decomposition, which comprises the following steps: inputting an image to be processed and acquiring, through gamma correction, a group of image sequences of the image at different exposure levels; for each image in the sequence, acquiring the saturation of each pixel point and linearly adjusting the saturation to obtain an adjusted image sequence; based on the adjusted sequence, obtaining the cartoon component of each image and separating the texture component from the corresponding image according to the cartoon component; acquiring a texture entropy value from the texture component of each image and selecting the optimal image block according to that entropy value; and fusing the optimal image blocks of all the images and outputting the fused image. The invention can effectively improve the processing of local image details and, by fusing the image regions with the best visual quality, ensure the quality of the fused image.
Description
Technical Field
The invention relates to the field of image processing, in particular to an image fusion method based on self-adaptive structure decomposition.
Background
As a practical challenge, the visual quality of outdoor images is seriously degraded by interference such as haze, dirt, overcast and rainy conditions, and partial occlusion, and image denoising techniques are commonly used to remove such interference from captured images. Existing denoising algorithms enhance the contrast and saturation of the global image through histogram equalization, the Retinex algorithm, or wavelet transforms to improve visual quality, but global enhancement does not guarantee local enhancement: some pixels may be missed during the global computation, so the visual quality of local details is often poor.
Disclosure of Invention
In view of the problems in the prior art, the invention provides an image fusion method based on adaptive structure decomposition, which mainly addresses the poor local-detail processing of existing image denoising methods.
In order to achieve the above and other objects, the present invention adopts the following technical solutions.
An image fusion method based on adaptive structure decomposition comprises the following steps:
inputting an image to be processed, and acquiring a group of image sequences of the image to be processed under different exposure levels through gamma correction;
for each image in the image sequence, acquiring the saturation of each pixel point in the image, and performing linear adjustment on the saturation to obtain an adjusted image sequence;
based on the adjusted image sequence, obtaining a cartoon component of each image in the image sequence, and separating texture components from corresponding images according to the cartoon components;
acquiring a corresponding texture entropy value based on the texture component of each image, and selecting an optimal image block according to the texture entropy value of each image; and performing fusion processing on the optimal image blocks of all the images, and outputting a fused image.
Optionally, acquiring a set of image sequences of the image to be processed at different exposure levels through gamma correction includes:
after gamma correction, the contrast change trend of the image is judged, and if the contrast is reduced, the exposure of the image is reduced.
Optionally, a saturation adjustment range is set, the same saturation adjustment is performed on the three RGB channels of each pixel, and saturation normalization is performed according to a specific saturation increment.
Optionally, a variational image decomposition model is used to perform cartoon-texture decomposition on the adjusted image sequence, and the cartoon component of each image in the image sequence is obtained.
Optionally, after discretizing the cartoon component, extracting a texture component according to the image and the obtained corresponding cartoon component.
Optionally, after extracting the texture component of the image, obtaining a texture entropy value of a texture feature corresponding to the texture component by using a gray difference statistical method, including:
assuming that the gray-level difference has m possible values, and letting the pixel point move over the whole image to obtain a gray-level difference histogram;
the texture entropy value is expressed as:

ENT = −∑_{i=0}^{m−1} p(i) · log₂ p(i)

where p(i) represents the probability of each gray-level difference obtained from the statistics of the gray-level difference histogram.
Optionally, the fusing processing is performed on the optimal image blocks of all the images, and includes:
determining the size of an image area for pasting according to the texture entropy value, and selecting the optimal size to obtain the optimal image block;
determining a decomposition step length according to the optimal image block, and decomposing a corresponding image into a plurality of image blocks with the same size according to the decomposition step length;
further decomposing each image block into three components, and processing the three components respectively to obtain the fused image block, wherein the three components are contrast, signal structure, and average intensity.
Optionally, verifying the fused image block, and outputting the fused image when a preset condition is met, where the method includes:
performing gamma correction on the fused image block, and judging whether the fused image meets the preset intensity and saturation conditions; and/or
and judging whether the average intensity component of the fused image block meets a preset intensity condition or not.
Optionally, the fused image block is represented as follows:

x̂ = ĉ · ŝ + l̂

where ĉ represents the highest contrast at the same spatial location, l̂ represents the average intensity of the image block, and ŝ represents the expected value of the signal structure.
As described above, the image fusion method based on adaptive structure decomposition according to the present invention has the following advantages.
A series of gamma corrections reduces the exposure of the blurred image, improving the contrast of local details; the image saturation is then enhanced to obtain a group of image sequences, from which the image regions with the best quality are selected for fusion, effectively improving the processing of local image details.
Drawings
FIG. 1 is a flowchart of an image fusion method based on adaptive structure decomposition according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
Referring to FIG. 1, the present invention provides an image fusion method based on adaptive structure decomposition, which includes steps S01-S04.
In step S01, an image to be processed is input, and a set of image sequences of the image to be processed at different exposure levels is acquired through gamma correction;
in one embodiment, the gamma correction transformation can be expressed as:
wherein, I (x) is an image to be processed; alpha is a constant; for gamma<1, brighter intensities in the image are compressed, while darker intensities are expanded. For gamma>1, the brighter intensities in the image are distributed over a wider range after transformation, while the darker intensities are mapped to the compression interval. Given an image region Ω, its contrast can be described by the following equation:
the change trend of the contrast of the image after gamma correction is judged based on the calculation formula of the contrast, specifically, when gamma is larger than 1, for the image containing a bright area, the contrast is reduced after gamma correction, and the local contrast of a fuzzy area is improved by reducing the exposure of a given image. After contrast adjustment, a group of image sequences corresponding to the images to be processed under different exposure levels can be obtained.
In step S02, for each image in the image sequence, obtaining a saturation of each pixel point in the image, and performing linear adjustment on the saturation to obtain an adjusted image sequence;
in one embodiment, for each image I (x) to be processed, a set of under-exposed image sequences E is extracted using gamma correctionγ=[I1(x),I2(x),…,Ik(x)]。
For each image in the under-exposed image sequence, the maximum and minimum of the R, G, B components of each pixel are calculated by the following formulas, and the saturation S of each pixel is then obtained.
rgb_max = max(max(R, G), B)
rgb_min = min(min(R, G), B)
The saturation S of each pixel is calculated according to the standard HSL formula:

S = (rgb_max − rgb_min) / (rgb_max + rgb_min),        if L ≤ 0.5
S = (rgb_max − rgb_min) / (510 − rgb_max − rgb_min),  if L > 0.5

where value = (rgb_max + rgb_min) / 255 and L = value / 2. After the saturation of each pixel is obtained, the same adjustment operation is performed on the three RGB channels by the following formula; for each image, the saturation adjustment range is [1, 100], normalized in specific saturation increments. The specific calculation process is as follows:
E′_k(x) = E_k(x) + (E_k(x) − L × 255) × α

where E′_k(x) denotes the image after saturation adjustment and α is the set saturation increment, expressed as a percentage.
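The per-pixel saturation computation and the linear adjustment described above can be sketched in Python as follows; the standard HSL saturation formula is assumed for S, and α = 0.3 is an arbitrary example increment.

```python
import numpy as np

def adjust_saturation(rgb, alpha=0.3):
    """Compute HSL saturation per pixel and apply the linear adjustment
    E'_k(x) = E_k(x) + (E_k(x) - L*255) * alpha on all three channels.

    `rgb` holds 0-255 values of shape (H, W, 3); alpha = 0.3 is an
    arbitrary example increment, and the standard HSL saturation
    formula is an assumption about the patent's S.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    mx = rgb.max(axis=2)                      # rgb_max = max(R, G, B)
    mn = rgb.min(axis=2)                      # rgb_min = min(R, G, B)
    value = (mx + mn) / 255.0
    L = value / 2.0                           # HSL lightness in [0, 1]
    denom = np.where(L <= 0.5, mx + mn, 510.0 - mx - mn)
    S = np.where(denom > 0, (mx - mn) / np.maximum(denom, 1e-9), 0.0)
    # Same linear adjustment applied to all three channels
    out = rgb + (rgb - (L * 255.0)[..., None]) * alpha
    return np.clip(out, 0.0, 255.0), S
```

For a pixel (200, 100, 50), L·255 equals the mid-gray 125, so channels above the mid-gray are pushed up and channels below it are pushed down, increasing colorfulness.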
In step S03, based on the adjusted image sequence, a cartoon component of each image in the image sequence is obtained, and a texture component is separated from the corresponding image according to the cartoon component;
in one embodiment, the saturation adjusted image sequence is used for further image cartoon texture decomposition. Specifically, a variation image decomposition model veseosher (vo) can be used to perform cartoon decomposition on each image at different exposure levels. The VO model is defined as follows:
wherein f is the original image, u is the cartoon component of the original image, and g is the Banach space.
Further, minimizing over the image cartoon component u in the VO model yields the Euler-Lagrange equation:

u = f − ∂g₁/∂x − ∂g₂/∂y + (1 / (2λ)) · div( ∇u / |∇u| )
in an embodiment, a semi-implicit finite difference iterative algorithm can be used to discretize u in the upper time to obtain a cartoon component of the image. The texture component in the image can be separated by using a cartoon texture expression formula, and a specific expression can be expressed as follows:
v=f-u
the cartoon component u has a smooth area and a clear edge, and v is noise or fine texture.
In step S04, obtaining a corresponding texture entropy value based on the texture component of each image, and selecting an optimal image block according to the texture entropy value of each image; and performing fusion processing on the optimal image blocks of all the images, and outputting a fused image.
In one embodiment, after the image texture component is obtained, the entropy of the image texture features is calculated using the gray-difference statistical method, and the image block size is then selected adaptively.
Specifically, the image texture component is converted into a grayscale image. Let (x, y) denote a point in the image and (x + Δx, y + Δy) a nearby point; their gray-level difference is:

g_Δ(x, y) = g(x, y) − g(x + Δx, y + Δy)

where g_Δ is the gray-level difference. Letting (x, y) move over the entire image yields a gray-level difference image.
Assume that the possible gray-level differences take m levels. Letting (x, y) move over the whole image, the number of occurrences of each value of g_Δ is counted to obtain the histogram of g_Δ. Let p_i be the probability of each gray-level difference obtained from the histogram statistics. The entropy of the image texture is then:

ENT = −∑_{i=0}^{m−1} p_i · log₂ p_i
after all input images are processed in the way, the entropy values of all image texture features obtained by the algorithm are { ent }1,ent2,…,entnWhere n denotes the number of input images.
In one embodiment, an adaptive block-size selection algorithm is applied: the texture entropy values and corresponding image region sizes of the blocks are fitted, and an empirical formula yields the optimal image block size.
Specifically, the image block size is adaptively selected according to the entropy of the image texture features. The texture entropy obtained from the gray-level difference statistics largely reflects the coarseness of the image texture: the smaller the entropy, the finer the texture; the larger the entropy, the coarser the texture. The optimal block size of an image is closely related to this coarseness, so the coarseness is characterized by the texture entropy and the optimal block size is selected automatically: when the texture entropy is small, a larger image block size should be selected, and when the entropy is large, a smaller block size should be selected, so that the texture structure is well represented on the decomposition components and a good texture synthesis effect is achieved. For each group of texture images, the block size is adjusted from large to small and the fusion results are compared to find a reasonable parameter range for the image block size. Plotting the texture entropies of the different texture images (arranged from small to large) on the abscissa against the optimal block parameter of the corresponding image on the ordinate shows that the optimal block size parameter decreases as the texture entropy increases; the relationship can be fitted with a hyperbolic function. The empirical formula for the optimal image block size is shown below.
where ENT is the mean of the image texture entropy values and pSize is the preset image block decomposition size, with pSize = 21. The corresponding best-match block size is wSize × wSize.
Each group of multi-exposure source images is decomposed with step size D into n image blocks of size wSize × wSize, giving a color image block sequence {x_{k,i} | 1 ≤ k ≤ K, 1 ≤ i ≤ n}, where K is the number of input images and n is the number of blocks per image; x_{k,i} is the i-th image block of the k-th image. A structural decomposition algorithm then decomposes each image block x_{k,i} into contrast c_{k,i}, signal structure s_{k,i}, and average intensity l_{k,i}, and these three components are processed to obtain the fused image block.
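The structural decomposition of a single patch into the three named components can be sketched as follows; this follows the standard structural patch decomposition x = c·s + l, and the function name is illustrative.

```python
import numpy as np

def decompose_patch(x):
    """Structural decomposition of a patch x into (c, s, l) with
    x = c * s + l: average intensity l, contrast c = ||x - l||, and
    unit-norm signal structure s, matching the three components
    (contrast, signal structure, average intensity) named in the text.
    """
    x = np.asarray(x, dtype=np.float64).ravel()
    l = x.mean()                       # average intensity
    d = x - l                          # mean-removed signal
    c = np.linalg.norm(d)              # contrast (signal strength)
    s = d / c if c > 0 else d          # signal structure (unit direction)
    return c, s, l
```

The decomposition is exactly invertible, which is what allows the three components to be fused independently and then recombined.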
During contrast processing, from the set of image blocks at the same spatial position across the images of the sequence, the block with the highest contrast is selected, and the corresponding contrast is recorded as ĉ_i.
For the signal structure component, the image block contrasts are used as weights and a weighted average of the input structure vectors is taken, so the expected signal structure of the fused image block can be expressed as:

ŝ_i = ( ∑_{k=1}^{K} w(c_{k,i}) · s_{k,i} ) / ( ∑_{k=1}^{K} w(c_{k,i}) )

where the weight function w(·) takes the image block contrast as input and measures the contribution of each image block at the same spatial position to the fused structure vector.
For the average intensity component, the global mean intensity value μ_k of the current image E′_k and the local mean intensity value l_{k,i} of the current image block x_{k,i} are used as inputs. A two-dimensional Gaussian function measures the exposure quality of an image block and of the corresponding image in the sequence:

L_{k,i} = exp( −(μ_k − 0.5)² / (2σ_g²) − (l_{k,i} − 0.5)² / (2σ_l²) )

where σ_g and σ_l are the standard deviations of the constructed two-dimensional Gaussian profile along the μ_k and l_{k,i} dimensions, respectively; the fused average intensity l̂_i is obtained as the L_{k,i}-weighted average of the l_{k,i}.
Once ĉ_i, ŝ_i and l̂_i have been calculated, a new vector representing the fused image block can be uniquely defined by the following equation:

x̂_i = ĉ_i · ŝ_i + l̂_i
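The per-position fusion rule described above (highest contrast, contrast-weighted structure, Gaussian-weighted intensity) can be sketched as follows; the identity weight w(c) = c, the unit normalization of the structure, and the σ values are illustrative assumptions, since the patent does not fix them here.

```python
import numpy as np

def fuse_patches(patches, mu, sigma_g=0.2, sigma_l=0.5):
    """Fuse co-located patches from K exposures into one patch.

    Contrast: take the highest c_k. Structure: contrast-weighted sum of
    the s_k, renormalized to unit length. Intensity: 2-D Gaussian
    weighting of global mean mu[k] and local mean l_k. The weight
    w(c) = c and the sigma values are assumed example choices.
    """
    cs, ss, ls = [], [], []
    for x in patches:
        x = np.asarray(x, dtype=np.float64).ravel()
        l = x.mean()
        d = x - l
        c = np.linalg.norm(d)
        cs.append(c)
        ss.append(d / c if c > 0 else d)
        ls.append(l)
    cs, ls = np.array(cs), np.array(ls)
    c_hat = cs.max()                                    # highest contrast
    s = sum(c * sk for c, sk in zip(cs, ss))            # weighted structure
    n = np.linalg.norm(s)
    s_hat = s / n if n > 0 else s
    w = np.exp(-(np.asarray(mu) - 0.5) ** 2 / (2 * sigma_g ** 2)
               - (ls - 0.5) ** 2 / (2 * sigma_l ** 2))  # exposure quality
    l_hat = float((w * ls).sum() / w.sum())             # fused intensity
    return c_hat * s_hat + l_hat                        # x_hat = c*s + l
```

Two co-located patches with the same structure direction fuse to the higher-contrast version of that structure around the well-exposedness-weighted mean intensity.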
Image blocks are extracted with a sliding window, and the pixels in overlapping blocks are averaged to obtain the final output result.
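Averaging pixels in overlapping blocks can be done with an accumulator image and a count map, e.g. (function name and arguments are illustrative):

```python
import numpy as np

def blend_overlapping(blocks, positions, shape):
    """Paste fused blocks back at their top-left `positions` on an
    image of size `shape`, averaging pixels where blocks overlap,
    as in the sliding-window reconstruction described above.
    """
    acc = np.zeros(shape, dtype=np.float64)
    cnt = np.zeros(shape, dtype=np.float64)
    for b, (y, x) in zip(blocks, positions):
        h, w = np.asarray(b).shape
        acc[y:y + h, x:x + w] += b
        cnt[y:y + h, x:x + w] += 1.0
    return acc / np.maximum(cnt, 1.0)   # average where blocks overlap
```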
In an embodiment, verifying the fused image block, and outputting the fused image when a preset condition is met, includes:
performing gamma correction on the fused image block and judging whether the fused image meets the preset intensity and saturation conditions; and/or
and judging whether the average intensity component of the fused image block meets a preset intensity condition or not.
In summary, the image fusion method based on adaptive structure decomposition of the present invention reduces the exposure of blurred images through a series of gamma corrections to improve the contrast of local details, and enhances color saturation through spatially linear saturation adjustment, obtaining a group of under-exposed image sequences with enhanced contrast and saturation. Unlike existing denoising methods based on pixel-wise fusion, this solution adopts an image fusion method based on patch-adaptive decomposition (PAD) to preserve image structure information. To obtain a haze-free image, the regions with the best visual quality are acquired from each image for fusion. The invention thus effectively overcomes various defects in the prior art and has high value for industrial utilization.
The foregoing embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those skilled in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the present invention.
Claims (9)
1. An image fusion method based on adaptive structure decomposition is characterized by comprising the following steps:
inputting an image to be processed, and acquiring a group of image sequences of the image to be processed under different exposure levels through gamma correction;
for each image in the image sequence, acquiring the saturation of each pixel point in the image, and performing linear adjustment on the saturation to obtain an adjusted image sequence;
based on the adjusted image sequence, obtaining a cartoon component of each image in the image sequence, and separating texture components from corresponding images according to the cartoon components;
acquiring a corresponding texture entropy value based on the texture component of each image, and selecting an optimal image block according to the texture entropy value of each image; and performing fusion processing on the optimal image blocks of all the images, and outputting a fused image.
2. The adaptive structural decomposition-based image fusion method according to claim 1, wherein acquiring a set of image sequences of the image to be processed at different exposure levels through gamma correction comprises:
after gamma correction, the contrast change trend of the image is judged, and if the contrast is reduced, the exposure of the image is reduced.
3. The image fusion method based on adaptive structural decomposition according to claim 1, wherein a saturation adjustment range is set, the same saturation adjustment is performed on the three RGB channels of each pixel, and saturation normalization is performed according to a specific saturation increment.
4. The image fusion method based on adaptive structural decomposition according to claim 1, wherein a variational image decomposition model is used to perform cartoon-texture decomposition on the adjusted image sequence to obtain the cartoon component of each image in the image sequence.
5. The image fusion method based on adaptive structure decomposition according to claim 4, wherein after discretization processing is performed on the cartoon components, texture components are extracted according to the images and the obtained corresponding cartoon components.
6. The image fusion method based on adaptive structure decomposition according to claim 5, wherein after extracting the texture component of the image, obtaining the texture entropy value of the texture feature corresponding to the texture component by using a gray difference statistical method, comprises:
assuming that the gray-level difference has m possible values, and letting the pixel point move over the whole image to obtain a gray-level difference histogram;
the texture entropy value is expressed as:

ENT = −∑_{i=0}^{m−1} p(i) · log₂ p(i)

where p(i) represents the probability of each gray-level difference obtained from the statistics of the gray-level difference histogram.
7. The image fusion method based on adaptive structure decomposition according to claim 1, wherein the fusion processing of the best image blocks of all images comprises:
determining the size of an image area for pasting according to the texture entropy value, and selecting the optimal size to obtain the optimal image block;
determining a decomposition step length according to the optimal image block, and decomposing a corresponding image into a plurality of image blocks with the same size according to the decomposition step length;
further decomposing each image block into three components, and processing the three components respectively to obtain the fused image block, wherein the three components are contrast, signal structure, and average intensity.
8. The image fusion method based on adaptive structure decomposition according to claim 7, wherein verifying the fused image block, and outputting the fused image when a preset condition is satisfied, comprises:
performing gamma correction on the fused image block and judging whether the fused image meets the preset intensity and saturation conditions; and/or
and judging whether the average intensity component of the fused image block meets a preset intensity condition or not.
9. The adaptive structure decomposition based image fusion method according to claim 7, wherein the fused image block is represented as follows:

x̂ = ĉ · ŝ + l̂

where ĉ is the highest contrast at the same spatial location, l̂ is the average intensity of the image block, and ŝ is the expected value of the signal structure.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011305345.XA CN112529824A (en) | 2020-11-19 | 2020-11-19 | Image fusion method based on self-adaptive structure decomposition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112529824A true CN112529824A (en) | 2021-03-19 |
Family
ID=74982632
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011305345.XA Pending CN112529824A (en) | 2020-11-19 | 2020-11-19 | Image fusion method based on self-adaptive structure decomposition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112529824A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090169102A1 (en) * | 2007-11-29 | 2009-07-02 | Chao Zhang | Multi-scale multi-camera adaptive fusion with contrast normalization |
CN109671044A (en) * | 2018-12-04 | 2019-04-23 | 重庆邮电大学 | A kind of more exposure image fusion methods decomposed based on variable image |
CN110378859A (en) * | 2019-07-29 | 2019-10-25 | 西南科技大学 | A kind of new high dynamic range images generation method |
CN110599415A (en) * | 2019-08-29 | 2019-12-20 | 西安电子科技大学 | Image contrast enhancement implementation method based on local adaptive gamma correction |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | |
Address after: 400000 6-1, 6-2, 6-3, 6-4, building 7, No. 50, Shuangxing Avenue, Biquan street, Bishan District, Chongqing
Applicant after: CHONGQING ZHAOGUANG TECHNOLOGY CO.,LTD.
Address before: 400000 2-2-1, 109 Fengtian Avenue, tianxingqiao, Shapingba District, Chongqing
Applicant before: CHONGQING ZHAOGUANG TECHNOLOGY CO.,LTD.