CN113379660B - Multi-dimensional rule multi-focus image fusion method and system - Google Patents
Multi-dimensional rule multi-focus image fusion method and system
- Publication number
- CN113379660B (application number CN202110658348.XA / CN202110658348A)
- Authority
- CN
- China
- Prior art keywords
- image
- fusion
- frequency
- original
- original images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000007500 overflow downdraw method Methods 0.000 title claims abstract description 23
- 230000004927 fusion Effects 0.000 claims abstract description 171
- 230000009466 transformation Effects 0.000 claims abstract description 62
- 238000005457 optimization Methods 0.000 claims abstract description 22
- 238000012545 processing Methods 0.000 claims abstract description 20
- 238000000034 method Methods 0.000 claims description 32
- 238000007499 fusion processing Methods 0.000 claims description 16
- 230000000877 morphologic effect Effects 0.000 claims description 15
- 238000000605 extraction Methods 0.000 claims description 8
- 238000004590 computer program Methods 0.000 claims description 3
- 238000006243 chemical reaction Methods 0.000 claims 2
- 230000001131 transforming effect Effects 0.000 claims 1
- 230000000694 effects Effects 0.000 abstract description 4
- 238000005516 engineering process Methods 0.000 abstract description 3
- 238000004364 calculation method Methods 0.000 description 11
- 230000000903 blocking effect Effects 0.000 description 4
- 238000004458 analytical method Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 2
- 239000000284 extract Substances 0.000 description 2
- 238000012544 monitoring process Methods 0.000 description 2
- 230000011218 segmentation Effects 0.000 description 2
- 238000004422 calculation algorithm Methods 0.000 description 1
- 230000007812 deficiency Effects 0.000 description 1
- 230000006870 function Effects 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 230000004807 localization Effects 0.000 description 1
- 238000010606 normalization Methods 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000011002 quantification Methods 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 230000003595 spectral effect Effects 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Compression Of Band Width Or Redundancy In Fax (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The application relates to a multi-dimensional rule multi-focus image fusion method and system, belonging to the field of image processing technology. The image fusion method comprises an original image transformation step, an image fusion step, an initial image acquisition and processing step, an image optimization step, an image reconstruction step and a judging step; the image fusion system comprises an original image transformation module, an image fusion module, an initial image acquisition and processing module, an image optimization module, an image reconstruction module and a judging module. Compared with the related art, the method and system have the effect of alleviating the problem of low image fusion efficiency.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a multi-focus image fusion method and system with multi-dimensional rules.
Background
As is well known, the focus range of a visible-light imaging system is limited, so it is difficult to capture all objects in the same scene sharply. Well-focused regions have sharper details than out-of-focus ones. Image fusion refers to processing image data of the same target collected through multi-source channels by means of image processing, computer technology and the like, extracting the useful information of each channel to the greatest extent, and finally synthesizing a high-quality image. This improves the utilization of image information, the accuracy and reliability of computer interpretation, and the spatial and spectral resolution of the original images, which facilitates monitoring. It is a method of improving image information quality, and the fused image contains richer information than any single original image.
Multi-focus image fusion falls into two categories. One fuses the clear parts of the source images in the spatial domain, but this approach depends on the adopted segmentation algorithm, easily produces blocking artifacts, and greatly affects the quality of the fused image. The other fuses the coefficients of a multi-scale transform, which takes a long time during fusion and handles the global selection of the target object poorly.
In view of the above related art, the inventor believes that image fusion by either of these two multi-focus image fusion methods leads to low image fusion efficiency, due to blocking artifacts or long processing time.
Disclosure of Invention
In order to solve the problem of low image fusion efficiency, the application provides a multi-focus image fusion method and system with multi-dimensional rules.
In a first aspect, the present application provides a multi-focus image fusion method with a multi-dimensional rule, which adopts the following technical scheme:
a multi-focus image fusion method with multi-dimensional rules comprises the following steps,
an original image transformation step, namely performing Haar wavelet transformation and Haar wavelet inverse transformation on any two original images with different focuses of the same image object to obtain a first low-frequency image and a first high-frequency image of each of the two original images;
an image fusion step, adopting different pixel level fusion rules to fuse the first low-frequency images of the two original images to obtain a second low-frequency image, and fusing the first high-frequency images of the two original images to obtain a second high-frequency image;
the method comprises the steps of initial image obtaining and processing, wherein a second low-frequency image and a second high-frequency image are added to obtain an initial image, and the mean square error between the initial image and two original images is calculated to obtain the respective mean square error values of the two original images;
an image optimization step, namely obtaining a first fusion map Map based on the respective mean square error values of the two original images, and optimizing the first fusion map Map by adopting morphological operation to obtain a second fusion map Map′;
an image reconstruction step, namely obtaining a reconstructed intermediate full-focus image based on the second fusion map Map′ and the two original images; and
a judging step, namely judging whether an original image which has not been subjected to fusion processing exists; if so, taking the intermediate full-focus image as a new original image, taking any one of the unfused original images as another new original image, and re-entering the original image transformation step; otherwise, taking the intermediate full-focus image as the final full-focus image.
By adopting the above technical scheme, the first low-frequency image and the first high-frequency image are obtained through the Haar wavelet transform and Haar wavelet inverse transform, which simplifies the computation involved in transforming the original images and thus saves time. The pixel-level fusion rules are used to obtain the second high-frequency image, the second low-frequency image and the initial image, from which the first fusion map is derived; the first fusion map is optimized into the second fusion map, and reconstruction based on the second fusion map and the original images yields an intermediate full-focus image fused at the pixel level. After the judging step, the reconstructed final full-focus image is obtained. Blocking artifacts can thus be avoided to a certain extent, the quality of the reconstructed final full-focus image is improved, the time consumption is reduced, and the problem of low image fusion efficiency is alleviated.
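As an illustration only (not part of the claimed method), the iterative flow of the steps above can be sketched as follows in Python; fuse_pair is a hypothetical placeholder standing in for steps 101-105 applied to one image pair, and grayscale images held as NumPy arrays are assumed.

```python
# Minimal sketch of the iterative fusion flow (transformation, fusion, optimization,
# reconstruction, then the judging step), with fuse_pair as a hypothetical placeholder.
def fuse_all(originals, fuse_pair):
    remaining = list(originals)
    fused = remaining.pop(0)              # start from any one original image
    while remaining:                      # judging step: unfused originals remain?
        other = remaining.pop(0)          # take any not-yet-fused original
        fused = fuse_pair(fused, other)   # steps 101-105 on the current pair
    return fused                          # final full-focus image
```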
Optionally, the specific method of the image optimization step includes,
obtaining the mean square error of the pixels of each pixel point of the two original images based on the respective mean square errors of the two original images;
respectively comparing the mean square errors of the pixels of the same pixel points of the two original images, and determining the source of the pixels of the corresponding pixel points of the first fused image Map to obtain a first fused image Map; and
optimizing the first fusion image Map by adopting morphological operation to obtain a second fusion map Map′.
By adopting this technical scheme, after the mean square errors between each original image and the initial image are calculated at every pixel point, the mean square errors at the same pixel point are compared to determine the source of the pixel at the corresponding position of the first fusion image Map. The first fusion image Map is thus obtained, which in turn makes it convenient to obtain the second fusion map Map′.
Optionally, the specific method for obtaining the first fused image Map includes,
respectively comparing the mean square errors of the pixels of the same pixel points of the two original images, and selecting the pixel of the corresponding pixel point of the original image with the minimum mean square error as the source of the pixel of the corresponding pixel point of the first fusion map, so as to obtain the source of the pixel of each pixel point of the first fusion map; and
and obtaining the first fusion map based on the source of the pixel of each pixel point of the first fusion map.
By adopting this technical scheme, the pixel at the pixel point with the minimum mean square error is the better, namely clearer, one, so that the obtained first fusion map comprises the better pixels of the two original images, which helps to improve the quality of the reconstructed intermediate full-focus image/final full-focus image and further helps to improve the image fusion efficiency.
Optionally, the specific method of the original image transformation step includes,
respectively carrying out Haar wavelet transformation on any two original images with different focuses of the same image object, and extracting respective low-frequency coefficients of the two original images;
keeping the respective low-frequency coefficients of the two original images, setting other coefficients to be 0, and respectively performing Haar wavelet inverse transformation on the two original images to obtain respective first low-frequency images of the two original images; and
and subtracting the corresponding first low-frequency images from the two original images to obtain the first high-frequency images of the two original images.
By adopting this technical scheme, the low-frequency coefficients can be extracted by performing the Haar wavelet transform on the original images; the first low-frequency image is obtained by performing the Haar wavelet inverse transform on the original image with only the low-frequency coefficients retained, and the first high-frequency image is obtained by subtracting the first low-frequency image from the original image. The calculation process is simple and the computational complexity is greatly reduced, which reduces the time consumed in the image fusion process and makes the first low-frequency image and the first high-frequency image convenient to obtain.
Optionally, the specific method of the image fusion step includes,
adopting an average value fusion rule to fuse the first low-frequency images of the two original images to obtain a second low-frequency image; and
and fusing the first high-frequency images of the two original images by adopting an image fusion rule based on the Laplacian energy sum index to obtain a second high-frequency image.
By adopting this technical scheme, the first low-frequency images and the first high-frequency images are respectively fused by adopting the average value fusion rule and the Laplacian-energy-sum-based image fusion rule, so that the second low-frequency image and the second high-frequency image can be conveniently obtained.
Optionally, the specific method for obtaining the second high-frequency image includes,
calculating the sum of Laplacian energy of pixels of each pixel point for the first high-frequency images of the two original images;
obtaining pixels, namely comparing the sum of the Laplacian energies of the same pixel point of the two first high-frequency images, and selecting the pixel of the corresponding pixel point with the largest sum of the Laplacian energies as the pixel of the corresponding pixel point of the second high-frequency image; and
and repeating the pixel acquisition, and determining the pixel of each pixel point of the second high-frequency image to obtain the second high-frequency image.
By adopting this technical scheme, the pixel of the pixel point with the larger Laplacian energy sum is selected, according to the magnitude of the Laplacian energy sum, as the pixel of the corresponding pixel point of the second high-frequency image, so that the obtained second high-frequency image can better reflect the focusing characteristics and sharpness of the image.
In a second aspect, the present application provides a multi-focus image fusion system with a multi-dimensional rule, which adopts the following technical scheme:
a multi-focus image fusion system with multi-dimensional rules, the image fusion system comprising,
the original image transformation module is used for performing Haar wavelet transformation and Haar wavelet inverse transformation on any two original images with different focuses of the same image object to obtain a first low-frequency image and a first high-frequency image of each of the two original images;
the image fusion module is used for fusing the first low-frequency images of the two original images by adopting different pixel level fusion rules to obtain a second low-frequency image, and fusing the first high-frequency images of the two original images to obtain a second high-frequency image;
the initial image obtaining and processing module is used for adding the second low-frequency image and the second high-frequency image to obtain an initial image, and calculating the mean square error between the initial image and the two original images to obtain the respective mean square error values of the two original images;
the image optimization module is used for obtaining a first fusion map Map based on the respective mean square error values of the two original images and optimizing the first fusion map Map by adopting morphological operation to obtain a second fusion map Map′;
the image reconstruction module is used for obtaining a reconstructed intermediate full-focus image based on the second fusion map Map′ and the two original images; and
and the judging module is used for judging whether the original images which are not subjected to fusion processing exist, if so, taking the intermediate full-focus image as a new original image, taking any one of the original images which are not subjected to fusion processing as another new original image, and reentering the original image transformation module, otherwise, the intermediate full-focus image is the final full-focus image.
By adopting this technical scheme, the original image transformation module obtains the first low-frequency image and the first high-frequency image through the Haar wavelet transform and Haar wavelet inverse transform, which simplifies the computation involved in transforming the original images and thus saves time. The image fusion module and the initial image acquisition and processing module apply pixel-level fusion rules to obtain the second high-frequency image, the second low-frequency image and the initial image; the image optimization module derives the second fusion map from the first fusion map; and the image reconstruction module reconstructs an intermediate full-focus image fused at the pixel level from the second fusion map and the original images. After the judgment module, the reconstructed final full-focus image is obtained. Blocking artifacts can thus be avoided to a certain extent, the quality of the reconstructed final full-focus image is improved, the time consumption is reduced, and the problem of low image fusion efficiency is alleviated.
Optionally, the image optimization module specifically includes,
the mean square error acquisition submodule is used for acquiring the mean square error of the pixels at each pixel point of the two original images based on the respective mean square errors of the two original images;
the first fusion Map acquisition submodule is used for respectively comparing the mean square errors of pixels of the same pixel points of the two original images and determining the source of the pixels of the corresponding pixel points of the first fusion image Map so as to obtain the first fusion image Map; and
a second fusion map acquisition submodule for optimizing the first fusion image Map by morphological operation to obtain a second fusion map Map′.
By adopting this technical scheme, after the mean square errors between each original image and the initial image are calculated at every pixel point, the mean square errors at the same pixel point are compared to determine the source of the pixel at the corresponding position of the first fusion image Map. The first fusion image Map is thus obtained, which in turn makes it convenient to obtain the second fusion map Map′.
Optionally, the original image transformation module includes,
the low-frequency coefficient extraction sub-module is used for respectively carrying out Haar wavelet transformation on any two original images with different focuses of the same image object and extracting respective low-frequency coefficients of the two original images;
the first low-frequency image acquisition sub-module is used for reserving respective low-frequency coefficients of the two original images, setting other coefficients as 0, and respectively performing Haar wavelet inverse transformation on the two original images to obtain respective first low-frequency images of the two original images;
and the first high-frequency image acquisition sub-module is used for subtracting the corresponding first low-frequency images from the two original images respectively to obtain the respective first high-frequency images of the two original images.
By adopting this technical scheme, the low-frequency coefficient extraction sub-module extracts the low-frequency coefficients by performing the Haar wavelet transform on the original images, the first low-frequency image acquisition sub-module performs the Haar wavelet inverse transform on the original images with only the low-frequency coefficients retained to obtain the first low-frequency images, and the first high-frequency image acquisition sub-module subtracts the first low-frequency images from the original images to obtain the first high-frequency images.
In a third aspect, the present application provides a computer-readable storage medium, which adopts the following technical solution:
a computer readable storage medium storing a computer program capable of being loaded by a processor and performing a method as in any one of the first aspects.
Drawings
Fig. 1 is a first flowchart of a multi-dimensional rule multi-focus image fusion method according to an embodiment of the present application.
Fig. 2 is a second flowchart of a multi-dimensional rule multi-focus image fusion method according to an embodiment of the present application.
Fig. 3 is a third flowchart of a multi-dimensional rule multi-focus image fusion method according to an embodiment of the present application.
Fig. 4 is a fourth flowchart of a multi-dimensional rule multi-focus image fusion method according to an embodiment of the present application.
Fig. 5 is a fifth flowchart of a multi-dimensional rule multi-focus image fusion method according to an embodiment of the present application.
Detailed Description
The present application is described in further detail below with reference to figures 1-5.
The sum of Laplacian energy (SML) can reflect the edge feature information of an image and, to a certain extent, reflect the focusing characteristics and sharpness of the image.
The Mean Squared Error (MSE) is a measure of the difference between an estimated value and the true value; it can be used to evaluate the magnitude of error in data and, in this application, to evaluate the degree of similarity of selected regions in the images.
Morphological operations are a collection of shape-based image processing operations, mainly built on mathematical morphology and set theory. Morphology has four main operations: dilation, erosion, opening and closing. It is mainly used in image pre-processing (denoising, shape simplification), image enhancement (contour extraction, thinning, convex hull and object marking), object/background segmentation, object morphology quantification and similar scenarios; the object of a morphological operation is a binary image.
The Wavelet Transform (WT) is a transform-analysis method that inherits and develops the localization idea of the short-time Fourier transform while overcoming its drawback that the window size does not change with frequency; it provides a time-frequency window that changes with frequency. It can analyze time (space) and frequency locally, progressively refining signals (functions) at multiple scales through dilation and translation, finally achieving time subdivision at high frequencies and frequency subdivision at low frequencies. It automatically adapts to the requirements of time-frequency signal analysis and can therefore focus on arbitrary details of a signal.
The Haar wavelet is one of the wavelets; it is the simplest orthonormal wavelet and can be implemented efficiently and simply. Wavelet-transforming an image yields the LL, HL, LH and HH coefficients. The essence of the wavelet transform is down-sampling of the image: after an N-level wavelet transform, the size of the low-frequency component becomes 1/2^N of the original image size in each dimension, so the low-frequency and high-frequency information of the image can be quickly separated. The high-frequency information contains most of the detail information in the image.
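For reference, the standard single-level Haar analysis filters and the size of the low-frequency sub-band after N decomposition levels can be written as follows (textbook forms, not reproduced from the patent):

```latex
h = \left(\tfrac{1}{\sqrt{2}},\ \tfrac{1}{\sqrt{2}}\right), \qquad
g = \left(\tfrac{1}{\sqrt{2}},\ -\tfrac{1}{\sqrt{2}}\right), \qquad
\operatorname{size}(LL_{N}) = \frac{M}{2^{N}} \times \frac{M}{2^{N}} \ \text{for an } M \times M \text{ image.}
```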
Spatial-domain fusion computes index parameters such as the mean, variance and energy of pixel points or of a given region in each image, and selects the required pixels for fusion according to a set rule. The performance of this approach depends on the chosen criteria and rules: pixel-level fusion evaluates detail sharpness well but lacks integrity, while region-level fusion achieves better visibility but falls short on image detail, i.e. it easily produces blocking artifacts and tends to ignore detail information whose proportion within a block is small.
Transform-domain fusion: traditional transform-domain fusion is basically coefficient-level fusion. It performs better on details, but it takes a long time, which is unfavorable for engineering applications. At the same time, it falls short in the overall representation of objects in the image, particularly where sharp and blurred portions meet.
The embodiment of the application discloses a multi-focus image fusion method based on multi-dimensional rules. Referring to fig. 1 and 2, the multi-dimensional rule multi-focus image fusion method includes the following steps:
an original image transformation step 101 performs Haar wavelet transformation and Haar wavelet inverse transformation on any two original images with different focuses of the same image object to obtain a first low-frequency image and a first high-frequency image of each of the two original images.
Two or more original images of the same image object with different focuses can be selected; when fusion is performed, any two of them are selected first. Each original image that passes through the original image transformation step 101 yields a low-frequency image and a high-frequency image.
In the case of performing the Haar wavelet transform, the number of layers of the Haar wavelet transform may be set according to actual needs, and in the present embodiment, three-layer Haar wavelet transform is performed.
And an image fusion step 102, fusing the first low-frequency images of the two original images by adopting different pixel level fusion rules to obtain a second low-frequency image, and fusing the first high-frequency images of the two original images to obtain a second high-frequency image.
And an initial image obtaining and processing step 103, adding the second low-frequency image and the second high-frequency image to obtain an initial image, and calculating a mean square error between the initial image and the two original images to obtain respective mean square error values of the two original images.
An image optimization step 104, obtaining a first fusion map Map based on the respective mean square error values of the two original images, and optimizing the first fusion map Map by adopting morphological operation to obtain a second fusion map Map′.
An image reconstruction step 105, obtaining a reconstructed intermediate full-focus image based on the second fusion map Map′ and the two original images.
And a judging step 106, namely judging whether an original image which has not been subjected to fusion processing exists; if so, taking the intermediate full-focus image as a new original image, taking any one of the unfused original images as another new original image, and entering the original image transformation step 101 again; otherwise, taking the intermediate full-focus image as the final full-focus image.
The original image that has not been subjected to fusion processing and the two original images are images of the same image object taken at different angles. If the original image transformation step 101 is entered again, the method proceeds in sequence to the image fusion step 102, the initial image acquisition and processing step 103, the image optimization step 104, the image reconstruction step 105 and the judging step 106, until all the original images have been fused.
In this embodiment of the image fusion method, the first low-frequency image and the first high-frequency image are obtained through the Haar wavelet transform and Haar wavelet inverse transform, which simplifies the computation involved in transforming the original images and thus saves time. The pixel-level fusion rules are used to obtain the second high-frequency image, the second low-frequency image and the initial image, from which the first fusion map Map is derived; the first fusion map Map is optimized into the second fusion map Map′, and reconstruction based on the second fusion map Map′ and the original images yields an intermediate full-focus image fused at the pixel level. After the judging step 106, the reconstructed final full-focus image is obtained. Blocking artifacts can thus be avoided to a certain extent, the quality of the reconstructed final full-focus image is improved, the time consumption is reduced, and the problem of low image fusion efficiency is alleviated.
In other embodiments, a plurality of original images of the same image object with different focuses may be combined in pairs. Each pair is passed through the original image transformation step 101, the image fusion step 102, the initial image acquisition and processing step 103, the image optimization step 104 and the image reconstruction step 105 to obtain a plurality of intermediate full-focus images. These intermediate full-focus images are again combined in pairs and passed through the same steps, and so on, until only one intermediate full-focus image remains; that image is the reconstructed final full-focus image. This also helps to alleviate the problem of low image fusion efficiency.
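A minimal sketch of this pairwise-grouping variant, again with fuse_pair as a hypothetical placeholder for steps 101-105 applied to one pair of images:

```python
def fuse_tree(originals, fuse_pair):
    # Fuse images two by two per round until a single full-focus image remains.
    layer = list(originals)
    while len(layer) > 1:
        nxt = [fuse_pair(layer[i], layer[i + 1]) for i in range(0, len(layer) - 1, 2)]
        if len(layer) % 2:            # an unpaired image passes through to the next round
            nxt.append(layer[-1])
        layer = nxt
    return layer[0]                   # the final full-focus image
```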
Referring to fig. 2 and 3, as an embodiment of the original image transformation step 101, a specific method of the original image transformation step 101 includes,
1011. and performing Haar wavelet transformation on any two original images with different focuses of the same image object respectively, and extracting the low-frequency coefficients of the two original images respectively.
When the two original images are image A and image B respectively, the low-frequency coefficient LL_A of image A and the low-frequency coefficient LL_B of image B are obtained after the 3-level Haar wavelet transform.
1012. And keeping the low-frequency coefficients of the two original images, setting the other coefficients to be 0, and performing Haar wavelet inverse transformation on the two original images to obtain first low-frequency images of the two original images.
After the Haar wavelet inverse transformation, the first low-frequency image A_L of image A and the first low-frequency image B_L of image B are obtained.
1013. And subtracting the corresponding first low-frequency images from the two original images to obtain the first high-frequency images of the two original images.
After the first low-frequency image A_L of image A and the first low-frequency image B_L of image B are obtained, the respective first low-frequency images are subtracted from images A and B, i.e.,
A_H = A − A_L, B_H = B − B_L,
where A_H is the first high-frequency image of image A and B_H is the first high-frequency image of image B.
In this embodiment of the original image transformation step 101, the Haar wavelet transform is performed on the original images to extract the low-frequency coefficients, the Haar wavelet inverse transform is performed on the original images with only the low-frequency coefficients retained to obtain the first low-frequency images, and the first low-frequency images are subtracted from the original images to obtain the first high-frequency images.
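A minimal sketch of this decomposition using the PyWavelets library; the 3-level setting follows the embodiment above, while the float conversion and the cropping after reconstruction are implementation assumptions rather than details taken from the patent.

```python
import numpy as np
import pywt

def split_low_high(img, levels=3):
    """Return the first low-frequency and first high-frequency images of img."""
    A = np.asarray(img, dtype=np.float64)
    coeffs = pywt.wavedec2(A, 'haar', level=levels)
    # Keep the low-frequency (approximation) coefficients, zero every detail coefficient.
    coeffs_low = [coeffs[0]] + [tuple(np.zeros_like(c) for c in d) for d in coeffs[1:]]
    A_L = pywt.waverec2(coeffs_low, 'haar')[:A.shape[0], :A.shape[1]]
    A_H = A - A_L                      # first high-frequency image = original - low-frequency
    return A_L, A_H
```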
Referring to fig. 2 and 4, as an embodiment of the image fusion step 102, a specific method of the image fusion step 102 is as follows:
1021. and adopting an average value fusion rule to fuse the first low-frequency images of the two original images to obtain a second low-frequency image.
Given the first low-frequency image A_L of image A and the first low-frequency image B_L of image B, the second low-frequency image C_L is C_L = (A_L + B_L) / 2.
1022. and calculating the sum of Laplace energy of the pixels of each pixel point for the first high-frequency images of the two original images.
For image A, the Laplacian energy sum is computed over its first high-frequency image; for image B, the Laplacian energy sum is computed in the same way (the formulas appear as images in the original publication; a common form is given below).
In these formulas, (i, j) denotes the position of each pixel in the image; the ranges of i and j depend on the image size, and both i and j are natural numbers. N denotes the image size; for example, when the image size is N × N, i and j take natural numbers from 1 to N. Step denotes the step size, i.e. how many pixels apart the calculation is performed.
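A commonly used form of the modified Laplacian and its windowed sum, consistent with the (i, j) and Step notation above, is shown next; the window half-width w and the squaring of the terms are assumptions here, since the exact expressions in the patent are not reproduced in this text.

```latex
\nabla^{2}_{ML} A_H(i,j) =
  \bigl|\,2A_H(i,j) - A_H(i-\mathrm{Step},\,j) - A_H(i+\mathrm{Step},\,j)\,\bigr|
+ \bigl|\,2A_H(i,j) - A_H(i,\,j-\mathrm{Step}) - A_H(i,\,j+\mathrm{Step})\,\bigr|,
\qquad
\mathrm{SML}_A(i,j) = \sum_{m=-w}^{w}\ \sum_{n=-w}^{w}
  \Bigl[\nabla^{2}_{ML} A_H(i+m,\,j+n)\Bigr]^{2},
```

with SML_B defined analogously over B_H.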
1023. And obtaining pixels, namely comparing the sum of the laplacian energies of the same pixel points of the two first high-frequency images, and selecting the pixel of the corresponding pixel point with the largest sum of the laplacian energies as the pixel of the corresponding pixel point of the second high-frequency image.
The Laplacian energy sums at the same pixel point of the two first high-frequency images are compared, and the pixel with the larger Laplacian energy sum is selected as the pixel of the corresponding pixel point of the second high-frequency image.
1024. And repeating the pixel acquisition, and determining the pixel of each pixel point of the second high-frequency image to obtain the second high-frequency image.
The first high-frequency images of image A and image B are thus fused to obtain the second high-frequency image C_H.
In this embodiment of the image fusion step 102, the Haar wavelet transform is performed on the original images to extract the low-frequency coefficients, the Haar wavelet inverse transform is performed on the original images with only the low-frequency coefficients retained to obtain the first low-frequency images, and the first low-frequency images are subtracted from the original images to obtain the first high-frequency images; the average value fusion rule and the Laplacian-energy-sum-based fusion rule are then used to fuse the first low-frequency images and the first high-frequency images respectively, so that the second low-frequency image and the second high-frequency image can be conveniently obtained.
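A minimal sketch of step 102 along these lines; the SML window size, the Step value and the wrap-around boundary handling are implementation assumptions, not values fixed by the patent.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def modified_laplacian(H, step=1):
    # |2I - left - right| + |2I - up - down|, with wrap-around borders for simplicity.
    return (np.abs(2 * H - np.roll(H, step, axis=1) - np.roll(H, -step, axis=1))
          + np.abs(2 * H - np.roll(H, step, axis=0) - np.roll(H, -step, axis=0)))

def fuse_bands(A_L, B_L, A_H, B_H, window=3):
    C_L = (A_L + B_L) / 2.0                                            # mean-value rule (low frequency)
    sml_A = uniform_filter(modified_laplacian(A_H) ** 2, size=window)  # local SML of A_H
    sml_B = uniform_filter(modified_laplacian(B_H) ** 2, size=window)  # local SML of B_H
    C_H = np.where(sml_A >= sml_B, A_H, B_H)                           # keep pixel with larger SML
    return C_L, C_H
```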
Referring to fig. 2, as an embodiment of the initial image acquisition and processing step 103, a specific method of the initial image acquisition and processing step 103 includes:
and adding the second low-frequency image and the second high-frequency image to obtain an initial image.
After the second low-frequency image C_L and the second high-frequency image C_H are obtained, they are added to obtain the initial image C, i.e., C = C_L + C_H.
and calculating the mean square error between the initial image and the two original images to obtain the respective mean square error value of the two original images.
The window sizes used for the initial image C, image A and image B are the same. After the mean square error is calculated, the mean square error value MSE_A of image A and the mean square error value MSE_B of image B are obtained. Note that the window size may be 3 × 3, 5 × 5, 7 × 7, or the like.
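One common way of writing the windowed mean square error used above, for a (2r+1) × (2r+1) window centered at (i, j); the exact normalization used in the patent is not reproduced here and is an assumption:

```latex
\mathrm{MSE}_{A}(i,j) = \frac{1}{(2r+1)^{2}} \sum_{m=-r}^{r}\ \sum_{n=-r}^{r}
  \bigl[\,C(i+m,\,j+n) - A(i+m,\,j+n)\,\bigr]^{2},
\qquad
\mathrm{MSE}_{B}(i,j) \text{ defined analogously with } B.
```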
Referring to fig. 2 and 5, as an embodiment of the image optimization step 104, the specific steps of the image optimization step 104 are as follows:
1041. and obtaining the mean square error of the pixels of the two original images based on the respective mean square errors of the two original images.
The mean square error at pixel point (i, j) of image A is MSE_A(i, j), and the mean square error at pixel point (i, j) of image B is MSE_B(i, j).
1042. And comparing the mean square errors of the pixels of the same pixel points of the two original images respectively, and selecting the pixel of the corresponding pixel point of the original image with the minimum mean square error as the source of the pixel of the corresponding pixel point of the first fusion map to obtain the source of the pixel of each pixel point of the first fusion map.
When MSE_A(i, j) ≤ MSE_B(i, j), the first fusion map Map(i, j) is marked as 1, indicating that the pixel at position (i, j) of the first fusion map Map comes from image A; when MSE_A(i, j) > MSE_B(i, j), Map(i, j) is marked as 0, indicating that the pixel at (i, j) of the first fusion map Map comes from image B. In this way, the first fusion map with pixel values of 0 or 1 is obtained.
1043. And obtaining the first fusion map based on the source of the pixel of each pixel point of the first fusion map.
1044. Optimizing the first fusion map Map by adopting morphological operation to obtain the second fusion map Map′.
In this embodiment of the image optimization step 104, the pixel at the point with the minimum mean square error is the better, namely clearer, one, so the obtained first fusion map includes the better pixels of the two original images, which helps to improve the quality of the reconstructed intermediate full-focus image/final full-focus image and further helps to improve the image fusion efficiency.
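A minimal sketch of steps 1041-1044; the 5 × 5 window, the structuring element, and the use of binary opening followed by closing are assumptions standing in for the morphological operation that the application does not spell out.

```python
import numpy as np
from scipy.ndimage import uniform_filter, binary_opening, binary_closing

def decision_map(A, B, C, window=5, se=None):
    if se is None:
        se = np.ones((5, 5), dtype=bool)                # structuring element (assumption)
    mse_A = uniform_filter((C - A) ** 2, size=window)   # windowed MSE against image A
    mse_B = uniform_filter((C - B) ** 2, size=window)   # windowed MSE against image B
    Map = mse_A <= mse_B                                # 1 where the pixel should come from A
    # Opening then closing removes small isolated misclassified regions in the map.
    return binary_closing(binary_opening(Map, structure=se), structure=se)
```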
Referring to the drawings, as an embodiment of the image reconstruction step 105, a specific method of the image reconstruction step 105 includes obtaining a reconstructed intermediate full-focus image based on the second fusion map Map′ and the two original images, where the two original images are image A and image B.
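Consistent with the marking convention above (1 meaning the pixel comes from image A, 0 from image B), the reconstruction can be written as follows; the exact formula in the patent appears only as an image, so this is the form implied by that convention:

```latex
F(i,j) = \mathrm{Map}'(i,j)\,A(i,j) + \bigl(1 - \mathrm{Map}'(i,j)\bigr)\,B(i,j).
```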
The multi-focus image fusion method based on multi-dimensional rules extracts the low-frequency and high-frequency images in the wavelet domain and performs multi-focus image fusion based on multi-dimensional rules in the spatial domain. It can improve the visual effect of high-quality images, improve the performance of video monitoring, and save terminal deployment cost.
The embodiment also discloses a multi-focus image fusion system with multi-dimensional rules, and referring to the figure, the image fusion system comprises,
the original image transformation module is used for performing Haar wavelet transformation and Haar wavelet inverse transformation on any two original images with different focuses of the same image object to obtain a first low-frequency image and a first high-frequency image of each of the two original images;
the image fusion module is used for fusing the first low-frequency images of the two original images by adopting different pixel level fusion rules to obtain second low-frequency images, and fusing the first high-frequency images of the two original images to obtain second high-frequency images;
the initial image obtaining and processing module is used for adding the second low-frequency image and the second high-frequency image to obtain an initial image, and calculating the mean square error between the initial image and the two original images to obtain the respective mean square error values of the two original images;
the image optimization module is used for obtaining a first fusion map Map based on the respective mean square error values of the two original images and optimizing the first fusion map Map by adopting morphological operation to obtain a second fusion map Map′;
the image reconstruction module is used for obtaining a reconstructed intermediate full-focus image based on the second fusion map Map′ and the two original images;
and the judging module is used for judging whether the original images which are not subjected to fusion processing exist, if so, taking the intermediate full-focus image as a new original image, taking any one of the original images which are not subjected to fusion processing as another new original image, and reentering the original image transformation module, otherwise, the intermediate full-focus image is the final full-focus image.
In this embodiment of the image fusion system, the original image transformation module obtains the first low-frequency image and the first high-frequency image through the Haar wavelet transform and Haar wavelet inverse transform, which simplifies the computation involved in transforming the original images and thus saves time. The image fusion module and the initial image acquisition and processing module apply pixel-level fusion rules to obtain the second high-frequency image, the second low-frequency image and the initial image; the image optimization module optimizes the obtained first fusion map Map into the second fusion map Map′; and the image reconstruction module reconstructs an intermediate full-focus image fused at the pixel level based on the second fusion map Map′ and the original images. After the judgment module, the reconstructed final full-focus image is obtained. Blocking artifacts can thus be avoided to a certain extent, so the quality of the reconstructed final full-focus image is improved, the time consumption is reduced, and the problem of low image fusion efficiency is alleviated.
As an embodiment of the original image transformation module, the original image transformation module includes,
the low-frequency coefficient extraction sub-module is used for respectively carrying out Haar wavelet transformation on any two original images with different focuses of the same image object and extracting respective low-frequency coefficients of the two original images;
the first low-frequency image acquisition sub-module is used for reserving respective low-frequency coefficients of the two original images, setting other coefficients as 0, and respectively performing Haar wavelet inverse transformation on the two original images to obtain respective first low-frequency images of the two original images;
and the first high-frequency image acquisition sub-module is used for subtracting the corresponding first low-frequency images from the two original images respectively to obtain the first high-frequency images of the two original images respectively.
In this embodiment of the original image transformation module, the low-frequency coefficient extraction submodule extracts the low-frequency coefficients by performing the Haar wavelet transform on the original images, the first low-frequency image acquisition submodule performs the Haar wavelet inverse transform on the original images with only the low-frequency coefficients retained to obtain the first low-frequency images, and the first high-frequency image acquisition submodule subtracts the first low-frequency images from the original images to obtain the first high-frequency images. The calculation process is simple and the computational complexity is greatly reduced, which reduces the time consumed in the image fusion process and makes the first low-frequency image and the first high-frequency image convenient to obtain.
As an embodiment of the image optimization module, the image optimization module includes,
the mean square error acquisition submodule is used for acquiring the mean square error of the pixels of the two original images based on the respective mean square errors of the two original images;
the first fusion Map acquisition submodule is used for respectively comparing the mean square errors of pixels of the same pixel points of the two original images and determining the source of the pixels of the corresponding pixel points of the first fusion image Map so as to obtain the first fusion image Map;
a second fusion map acquisition submodule for optimizing the first fusion image Map by morphological operation to obtain a second fusion map Map′.
In this embodiment of the image optimization module, after the mean square errors between each original image and the initial image are calculated at every pixel point, the mean square errors at the same pixel point are compared to determine the source of the pixel at the corresponding position of the first fusion image Map. The first fusion image Map is thus obtained, which in turn makes it convenient to obtain the second fusion map Map′.
The embodiment of the application also discloses a computer-readable storage medium, which stores a computer program capable of being loaded by a processor and executing any one of the above multi-dimensional rule multi-focus image fusion methods.
The computer-readable storage medium includes, for example, various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are preferred embodiments of the present application, and the protection scope of the present application is not limited by the above embodiments, so: all equivalent changes made according to the structure, shape and principle of the present application shall be covered by the protection scope of the present application.
Claims (5)
1. A multi-focus image fusion method based on multi-dimensional rules is characterized by comprising the following steps: the image fusion method comprises an original image transformation step (101) of performing Haar wavelet transformation and Haar wavelet inverse transformation on any two original images with different focuses of the same image object to obtain a first low-frequency image and a first high-frequency image of each of the two original images; an image fusion step (102) for fusing the first low-frequency images of the two original images by adopting different pixel level fusion rules to obtain a second low-frequency image, and fusing the first high-frequency images of the two original images to obtain a second high-frequency image; an initial image obtaining and processing step (103) of adding the second low-frequency image and the second high-frequency image to obtain an initial image, and calculating a mean square error between the initial image and the two original images to obtain respective mean square error values of the two original images; an image optimization step (104) for obtaining a first fusion image based on respective mean square error values of the two original images, and optimizing the first fusion image by adopting morphological operation to obtain a second fusion image; an image reconstruction step (105) for obtaining a reconstructed intermediate full-focus image on the basis of the second fusion image and the two original images; and a judging step (106) of judging whether an original image which is not subjected to fusion processing exists, if so, taking the intermediate full-focus image as a new original image, taking any one of the original images which are not subjected to fusion processing as another new original image, and re-entering the original image transformation step (101), otherwise, taking the intermediate full-focus image as a final full-focus image;
the specific method of the image fusion step (102) comprises the steps of adopting an average value fusion rule to fuse the first low-frequency images of the two original images to obtain a second low-frequency image; and fusing the first high-frequency images of the two original images by adopting an image fusion rule based on the Laplacian energy sum index to obtain a second high-frequency image;
the specific method for obtaining the second high-frequency image comprises the steps of calculating the sum of Laplace energy of pixels of each pixel point for the first high-frequency images of the two original images; obtaining pixels, namely comparing the sum of the laplacian energies of the same pixel point of the two first high-frequency images, and selecting the pixel of the corresponding pixel point with the largest sum of the laplacian energies as the pixel of the corresponding pixel point of the second high-frequency image; repeatedly acquiring pixels, and determining the pixels of all pixel points of the second high-frequency image to obtain the second high-frequency image; the specific method of the image optimization step (104) comprises the steps of obtaining the mean square error of the pixel of each pixel point of the two original images based on the respective mean square errors of the two original images; respectively comparing the mean square errors of pixels of the same pixel points of the two original images, and determining the source of the pixels of the corresponding pixel points of the first fusion image to obtain a first fusion image; optimizing the first fusion image by adopting morphological operation to obtain a second fusion image;
the specific method for obtaining the first fusion image comprises the steps of respectively comparing the mean square errors of pixels of the same pixel points of two original images, and selecting the pixel of the corresponding pixel point of the original image with the minimum mean square error as the source of the pixel of the corresponding pixel point of the first fusion image so as to obtain the source of the pixel of each pixel point of the first fusion image; and obtaining a first fusion image based on the source of the pixel of each pixel point of the first fusion image.
2. The multi-focus image fusion method according to claim 1, wherein: the specific method of the original image transformation step (101) includes performing Haar wavelet transform on any two original images of the same image object with different focuses, and extracting the respective low-frequency coefficients of the two original images; keeping the respective low-frequency coefficients of the two original images, setting the other coefficients to 0, and respectively performing Haar wavelet inverse transformation on the two original images to obtain the respective first low-frequency images of the two original images; and subtracting the corresponding first low-frequency images from the two original images to obtain the respective first high-frequency images of the two original images.
3. A multi-focus image fusion system with multi-dimensional rules is characterized in that: the image fusion system comprises an original image transformation module, a first image fusion module and a second image fusion module, wherein the original image transformation module is used for performing Haar wavelet transformation and Haar wavelet inverse transformation on any two original images with different focuses of the same image object to obtain a first low-frequency image and a first high-frequency image of each of the two original images; the image fusion module is used for adopting different pixel level fusion rules: fusing a first low-frequency image of two original images by adopting an average value fusion rule to obtain a second low-frequency image, calculating the sum of the Laplacian energy of pixels of each pixel point of the first high-frequency images of the two original images, comparing the sum of the Laplacian energy of the same pixel point of the two first high-frequency images, selecting the pixel of the corresponding pixel point with the largest sum of the Laplacian energy as the pixel of the corresponding pixel point of the second high-frequency image, repeatedly obtaining the pixels, determining the pixel of each pixel point of the second high-frequency image, and fusing according to the maximum Laplacian-energy-sum principle to obtain the second high-frequency image; the initial image obtaining and processing module is used for adding the second low-frequency image and the second high-frequency image to obtain an initial image, and calculating the mean square error between the initial image and the two original images to obtain the respective mean square error values of the two original images; the image optimization module specifically comprises a mean square error acquisition submodule, a first fusion image acquisition sub-module and a second fusion image acquisition sub-module, wherein the mean square error acquisition submodule is used for acquiring the mean square error of pixels of each pixel point of the two original images based on the respective mean square errors of the two original images; the first fusion image acquisition sub-module is used for respectively comparing the mean square errors of pixels of the same pixels of the two original images, selecting the pixel of the corresponding pixel of the original image with the minimum mean square error as the source of the pixel of the corresponding pixel of the first fusion image to obtain the source of the pixel of each pixel of the first fusion image, and obtaining the first fusion image based on the source of the pixel of each pixel of the first fusion image; the second fusion image acquisition sub-module is used for optimizing the first fusion image by adopting morphological operation to obtain a second fusion image; the image reconstruction module is used for obtaining a reconstructed intermediate full-focus image based on the second fusion image and the two original images; and the judging module is used for judging whether the original images which are not subjected to the fusion processing exist, if so, the intermediate full-focus image is used as a new original image, any one of the original images which are not subjected to the fusion processing is used as another new original image, and the original image transformation module is entered again, otherwise, the intermediate full-focus image is the final full-focus image.
4. The multi-dimensional regular multi-focus image fusion system of claim 3, wherein: the original image transformation module comprises a low-frequency coefficient extraction submodule and a low-frequency coefficient extraction submodule, wherein the low-frequency coefficient extraction submodule is used for respectively carrying out Haar wavelet transformation on any two original images with different focuses of the same image object and extracting the respective low-frequency coefficients of the two original images; the first low-frequency image acquisition sub-module is used for reserving respective low-frequency coefficients of the two original images, setting other coefficients to be 0, and respectively performing Haar wavelet inverse transformation on the two original images to obtain respective first low-frequency images of the two original images; and the first high-frequency image acquisition sub-module is used for subtracting the corresponding first low-frequency images from the two original images respectively to obtain the respective first high-frequency images of the two original images.
5. A computer-readable storage medium, characterized in that: it stores a computer program that can be loaded by a processor and that executes the multi-dimensional rule multi-focus image fusion method according to any one of claims 1 to 2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110658348.XA CN113379660B (en) | 2021-06-15 | 2021-06-15 | Multi-dimensional rule multi-focus image fusion method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110658348.XA CN113379660B (en) | 2021-06-15 | 2021-06-15 | Multi-dimensional rule multi-focus image fusion method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113379660A CN113379660A (en) | 2021-09-10 |
CN113379660B true CN113379660B (en) | 2022-09-30 |
Family
ID=77574362
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110658348.XA Active CN113379660B (en) | 2021-06-15 | 2021-06-15 | Multi-dimensional rule multi-focus image fusion method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113379660B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103632354A (en) * | 2012-08-24 | 2014-03-12 | 西安元朔科技有限公司 | Multi focus image fusion method based on NSCT scale product |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103632353A (en) * | 2012-08-24 | 2014-03-12 | 西安元朔科技有限公司 | Multi focus image fusion algorithm based on NSCT |
CN104077762A (en) * | 2014-06-26 | 2014-10-01 | 桂林电子科技大学 | Multi-focusing-image fusion method based on NSST and focusing area detecting |
CN105844606A (en) * | 2016-03-22 | 2016-08-10 | 博康智能网络科技股份有限公司 | Wavelet transform-based image fusion method and system thereof |
CN105913407B (en) * | 2016-04-06 | 2018-09-28 | 昆明理工大学 | A method of poly focal power image co-registration is optimized based on differential chart |
CN107194903A (en) * | 2017-04-25 | 2017-09-22 | 阜阳师范学院 | A kind of multi-focus image fusing method based on wavelet transformation |
-
2021
- 2021-06-15 CN CN202110658348.XA patent/CN113379660B/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103632354A (en) * | 2012-08-24 | 2014-03-12 | 西安元朔科技有限公司 | Multi focus image fusion method based on NSCT scale product |
Also Published As
Publication number | Publication date |
---|---|
CN113379660A (en) | 2021-09-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Liu et al. | Wavelet-based dual-branch network for image demoiréing | |
Li et al. | Survey of single image super‐resolution reconstruction | |
CN109242888B (en) | Infrared and visible light image fusion method combining image significance and non-subsampled contourlet transformation | |
EP2950267B1 (en) | Image denoising method and image denoising apparatus | |
CN109509163B (en) | FGF-based multi-focus image fusion method and system | |
CN110443761B (en) | Single image rain removing method based on multi-scale aggregation characteristics | |
CN112508810A (en) | Non-local mean blind image denoising method, system and device | |
Wang et al. | Multi-direction dictionary learning based depth map super-resolution with autoregressive modeling | |
CN103020898B (en) | Sequence iris image super resolution ratio reconstruction method | |
KR102402677B1 (en) | Method and apparatus for image convergence | |
Kaur et al. | Survey on multifocus image fusion techniques | |
Zhao et al. | A deep cascade of neural networks for image inpainting, deblurring and denoising | |
CN104657951A (en) | Multiplicative noise removal method for image | |
CN113221925A (en) | Target detection method and device based on multi-scale image | |
CN116091322B (en) | Super-resolution image reconstruction method and computer equipment | |
Arulkumar et al. | Super resolution and demosaicing based self learning adaptive dictionary image denoising framework | |
CN109003247B (en) | Method for removing color image mixed noise | |
CN113379660B (en) | Multi-dimensional rule multi-focus image fusion method and system | |
CN112508828A (en) | Multi-focus image fusion method based on sparse representation and guided filtering | |
Du et al. | Dehazing Network: Asymmetric Unet Based on Physical Model | |
CN112927169B (en) | Remote sensing image denoising method based on wavelet transformation and improved weighted kernel norm minimization | |
Choi et al. | Fast super-resolution algorithm using ELBP classifier | |
CN114820850B (en) | Image sparse reconstruction method based on attention mechanism | |
KR101650897B1 (en) | Window size zooming method and the apparatus for lower resolution contents | |
CN110322409B (en) | Improved wavelet transform image fusion method based on labeled graph |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||