CN109447930B - Wavelet domain light field full-focusing image generation algorithm
- Publication number: CN109447930B
- Application number: CN201811259275.1A
- Authority: CN (China)
- Legal status: Active (status assumed by Google Patents; not a legal conclusion)
Classifications (all under G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL)
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T2207/10052—Images from lightfield camera
- G06T2207/20221—Image fusion; Image merging
- G06T2207/30168—Image quality inspection
Abstract
The invention relates to a wavelet domain light field full-focus image generation algorithm in the field of full-focus image fusion. The 4D light field data captured by a microlens-array light field camera are spatially transformed and projected to obtain a multi-focus image set for full-focus fusion; each multi-focus image is wavelet-decomposed to extract high-frequency and low-frequency sub-image sets, and a region-balanced Laplacian operator and a pixel visibility function are proposed to construct the high-frequency and low-frequency wavelet coefficients of the fused image, respectively. The algorithm effectively avoids the blocking artifacts of conventional spatial-domain light field fusion and yields a higher-quality light field full-focus image, with performance superior to traditional region sharpness evaluation functions. Experiments on raw Lytro light field camera data verify the correctness and effectiveness of the method: compared with conventional fusion algorithms, the fused full-focus images look better to the human eye and also score higher on objective image metrics.
Description
Technical Field
The invention belongs to the field of full-focus image fusion, and particularly relates to a wavelet domain light field full-focus image generation algorithm.
Background
With the rise of computational photography and the development of light field imaging theory, light field cameras have attracted wide attention at home and abroad over the past decade. Compared with a conventional camera, a microlens light field camera inserts a microlens array behind the main lens and thereby records both the position and the direction of rays in space. This multi-dimensional light field record makes later processing and applications convenient, such as digital refocusing, full-focus image generation, and depth estimation. In particular, a light field camera can compute refocused images at any depth in the scene from a single exposure, which is its most prominent and widely noted technical highlight. On this basis, the acquisition of high-quality light field texture images and the computation of high-precision depth information have been studied in depth. Because it is not constrained by the multiple refocused exposures a conventional camera needs to obtain multi-focus images, full-focus image fusion based on light field digital refocusing has become an important application branch of light field cameras, and it also matters for later super-resolution reconstruction of texture and depth images and for generating light field video files.
At present, conventional full-focus fusion methods fall into two groups: spatial-domain and transform-domain. Spatial-domain methods evaluate sharpness per pixel or per block and assemble the full-focus image from the best pixels of the different inputs; they are fast, but suffer from blocking artifacts. Transform-domain methods decompose each image into sub-images at different resolution levels or frequency bands and reconstruct the focused image by evaluating those levels or sub-images, which effectively avoids blocking artifacts. The wavelet transform, a common transform-domain tool, decomposes the images to be fused into a series of frequency channels; the resulting pyramid structure yields high-frequency and low-frequency sub-images, which are fused separately and then inverse-transformed to obtain the full-focus image. The quality of wavelet-based fusion is determined by the fusion rules chosen for the high- and low-frequency sub-images: low-frequency sub-images are usually fused by averaging, while high-frequency sub-images are typically evaluated with a Sobel, Prewitt, or Laplacian operator to establish the fusion rule.
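As an illustration of this conventional baseline, the sketch below fuses two multi-focus images with the mean rule for low frequencies and a maximum-magnitude rule for high frequencies. It is a minimal example of the classical scheme just described, not the invention's method; the pywt library, the db2 wavelet, and the three-level decomposition are assumptions made for the sketch.

```python
import numpy as np
import pywt  # PyWavelets

def baseline_wavelet_fusion(img_a, img_b, wavelet="db2", levels=3):
    """Conventional wavelet-domain fusion of two grayscale multi-focus images.

    Low-frequency (approximation) coefficients are averaged, while each
    high-frequency (detail) coefficient keeps the larger-magnitude value,
    mirroring the classical rules described above.
    """
    ca = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=levels)
    cb = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=levels)

    fused = [(ca[0] + cb[0]) / 2.0]            # mean rule for the low band
    for da, db in zip(ca[1:], cb[1:]):         # (cH, cV, cD) per level
        fused.append(tuple(
            np.where(np.abs(ha) >= np.abs(hb), ha, hb)  # max-magnitude rule
            for ha, hb in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```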
The second derivatives of the conventional Laplacian in the x and y directions may well have opposite signs and cancel, existing Laplacian-based algorithms have weak noise immunity, and microlens calibration errors introduce local noise into refocused images. In addition, the conventional weighted-average rule for low-frequency coefficients reduces the contrast of the fused image and loses useful information from the originals.
Disclosure of Invention
The invention aims to address the low contrast of images captured by existing consumer light field cameras, the limited resolution of the multi-focus image set produced by digital refocusing, and the local noise caused by calibration errors, and proposes a wavelet-domain light field full-focus image generation algorithm.
To solve these technical problems, the invention adopts the following scheme. The wavelet domain light field full-focus image generation algorithm is realized in the following steps:
step 1): decoding the raw light field image data to obtain a 4D light field, selecting different α_n (n = 1, 2, 3, ..., N), and obtaining refocused images at different spatial depths by digital refocusing;
step 2): applying wavelet decomposition to each refocused image to extract its high-frequency and low-frequency sub-image sets;
step 3): adopting the region-balanced Laplacian (BL) operator for the high-frequency sub-images and the pixel visibility (PV) function for the low-frequency sub-images as image fusion sharpness evaluation indexes, to fuse the high-frequency and low-frequency coefficients respectively;
step 4): performing the inverse wavelet transform on the fused high- and low-frequency coefficients to obtain the fused full-focus image.
Further, the BL operator used in step 3) is the region-balanced Laplacian operator, whose expression is as follows:

where S×T denotes the size of the equalization region and S, T can only take odd values; s and t denote the second-derivative step lengths in the horizontal and vertical directions; ω(s,t) denotes a weight factor: the closer a point lies to the center, the larger its weight factor and its contribution to the Laplacian value; conversely, the farther from the center, the smaller its contribution.
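The operator's formula appears only as an image in the source and cannot be recovered verbatim. Consistent with the definitions above (an S×T equalization region, second-derivative steps s and t, and a center-weighted factor, denoted ω(s,t) here as assumed notation), one plausible reconstruction of its general shape is:

$$\mathrm{BL}(i,j)=\sum_{s=1}^{\lfloor S/2\rfloor}\sum_{t=1}^{\lfloor T/2\rfloor}\omega(s,t)\Big(\big|2f(i,j)-f(i-s,j)-f(i+s,j)\big|+\big|2f(i,j)-f(i,j-t)-f(i,j+t)\big|\Big)$$

with ω(s,t) decreasing as the offset (s,t) moves away from the window center. This is an assumed sketch, not the patent's exact expression.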
Further, the PV function in step 3) is the pixel visibility function, specified as follows:

where S×T denotes a rectangular neighborhood centered on the current pixel and S, T can only take odd values; s and t denote the scanning steps in the horizontal and vertical directions within the neighborhood; Ī_{S×T} denotes the mean gray value of the pixels in the S×T region.
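As with the BL operator, the PV formula survives only as an image. A form consistent with the description (an S×T neighborhood, scanning steps s and t, local mean Ī_{S×T}, and the visibility normalization of the VI measure given later in the detailed description) would be:

$$\mathrm{PV}(i,j)=\sum_{s=-\lfloor S/2\rfloor}^{\lfloor S/2\rfloor}\;\sum_{t=-\lfloor T/2\rfloor}^{\lfloor T/2\rfloor}\frac{\big|f(i+s,j+t)-\bar{I}_{S\times T}(i,j)\big|}{\bar{I}_{S\times T}(i,j)^{\gamma}}$$

again an assumed reconstruction rather than the patent's exact formula.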
Further, the low-frequency coefficients and the high-frequency coefficients follow the same fusion rule; taking the high-frequency coefficients as an example, the rule is as follows:

where W_H^{α_n}(i,j) denotes the high-frequency sub-image of the refocused image at depth α_n after wavelet decomposition, n = 1, 2, 3, ..., N, and N denotes the number of refocused frames participating in the full-focus fusion; D(BL^{α_n}(i,j)) denotes the difference between the balanced Laplacian values of corresponding points in any two high-frequency sub-images; max[·] and min[·] take the maximum and minimum; H_H is a user-defined threshold (H_H is set to 0.1, because when the difference between two balanced Laplacian values is below 0.1 it is small enough to be neglected). When the minimum difference exceeds the threshold, the high-frequency coefficient whose balanced Laplacian energy is largest among the N frames is taken as the fusion coefficient; when the difference falls below the threshold, the final fusion coefficient is determined by multiplying the high-frequency coefficients of the frames by weight factors.
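The rule itself is rendered as an image in the source; from the prose, its structure is the following piecewise choice, where the normalized-BL form of the weight factor ω_n is an assumption:

$$W_H^F(i,j)=\begin{cases}W_H^{\alpha_k}(i,j),\quad k=\arg\max_n \mathrm{BL}^{\alpha_n}(i,j), & \text{if } \min D\big(\mathrm{BL}^{\alpha_n}(i,j)\big)>H_H\\[4pt]\displaystyle\sum_{n=1}^{N}\omega_n(i,j)\,W_H^{\alpha_n}(i,j),\quad \omega_n(i,j)=\frac{\mathrm{BL}^{\alpha_n}(i,j)}{\sum_{m=1}^{N}\mathrm{BL}^{\alpha_m}(i,j)}, & \text{otherwise}\end{cases}$$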
The invention performs image fusion with the wavelet transform. First, the 4D light field is decoded and a digital refocusing algorithm produces multi-focus images at different depths; each multi-focus image set is then wavelet-decomposed and pyramid-reconstructed to build the high- and low-frequency sub-image sets; finally, the proposed region-balanced Laplacian operator and pixel visibility function construct the high- and low-frequency wavelet coefficients of the fused image, respectively, to realize the fusion. The algorithm effectively turns raw light field data into a full-focus image, avoids the blocking artifacts of conventional spatial-domain fusion algorithms, yields a higher-quality light field full-focus image, and improves fusion quality over traditional algorithms.
Drawings
The present invention will be described in further detail with reference to the accompanying drawings.
FIG. 1 is a flow chart of the algorithm of the present invention.
FIG. 2 is a light field biplane parameterized model.
Fig. 3 is a digital refocusing schematic diagram of a light field camera.
Fig. 4 is a diagram of the BL operator.
Fig. 5 is a demonstration diagram of the fusion process of Leaves sample images.
FIG. 6 is a comparison graph of different fusion algorithms of the Flower sample image.
FIG. 7 is a comparison graph of different fusion algorithms of Forest sample images.
FIG. 8 is a comparison of different fusion algorithms for Zither sample images.
Detailed Description
In order to make the objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
As shown in fig. 1, the specific process of the algorithm is as follows:
Step 1) the raw light field image is decoded to obtain the 4D light field; different α_n (n = 1, 2, 3, ..., N) are selected and digital refocusing produces refocused images at different spatial depths;
Step 2) each refocused image is wavelet-decomposed into high-frequency and low-frequency sub-images;
Step 3) the BL operator and the PV function are adopted as image fusion sharpness evaluation indexes to fuse the high- and low-frequency coefficients, respectively;
Step 4) finally, the fused full-focus image is obtained by the inverse wavelet transform.
The specific process of step 1) is as follows. According to the biplane parameterized model of the light field, shown in fig. 2, any ray in space is determined by its intersections with two planes. Let the main lens plane of the light field camera be the (u, v) plane, the sensor plane be the (x, y) plane, and the 4D light field recorded by the camera be L_F(x, y, u, v). The integral image on the focal plane of the plenoptic camera then follows from the classical photographic imaging formula, equation (1) below, where F denotes the distance between the main lens plane and the focal plane and X×Y×U×V is the size of the 4D light field matrix L_F(x, y, u, v). If the image plane is moved from F to F', the new 4D light field matrix is L_{F'}(x', y', u', v'), and the refocused image on the new focal plane is given by equation (2) below.
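Equations (1) and (2) are rendered as images in the source; the standard forms from light field photography theory, which the surrounding text matches, are assumed here:

$$E_F(x,y)=\frac{1}{F^2}\iint L_F(x,y,u,v)\,du\,dv \qquad (1)$$

$$E_{F'}(x',y')=\frac{1}{F'^2}\iint L_{F'}(x',y',u',v')\,du'\,dv' \qquad (2)$$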
let F' be alphanF, taking a section of the 4D space to obtain the geometric relationship between the coordinates for the convenience of graphical representation, as shown in FIG. 3. According to the principle of similar triangle, canThe coordinates of the obtained new light field and the original light field meet the following requirements:
x' = u + (x − u)·α_n = α_n·x + (1 − α_n)·u (3)
u' = u (4)
and similarly:
y' = v + (y − v)·α_n = α_n·y + (1 − α_n)·v (5)
v' = v (6)
equations (3) - (6) can be expressed in matrix form:
wherein, [ x ', y', u ', v']TRepresents a line vector [ x ', y', u ', v']The transpose of (a) is performed,the coordinate transformation matrix is expressed in the following specific form:
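Although equation (7) is rendered as an image in the source, its entries follow directly from equations (3)-(6), so it can be reconstructed with confidence:

$$\begin{bmatrix}x'\\y'\\u'\\v'\end{bmatrix}=\begin{bmatrix}\alpha_n&0&1-\alpha_n&0\\0&\alpha_n&0&1-\alpha_n\\0&0&1&0\\0&0&0&1\end{bmatrix}\begin{bmatrix}x\\y\\u\\v\end{bmatrix}\qquad(7)$$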
equation (7) is also equivalent to the following equation:
according to equation (9), equation (2) can be rewritten as:
changing alphanThe purpose of changing the position of the image plane can be achieved, and then refocusing pictures with different spatial depths are obtained.
The specific process of step 2) is as follows. Following wavelet-transform image fusion theory, each image to be fused is decomposed by the wavelet transform into a series of frequency channels, and the resulting pyramid structure is used to construct the high- and low-frequency sub-images. The process can be described as:

where (x, y) denotes the image coordinate system and (i, j) the wavelet-domain coordinate system; W[·] denotes the wavelet pyramid decomposition operator, W_H[·] extracts the high-frequency coefficients (high-frequency sub-images) after the pyramid decomposition, and W_L[·] extracts the low-frequency coefficients (low-frequency sub-images).
The BL operator adopted in step 3) is the region-balanced Laplacian operator, whose expression is as follows:

where S×T denotes the size of the equalization region and S, T can only take odd values; s and t denote the second-derivative step lengths in the horizontal and vertical directions; ω(s,t) denotes a weight factor: the closer a point lies to the center, the larger its weight factor and its contribution to the Laplacian value; conversely, the farther from the center, the smaller its contribution. Fig. 4 shows the balanced Laplacian operator for S = 5 and T = 5.
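A sketch of the operator as described (S×T region, second-derivative steps, center-biased weights) might be implemented as follows; the inverse-distance weight is an assumption, since the patent's weight expression is not recoverable from the text.

```python
import numpy as np

def balanced_laplacian(img, S=5, T=5):
    """Region-balanced Laplacian (BL) sharpness map with edge padding.

    For every step (s, t) inside the S x T equalisation region, the absolute
    second differences along x and y are accumulated with a weight that
    decays with distance from the window centre (assumed inverse-distance).
    """
    assert S % 2 == 1 and T % 2 == 1, "S and T must be odd"
    c = max(S, T) // 2 + 1                       # padding / offset amount
    f = np.pad(img.astype(np.float64), (c,), mode="edge")
    h, w = img.shape
    bl = np.zeros((h, w))
    centre = f[c:c + h, c:c + w]                 # original image region
    for s in range(1, S // 2 + 1):
        for t in range(1, T // 2 + 1):
            wgt = 1.0 / np.hypot(s, t)           # assumed weight factor
            d2x = np.abs(2 * centre - f[c - s:c - s + h, c:c + w]
                                    - f[c + s:c + s + h, c:c + w])
            d2y = np.abs(2 * centre - f[c:c + h, c - t:c - t + w]
                                    - f[c:c + h, c + t:c + t + w])
            bl += wgt * (d2x + d2y)
    return bl
```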
The high-frequency sub-images of the wavelet transform capture abrupt brightness changes, i.e., edge features. The Laplacian sharpens edges and lines of any orientation while remaining isotropic, so it is widely used to evaluate the sharpness of high-frequency sub-images. To counter the likelihood that the second derivatives of the Laplacian in the x and y directions have opposite signs, and to fully account for the influence of surrounding points on the sharpness evaluation at the current position, the invention proposes the region-balanced Laplacian operator, which balances the energy by increasing the number and directions of the second derivatives.
Considering that microlens calibration errors introduce local noise into refocused images and that the Laplacian is sensitive to noise, bilateral filtering is applied as preprocessing before the high-frequency sub-images are fused. The high-frequency coefficient fusion rule based on the region-balanced Laplacian is as follows:

where W_H^{α_n}(i,j) denotes the high-frequency sub-image of the refocused image at depth α_n after wavelet decomposition, n = 1, 2, 3, ..., N, and N denotes the number of refocused frames participating in the full-focus fusion; D(BL^{α_n}(i,j)) denotes the difference between the balanced Laplacian values of corresponding points in any two high-frequency sub-images; max[·] and min[·] take the maximum and minimum; H_H is a user-defined threshold (set to 0.1, because a difference between two balanced Laplacian values below 0.1 is small enough to be neglected). When the minimum difference exceeds the threshold, the high-frequency coefficient with the largest balanced Laplacian energy among the N frames is taken as the fusion coefficient; when the difference falls below the threshold, the final fusion coefficient is determined by multiplying the high-frequency coefficients of the frames by weight factors.
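A sketch of this rule for the two-image case, which the pairwise difference test most directly describes, could look as follows; the threshold value 0.1 comes from the text, while the normalized-BL weight factors are an assumption.

```python
import numpy as np

def fuse_high(coef_a, coef_b, bl_a, bl_b, hh=0.1):
    """Fuse two high-frequency sub-bands using their BL sharpness maps.

    Where the BL values differ by more than hh, the coefficient with the
    larger BL energy wins outright; otherwise the coefficients are blended
    with BL-proportional weights (the assumed weight-factor form).
    """
    diff = np.abs(bl_a - bl_b)
    winner = np.where(bl_a >= bl_b, coef_a, coef_b)  # max-energy choice
    wa = bl_a / (bl_a + bl_b + 1e-12)                # normalised weights
    blended = wa * coef_a + (1.0 - wa) * coef_b
    return np.where(diff > hh, winner, blended)
```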
The low-frequency coefficients obtained by the wavelet pyramid decomposition in step 2) mainly reflect the average gray-level characteristics of the original image. The simplest way to compute the low-frequency fusion coefficients is weighted averaging, but averaging reduces the contrast of the fused image and loses useful information from the originals. Gradient-based measures, such as spatial frequency and point-sharpness operators, have also been applied to low-frequency coefficient fusion. For the low-frequency coefficients of light field images, the invention draws on the concept of Image Visibility (VI), which is rooted in human visual characteristics and is defined in equation (15) below, where P×Q denotes the size of the image I(i, j), Ī its mean value, and γ a visual constant in the range 0.6-0.7; the larger the VI value, the higher the image visibility.
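Equation (15) is an image in the source; a common form of the visibility measure consistent with the symbols just described, assumed here, is:

$$\mathrm{VI}=\frac{1}{P\times Q}\sum_{i=1}^{P}\sum_{j=1}^{Q}\frac{\big|I(i,j)-\bar{I}\big|}{\bar{I}^{\gamma}} \qquad (15)$$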
When fusing the low-frequency sub-images, directly applying equation (15) yields only a single VI value for the whole image, which cannot support region-level or pixel-level fusion of multiple images. To establish an effective low-frequency coefficient evaluation index, equation (15) is refined into a pixel visibility (PV) function, specified as follows:

where S×T denotes a rectangular neighborhood centered on the current pixel and S, T can only take odd values; s and t denote the scanning steps in the horizontal and vertical directions within the neighborhood; Ī_{S×T} denotes the mean gray value of the pixels in the S×T region. Low-frequency coefficient fusion then follows the same rule as the high-frequency coefficients.
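A sketch of a PV map under these definitions is given below; the window-averaged deviation is an approximation of the windowed sum in the description, and scales only by a constant factor, which does not affect its use as a relative sharpness index.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def pixel_visibility(img, S=5, T=5, gamma=0.65):
    """Pixel visibility (PV) map over an S x T neighbourhood.

    mean_local is the neighbourhood mean gray value; pixel deviations from
    it are pooled over the window and normalised by mean_local**gamma,
    following the VI definition restricted to a local window (the exact
    windowed form is an assumption).
    """
    f = img.astype(np.float64)
    mean_local = uniform_filter(f, size=(S, T), mode="nearest")
    mean_local = np.maximum(mean_local, 1e-6)   # guard against division by 0
    dev = uniform_filter(np.abs(f - mean_local), size=(S, T), mode="nearest")
    return dev / mean_local**gamma
```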
Finally, the inverse wavelet transform is applied to the fused high- and low-frequency coefficients to obtain the fused full-focus image.
The wavelet domain light field full-focus image generation algorithm of the invention has been described in detail above; its validity is verified below with concrete examples.
The experiments use raw images captured with a Lytro light field camera. Fig. 5(a) shows a light field raw image; figs. 5(b), (c), and (d) show three multi-focus images at different spatial depths computed from equation (10) with α = 0.52, 0.78, and 0.98, respectively, the focal depth moving gradually from foreground to background. Fig. 5(e) is the full-focus image computed by the method of the invention: the region framed by the red dashed line is clearly sharper than the corresponding region of image (b), the region framed by the yellow dashed line clearly sharper than that of image (c), and the region framed by the white dashed line clearly sharper than that of image (d).
To visually evaluate the advantages of the proposed algorithm, three classical wavelet-transform image fusion methods (the Sobel, Prewitt, and Laplacian algorithms) are selected for comparison, with three groups of light field raw images (Flower, Forest, and Zither) as experimental data.
Figs. 6, 7, and 8 show the corresponding experimental results. Panels (a) and (b) of each figure are the refocused images of the three light field raw images at α_1 and α_2, respectively, with the focus depth shifting from foreground to background; panels (c), (d), (e), and (f) of figs. 6-8 are the full-focus images obtained by the Sobel algorithm, the Prewitt algorithm, the conventional Laplacian algorithm, and the algorithm of the invention. Visually, the sharpness of the Sobel and Prewitt fusion results in the dashed rectangular regions of fig. 6 is clearly inferior to that of the algorithm of the invention; the same holds in the dashed regions of fig. 7; and in fig. 8, the dashed region around the plant leaves fused by the Prewitt algorithm is likewise clearly inferior. The proposed light field full-focus fusion method therefore has a visible advantage in visual quality.
In addition, considering the limitations of human visual inspection, objective evaluation indexes are further selected to assess image quality and verify the superiority of the algorithm. Information entropy (E), average gradient (AG), image sharpness (FD), and edge intensity (EI) are chosen as evaluation indexes for the full-focus images obtained by the different methods in figs. 6, 7, and 8.

E measures the amount of information: the larger the value, the more information the image carries. AG reflects the image's ability to render the contrast of fine details: the higher the value, the stronger that ability. FD expresses the degree of image sharpness: the higher the value, the sharper the image. EI reflects edge strength: the higher the value, the crisper the image edges. The results for each index are listed in tables 1, 2, and 3.
Comparing the data in the tables shows that the proposed algorithm outperforms the three traditional wavelet-transform methods on all four objective indexes, reflecting its feasibility and effectiveness.
TABLE 1 Comparison of performance indexes of different fusion algorithms for Flower sample images

| | E | FD | AG | EI |
|---|---|---|---|---|
| Sobel algorithm | 6.8676 | 6.8991 | 6.2470 | 66.5340 |
| Prewitt algorithm | 6.8634 | 6.3270 | 5.8326 | 62.6420 |
| Laplace algorithm | 6.8830 | 7.6837 | 6.8668 | 72.3073 |
| Algorithm of the invention | 6.8896 | 7.8498 | 7.0055 | 73.7203 |
TABLE 2 Comparison of performance indexes of different fusion algorithms for Forest sample images

| | E | FD | AG | EI |
|---|---|---|---|---|
| Sobel algorithm | 5.7544 | 2.9136 | 2.5328 | 26.5157 |
| Prewitt algorithm | 5.7492 | 2.5766 | 2.2905 | 24.2735 |
| Laplace algorithm | 5.8011 | 3.5235 | 3.0018 | 31.0134 |
| Algorithm of the invention | 5.8099 | 3.6305 | 3.0875 | 31.9033 |
TABLE 3 Comparison of performance indexes of different fusion algorithms for Zither sample images

| | E | FD | AG | EI |
|---|---|---|---|---|
| Sobel algorithm | 6.2935 | 5.1865 | 4.4675 | 48.3854 |
| Prewitt algorithm | 6.2695 | 4.5182 | 4.0184 | 43.7566 |
| Laplace algorithm | 6.2716 | 5.6773 | 4.8649 | 52.4474 |
| Algorithm of the invention | 6.2987 | 6.2987 | 4.9425 | 53.1501 |
The invention completes the computation from the light field raw image to the full-focus image, realizing full-focus fusion through wavelet-domain sharpness evaluation and avoiding the blocking artifacts of conventional spatial-domain fusion algorithms. First, the decoded 4D light field data are spatially transformed and projected to obtain the multi-focus images for full-focus fusion; each multi-focus image set is then wavelet-decomposed and pyramid-reconstructed to build the high- and low-frequency sub-image sets for fusion. For the high-frequency wavelet sub-images, a sharpness evaluation function based on the region-balanced Laplacian operator is proposed; for the low-frequency wavelet sub-images, a sharpness evaluation function based on pixel visibility is proposed to improve the fusion quality of the full-focus image. Experiments show that, compared with traditional wavelet-transform algorithms, the proposed method improves the final fused image in both subjective visual quality and objective indexes.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention.
Claims (2)
1. A method for generating a wavelet domain light field full-focus image, characterized by comprising the following steps:
step 1): decoding the raw light field image data to obtain a 4D light field, selecting different α_n, n = 1, 2, 3, ..., N, and obtaining refocused images at different spatial depths by digital refocusing;
step 2): applying wavelet decomposition to each refocused image to extract its high-frequency and low-frequency sub-images;
step 3): adopting the region-balanced Laplacian (BL) operator and the pixel visibility (PV) function as image fusion sharpness evaluation indexes for the high-frequency and low-frequency sub-images, respectively, to fuse the high-frequency and low-frequency coefficients;
the BL operator is the region-balanced Laplacian operator, whose expression is as follows:

wherein S×T denotes the size of the equalization region and S, T can only take odd values; s and t denote the second-derivative step lengths in the horizontal and vertical directions; ω(s,t) denotes a weight factor, where the closer a point lies to the center, the larger its weight factor and its contribution to the Laplacian value, and conversely, the farther from the center, the smaller its contribution;
the PV function is the pixel visibility function, specified as follows:

wherein S×T denotes a rectangular neighborhood centered on the current pixel and S, T can only take odd values; s and t denote the scanning steps in the horizontal and vertical directions within the neighborhood; Ī_{S×T} denotes the mean gray value of the pixels in the S×T region;
and step 4): performing the inverse wavelet transform on the fused high- and low-frequency coefficients to obtain the fused full-focus image.
2. The wavelet domain light field full-focus image generation method according to claim 1, wherein the low-frequency coefficients and the high-frequency coefficients follow the same fusion rule; taking the high-frequency coefficients as an example, the rule is as follows:

wherein W_H^{α_n}(i,j) denotes the high-frequency sub-image of the refocused image at depth α_n after wavelet decomposition, n = 1, 2, 3, ..., N, and N denotes the number of refocused frames participating in the full-focus fusion; D(BL^{α_n}(i,j)) denotes the difference between the balanced Laplacian values of corresponding points in any two high-frequency sub-images; max[·] and min[·] take the maximum and minimum; H_H is a user-defined threshold; when the minimum difference exceeds the threshold, the high-frequency coefficient with the largest balanced Laplacian energy among the N frames is taken as the fusion coefficient, and when the difference falls below the threshold, the final fusion coefficient is determined by multiplying the high-frequency coefficients of the frames by weight factors.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811259275.1A | 2018-10-26 | 2018-10-26 | Wavelet domain light field full-focusing image generation algorithm |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN109447930A | 2019-03-08 |
| CN109447930B | 2021-08-20 |
Family
- Family ID: 65547793
- Family application: CN201811259275.1A (priority and filing date 2018-10-26), granted as CN109447930B, status Active
- Country of publication: CN
Families Citing this family (7)

| Publication number | Priority date | Assignee | Title |
|---|---|---|---|
| CN112330757B | 2019-08-05 | 复旦大学 | Complementary color wavelet measurement for evaluating color image automatic focusing definition |
| CN110662014B | 2019-09-25 | 江南大学 | Light field camera four-dimensional data large depth-of-field three-dimensional display method |
| CN111145134B | 2019-12-24 | 太原科技大学 | Block effect-based microlens light field camera full-focus image generation algorithm |
| CN112132771B | 2020-11-02 | 西北工业大学 | Multi-focus image fusion method based on light field imaging |
| CN112801913A | 2021-02-07 | 佛山中纺联检验技术服务有限公司 | Method for solving field depth limitation of microscope |
| CN113487526B | 2021-06-04 | 湖北工业大学 | Multi-focus image fusion method for improving focus definition measurement by combining high-low frequency coefficients |
| CN116847209B | 2023-08-29 | 中国测绘科学研究院 | Log-Gabor and wavelet-based light field full-focusing image generation method and system |
Patent Citations (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5952957A | 1998-05-01 | 1999-09-14 | The United States Of America As Represented By The Secretary Of The Navy | Wavelet transform of super-resolutions based on radar and infrared sensor fusion |
| CN101877125A | 2009-12-25 | 2010-11-03 | 北京航空航天大学 | Wavelet domain statistical signal-based image fusion processing method |
| CN108581869A | 2018-03-16 | 2018-09-28 | 深圳市策维科技有限公司 | Camera module alignment method |
| CN108537756A | 2018-04-12 | 2018-09-14 | 大连理工大学 | Single image dehazing method based on image fusion |
Non-Patent Citations (2)

- Y. Chai, "Multifocus image fusion scheme using focused region detection," Optics Communications, 2011-09-01.
- 叶明 (Ye Ming), "区域清晰度的小波变换图像融合算法研究" ("Research on wavelet-transform image fusion algorithms based on region sharpness"), 《电子测量与仪器学报》 (Journal of Electronic Measurement and Instrumentation), 2015-09-30.
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |