CN110288558B - Super-depth-of-field image fusion method and terminal - Google Patents

Super-depth-of-field image fusion method and terminal

Info

Publication number: CN110288558B
Application number: CN201910561628.1A
Authority: CN (China)
Prior art keywords: frequency information, low, synthesized, image, information set
Legal status: Active (assumed; not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN110288558A (en)
Inventors: 陈兵, 邹兴文, 逄宗元
Current Assignee: XINTU PHOTONICS Co.,Ltd.
Original Assignee: Xintu Photonics Co ltd
Application filed by Xintu Photonics Co ltd
Priority to CN201910561628.1A
Publication of CN110288558A
Application granted
Publication of CN110288558B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 5/77
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20024: Filtering details
    • G06T 2207/20048: Transform domain processing
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging

Abstract

The invention discloses a super-depth-of-field image fusion method and terminal: each image to be fused is split through a Laplacian pyramid to obtain a high-frequency information set and a low-frequency information set; the low-frequency information set is subjected to guided filtering to obtain synthesized low-frequency information; and the synthesized high-frequency information and low-frequency information are reconstructed to obtain the super-depth-of-field image.

Description

Super-depth-of-field image fusion method and terminal
Technical Field
The invention relates to the field of image processing, in particular to a super-depth-of-field image fusion method and a terminal.
Background
Multi-focus image fusion is an important branch of image fusion technology and is mainly used for processing captured pictures. When imaging a scene, the limited focusing range of the optical system makes it difficult for a general optical imaging system to form clear images of objects at different distances in the scene. When the imaging system is focused on an object, that object forms a sharp image on the image plane, while objects at other distances appear blurred to varying degrees. Because of this imaging mechanism of the optical lens, no matter how much the resolution of the imaging system improves, the influence of the limited focusing range on the overall imaged picture cannot be avoided; that is, it is difficult to obtain clear images of all objects in the same scene by the imaging system alone. In order to reflect the information of a scene more fully and truly, it is desirable to obtain a clear image of all objects in the scene. One method for solving this problem is to focus on different objects in the scene respectively to obtain several multi-focus images of the scene, then fuse these images, extracting the clear region of each, thereby obtaining a fused image in which all objects in the scene are clear. Multi-focus image fusion technology allows objects at different imaging distances to be presented clearly in one image, laying a good foundation for subsequent processing such as feature extraction and image recognition, effectively improving the utilization of image information and facilitating target detection and recognition. However, fused images obtained by existing multi-focus image fusion methods suffer from small particle blocks, so the fused image differs noticeably from the original image.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a super-depth-of-field image fusion method and terminal that thoroughly solve the problem of small particle blocks in fused images, so that the fused image is closer to the original image.
In order to solve the above technical problem, one technical scheme adopted by the invention is:
A super-depth-of-field image fusion method, comprising the following steps:
S1, aligning an image sequence to be fused, wherein the focal point of each image in the image sequence is different;
S2, performing Laplacian pyramid splitting on each image in the aligned image sequence, extracting the high-frequency information and low-frequency information of each image, and obtaining a high-frequency information set and a low-frequency information set corresponding to the image sequence;
S3, obtaining synthesized high-frequency information from the high-frequency information set, performing guided filtering on the low-frequency information set to obtain synthesized low-frequency information, and performing Laplacian pyramid reconstruction from the synthesized high-frequency information and the synthesized low-frequency information to obtain a super-depth-of-field image.
In order to solve the above technical problem, another technical scheme adopted by the invention is:
A super-depth-of-field image fusion terminal, comprising a memory, a processor and a computer program stored on the memory and runnable on the processor, wherein the processor implements the following steps when executing the computer program:
S1, aligning an image sequence to be fused, wherein the focal point of each image in the image sequence is different;
S2, performing Laplacian pyramid splitting on each image in the aligned image sequence, extracting the high-frequency information and low-frequency information of each image, and obtaining a high-frequency information set and a low-frequency information set corresponding to the image sequence;
S3, obtaining synthesized high-frequency information from the high-frequency information set, performing guided filtering on the low-frequency information set to obtain synthesized low-frequency information, and performing Laplacian pyramid reconstruction from the synthesized high-frequency information and the synthesized low-frequency information to obtain a super-depth-of-field image.
The invention has the beneficial effects that: the images to be fused are split through the Laplacian pyramid to obtain a high-frequency information set and a low-frequency information set, guided filtering is performed on the low-frequency information set to obtain synthesized low-frequency information, and the synthesized high-frequency information and low-frequency information are reconstructed to obtain a super-depth-of-field image, eliminating the small particle blocks in the fused image and making it closer to the original image.
Drawings
Fig. 1 is a flowchart illustrating steps of a super-depth-of-field image fusion method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a super-depth-of-field image fusion terminal according to an embodiment of the present invention;
description of reference numerals:
1. super-depth-of-field image fusion terminal; 2. memory; 3. processor.
Detailed Description
In order to explain the technical content, achieved objects and effects of the present invention in detail, the following description is given with reference to the accompanying drawings in combination with the embodiments.
Referring to fig. 1, a super-depth-of-field image fusion method includes the steps of:
S1, aligning an image sequence to be fused, wherein the focal point of each image in the image sequence is different;
S2, performing Laplacian pyramid splitting on each image in the aligned image sequence, extracting the high-frequency information and low-frequency information of each image, and obtaining a high-frequency information set and a low-frequency information set corresponding to the image sequence;
S3, obtaining synthesized high-frequency information from the high-frequency information set, performing guided filtering on the low-frequency information set to obtain synthesized low-frequency information, and performing Laplacian pyramid reconstruction from the synthesized high-frequency information and the synthesized low-frequency information to obtain a super-depth-of-field image.
From the above description, the beneficial effects of the invention are: the images to be fused are split through the Laplacian pyramid to obtain a high-frequency information set and a low-frequency information set, guided filtering is performed on the low-frequency information set to obtain synthesized low-frequency information, and the synthesized high-frequency information and low-frequency information are reconstructed to obtain a super-depth-of-field image, eliminating the small particle blocks in the fused image and making it closer to the original image.
Further, the step S1 includes:
extracting feature points of each image in the image sequence through the surf matching algorithm, and screening out a preset number of matching points;
calculating surf feature descriptors of a preset dimension for the preset number of matching points, and performing coarse matching between images according to the surf feature descriptors;
calculating a transition matrix between the coarsely matched images through the ransac algorithm, and aligning the corresponding images according to the transition matrix.
According to the above description, the surf matching algorithm is used to extract image feature points and compute their feature descriptors for coarse matching, and the ransac algorithm is used to compute the transition matrix between the coarsely matched images; this aligns the images to be fused, enables accurate matching between the images, and improves the accuracy of the subsequent fusion.
Further, obtaining the synthesized high-frequency information from the high-frequency information set and performing guided filtering on the low-frequency information set to obtain the synthesized low-frequency information in step S3 includes:
selecting the high-frequency information with the largest absolute value in the high-frequency information set as the synthesized high-frequency information;
calculating, by guided filtering, the weight corresponding to each piece of low-frequency information in the low-frequency information set;
weighting and summing each piece of low-frequency information in the low-frequency information set with its corresponding weight to obtain the synthesized low-frequency information.
According to the above description, the high-frequency information with the largest absolute value is selected as the synthesized high-frequency information, the weight corresponding to each piece of low-frequency information in the low-frequency information set is calculated by guided filtering, and each piece of low-frequency information is weighted by that weight to obtain the synthesized low-frequency information; this prevents the small-particle phenomenon in low-frequency synthesis and avoids small particles in the fused image, so that the super-depth-of-field composite is clear, fine and transparent and can present more detailed information.
Further, after the guided filtering of the low-frequency information set to obtain the synthesized low-frequency information in step S3, the method further includes:
performing region growing in a preset neighbourhood on the synthesized low-frequency information, judging whether the grown region of each pixel point in the synthesized low-frequency information is smaller than a preset value, and if so, removing the pixel point.
From the above description, whether the synthesized low-frequency information contains a "hole" can be determined by the region growing method, and any hole found is removed; isolated small regions are thereby eliminated, ensuring the integrity of the fused image.
Further, the step S2 includes:
performing Gaussian filtering on each image in the aligned image sequence according to a preset number of levels, extracting the high-frequency information of each layer of each image and the low-frequency information of the highest layer of each image, and obtaining the high-frequency information set and low-frequency information set corresponding to the image sequence;
in step S3, selecting the high-frequency information with the largest absolute value in the high-frequency information set as the synthesized high-frequency information includes:
selecting, within each layer of high-frequency information in the high-frequency information set, the high-frequency information with the largest absolute value as the synthesized high-frequency information of that layer;
in step S3, performing Laplacian pyramid reconstruction from the synthesized high-frequency information and low-frequency information to obtain the super-depth-of-field image includes:
performing the following recursion on the synthesized low-frequency information from the highest layer down to the bottom: after the low-frequency information is upsampled and Gaussian filtered, the high-frequency information of the corresponding level is added to serve as the low-frequency information of the next level.
As can be seen from the above description, by performing Laplacian pyramid splitting and reconstruction at a preset number of levels on the image sequence to be fused, features and details in different frequency bands of different decomposition layers can be extracted and displayed, and features and details from different images can be fused together with a good fusion effect.
Referring to fig. 2, a super-depth-of-field image fusion terminal includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the computer program:
S1, aligning an image sequence to be fused, wherein the focal point of each image in the image sequence is different;
S2, performing Laplacian pyramid splitting on each image in the aligned image sequence, extracting the high-frequency information and low-frequency information of each image, and obtaining a high-frequency information set and a low-frequency information set corresponding to the image sequence;
S3, obtaining synthesized high-frequency information from the high-frequency information set, performing guided filtering on the low-frequency information set to obtain synthesized low-frequency information, and performing Laplacian pyramid reconstruction from the synthesized high-frequency information and the synthesized low-frequency information to obtain a super-depth-of-field image.
From the above description, the beneficial effects of the invention are: the images to be fused are split through the Laplacian pyramid to obtain a high-frequency information set and a low-frequency information set, guided filtering is performed on the low-frequency information set to obtain synthesized low-frequency information, and the synthesized high-frequency information and low-frequency information are reconstructed to obtain a super-depth-of-field image, eliminating the small particle blocks in the fused image and making it closer to the original image.
Further, the step S1 includes:
extracting feature points of each image in the image sequence through the surf matching algorithm, and screening out a preset number of matching points;
calculating surf feature descriptors of a preset dimension for the preset number of matching points, and performing coarse matching between images according to the surf feature descriptors;
calculating a transition matrix between the coarsely matched images through the ransac algorithm, and aligning the corresponding images according to the transition matrix.
According to the above description, the surf matching algorithm is used to extract image feature points and compute their feature descriptors for coarse matching, and the ransac algorithm is used to compute the transition matrix between the coarsely matched images; this aligns the images to be fused, enables accurate matching between the images, and improves the accuracy of the subsequent fusion.
Further, obtaining the synthesized high-frequency information from the high-frequency information set and performing guided filtering on the low-frequency information set to obtain the synthesized low-frequency information in step S3 includes:
selecting the high-frequency information with the largest absolute value in the high-frequency information set as the synthesized high-frequency information;
calculating, by guided filtering, the weight corresponding to each piece of low-frequency information in the low-frequency information set;
weighting and summing each piece of low-frequency information in the low-frequency information set with its corresponding weight to obtain the synthesized low-frequency information.
According to the above description, the high-frequency information with the largest absolute value is selected as the synthesized high-frequency information, the weight corresponding to each piece of low-frequency information in the low-frequency information set is calculated by guided filtering, and each piece of low-frequency information is weighted by that weight to obtain the synthesized low-frequency information; this prevents the small-particle phenomenon in low-frequency synthesis and avoids small particles in the fused image, so that the super-depth-of-field composite is clear, fine and transparent and can present more detailed information.
Further, after the guided filtering of the low-frequency information set to obtain the synthesized low-frequency information in step S3, the method further includes:
performing region growing in a preset neighbourhood on the synthesized low-frequency information, judging whether the grown region of each pixel point in the synthesized low-frequency information is smaller than a preset value, and if so, removing the pixel point.
From the above description, whether the synthesized low-frequency information contains a "hole" can be determined by the region growing method, and any hole found is removed; isolated small regions are thereby eliminated, ensuring the integrity of the fused image.
Further, the step S2 includes:
performing Gaussian filtering on each image in the aligned image sequence according to a preset number of levels, extracting the high-frequency information of each layer of each image and the low-frequency information of the highest layer of each image, and obtaining the high-frequency information set and low-frequency information set corresponding to the image sequence;
in step S3, selecting the high-frequency information with the largest absolute value in the high-frequency information set as the synthesized high-frequency information includes:
selecting, within each layer of high-frequency information in the high-frequency information set, the high-frequency information with the largest absolute value as the synthesized high-frequency information of that layer;
in step S3, performing Laplacian pyramid reconstruction from the synthesized high-frequency information and low-frequency information to obtain the super-depth-of-field image includes:
performing the following recursion on the synthesized low-frequency information from the highest layer down to the bottom: after the low-frequency information is upsampled and Gaussian filtered, the high-frequency information of the corresponding level is added to serve as the low-frequency information of the next level.
As can be seen from the above description, by performing Laplacian pyramid splitting and reconstruction at a preset number of levels on the image sequence to be fused, features and details in different frequency bands of different decomposition layers can be extracted and displayed, and features and details from different images can be fused together with a good fusion effect.
Example one
Referring to fig. 1, a super-depth-of-field image fusion method includes the steps of:
S1, aligning an image sequence to be fused, wherein the focal point of each image in the image sequence is different;
specifically, feature points of each image in the image sequence are extracted through the surf matching algorithm, and a preset number of matching points are screened out;
surf feature descriptors of a preset dimension are calculated for the preset number of matching points, and coarse matching between images is performed according to the surf feature descriptors;
a transition matrix between the coarsely matched images is calculated through the ransac algorithm, and the corresponding images are aligned according to the transition matrix;
preferably, all feature points of each image can be extracted through the surf matching algorithm, the 500 best matching points are screened out, and 64-dimensional surf feature descriptors are calculated for these 500 matching points;
coarse matching between images is performed according to the surf feature descriptors, using nearest-neighbour coarse matching;
finally, the transition matrix between the coarsely matched images is calculated through the ransac algorithm, and the corresponding images are aligned by the transition matrix;
during image alignment, the above process may be performed pairwise between two images, so that all images in the image sequence become aligned (a minimal sketch of this alignment step is given below);
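The following Python sketch is a non-authoritative illustration of the alignment step, assuming opencv-contrib-python is installed (SURF lives in cv2.xfeatures2d and is patented, so SIFT or ORB may be substituted if it is unavailable); the function name align_pair and the Hessian threshold are illustrative choices, while the 500 best matches, 64-dimensional descriptors, nearest-neighbour coarse matching and ransac transition matrix follow the embodiment:

```python
import cv2
import numpy as np

def align_pair(ref_gray, mov_gray):
    # SURF keypoints with 64-dimensional descriptors (extended=False)
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400, extended=False)
    kp_r, des_r = surf.detectAndCompute(ref_gray, None)
    kp_m, des_m = surf.detectAndCompute(mov_gray, None)

    # nearest-neighbour coarse matching; keep the 500 best matches
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = sorted(matcher.match(des_m, des_r), key=lambda m: m.distance)[:500]

    src = np.float32([kp_m[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # transition matrix estimated by RANSAC, then used to align the image
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = ref_gray.shape[:2]
    return cv2.warpPerspective(mov_gray, H, (w, h))
```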
S2, performing Laplacian pyramid splitting on each image in the aligned image sequence, extracting the high-frequency information and low-frequency information of each image, and obtaining a high-frequency information set and a low-frequency information set corresponding to the image sequence;
specifically, Gaussian filtering is performed on each image in the aligned image sequence according to a preset number of levels, the high-frequency information of each layer of each image and the low-frequency information of the highest layer of each image are extracted, and the high-frequency information set and low-frequency information set corresponding to the image sequence are obtained;
the specific implementation is as follows:
S2.1, take the original image A as the bottommost layer image LA_0 (layer 0 of the Laplacian pyramid) and convolve it with a Gaussian kernel W to obtain the image GA_0;
S2.2, subtract GA_0 from LA_0 to obtain the layer-0 high-frequency information HA_0;
S2.3, downsample GA_0 (removing even rows and columns) to obtain the next-layer image LA_1 (layer 1 of the Laplacian pyramid), and repeat S2.1 and S2.2 to obtain the high-frequency information HA_0, HA_1, ..., HA_N of each layer and the low-frequency information LA_N of the highest layer, where N is the preset number of levels;
steps S2.1 to S2.3 realize the decomposition of each image; if there are M images, M groups of high-frequency information HA_0, HA_1, ..., HA_N and of highest-layer low-frequency information LA_N are obtained, forming the high-frequency information set and low-frequency information set corresponding to the image sequence (a minimal sketch follows);
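A minimal Python sketch of steps S2.1 to S2.3 for one image, assuming NumPy and OpenCV; the 5x5, sigma = 1 Gaussian kernel anticipates the preferred parameters given below:

```python
import cv2
import numpy as np

def laplacian_split(img, levels):
    """Steps S2.1-S2.3: per-layer high-frequency info plus the
    top-layer low-frequency info LA_N for one image."""
    la = img.astype(np.float32)
    highs = []
    for _ in range(levels):
        ga = cv2.GaussianBlur(la, (5, 5), 1.0)  # S2.1: convolve with kernel W
        highs.append(la - ga)                   # S2.2: HA_i = LA_i - GA_i
        la = ga[::2, ::2]                       # S2.3: drop every other row/column
    return highs, la                            # high-frequency layers, LA_N
```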
S3, obtaining synthesized high-frequency information from the high-frequency information set, performing guided filtering on the low-frequency information set to obtain synthesized low-frequency information, and performing Laplacian pyramid reconstruction from the synthesized high-frequency information and the synthesized low-frequency information to obtain a super-depth-of-field image;
in step S3, obtaining the synthesized high-frequency information from the high-frequency information set includes:
selecting the high-frequency information with the largest absolute value in the high-frequency information set as the synthesized high-frequency information;
specifically, within each layer of high-frequency information in the high-frequency information set, the high-frequency information with the largest absolute value is selected as the synthesized high-frequency information of that layer;
assuming there are two images in total, the obtained high-frequency information set is {HA_0, HA_1, ..., HA_N, HB_0, HB_1, ..., HB_N} and the synthesized high-frequency information is {H_0, H_1, ..., H_N}, comprising N+1 levels each with its own synthesized high-frequency information, where
$$H_i(m,n)=\begin{cases}HA_i(m,n), & |HA_i(m,n)| \ge |HB_i(m,n)|\\ HB_i(m,n), & \text{otherwise}\end{cases}\quad i = 0, 1, \ldots, N,$$
and (m, n) denotes the pixel position;
preferably, a 5-layer Laplacian pyramid decomposition may be performed, applying to the original image a Gaussian filter with a window of 5 and σ = 1, whose kernel entries are
$$W(x,y)=\frac{1}{2\pi\sigma^2}\,e^{-\frac{x^2+y^2}{2\sigma^2}},\quad \sigma = 1,\; x, y \in \{-2, \ldots, 2\},$$
normalized so that the kernel sums to 1; the filtered image is subtracted from the original image to serve as the high-frequency information of the corresponding layer, and the filtered image is decimated by taking every other row and column to serve as the input image of the next layer;
Gaussian filtering with a window of 3 and σ = 1 is applied to the high-frequency information of each layer, and then the high-frequency information with the largest absolute value is selected as the synthesized high-frequency information of each layer (see the sketch below);
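A sketch of this per-layer high-frequency synthesis, under the assumption (one reading of the embodiment) that the 3x3, sigma = 1 Gaussian smoothing is applied to the absolute high-frequency response before the max-absolute-value selection:

```python
import cv2
import numpy as np

def fuse_high(per_image_highs):
    """Per layer, keep the value whose absolute response is largest across
    the M source images; per_image_highs is [highs_A, highs_B, ...]."""
    fused = []
    for layer in zip(*per_image_highs):          # tuple (HA_i, HB_i, ...)
        stack = np.stack(layer)                  # shape (M, h, w)
        act = np.stack([cv2.GaussianBlur(np.abs(h), (3, 3), 1.0)
                        for h in layer])         # smoothed activity maps
        idx = np.argmax(act, axis=0)             # winning image per pixel
        fused.append(np.take_along_axis(stack, idx[None], axis=0)[0])
    return fused                                 # [H_0, H_1, ..., H_N]
```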
the step of performing guided filtering on the low-frequency information set to obtain the synthesized low-frequency information includes:
calculating, by guided filtering, the weight corresponding to each piece of low-frequency information in the low-frequency information set;
weighting and summing each piece of low-frequency information in the low-frequency information set with its corresponding weight to obtain the synthesized low-frequency information;
specifically, assuming there are two pictures in total, pyramid decomposition yields the highest-layer low-frequency information LA_N and LB_N of each picture;
the synthesis weights W1 and W2 of LA_N and LB_N are calculated by guided filtering, and the synthesized low-frequency information is then L_N = W1 * LA_N + W2 * LB_N;
the guided filtering calculation proceeds as follows:
LA_N is regarded as the input image P, and the weights W1 and W2 are calculated with G as the guide map, where G(m, n) = max(LA_N(m, n), LB_N(m, n)) and (m, n) denotes the pixel position;
a guide image G, an input image P and an output image Q are set; the goals of guided filtering are to make the output Q as close as possible to the input P while making its texture similar to that of the guide map G;
to meet the first objective, the squared difference between input and output is minimized: $\min\,(Q-P)^2$;
to meet the second objective, the texture of the output image Q is required to be similar to that of the guide map G:
$$\nabla Q = \alpha\,\nabla G$$
integrating gives Q = αG + b;
consider a small window $W_k$ inside which α and b are assumed constant and written $\alpha_k, b_k$;
the pixels inside $W_k$ then satisfy
$$q_i = \alpha_k g_i + b_k,\quad i \in W_k \qquad (1)$$
substituting (1) into the first objective, so that the pixels in the window satisfy the above two conditions simultaneously, gives the cost
$$E(\alpha_k, b_k) = \sum_{i \in W_k}\left[(\alpha_k g_i + b_k - p_i)^2 + \varepsilon\,\alpha_k^2\right] \qquad (2)$$
where ε is a regularization term that penalizes large $\alpha_k$; preferably ε = 0.01 and the guide window is 3;
to minimize (2), the partial derivatives with respect to $b_k$ and $\alpha_k$ must vanish:
$$\frac{\partial E}{\partial b_k}=\sum_{i\in W_k}2\,(\alpha_k g_i + b_k - p_i)=0 \qquad (3)$$
$$\frac{\partial E}{\partial \alpha_k}=\sum_{i\in W_k}\left[2\,g_i(\alpha_k g_i + b_k - p_i)+2\,\varepsilon\,\alpha_k\right]=0 \qquad (4)$$
where |W| is the total number of pixels in the window $W_k$; let $\bar{p}_k$ be the mean of the input image P in the window $W_k$, and let $\mu_k$ and $\sigma_k^2$ be the mean and variance of the guide map G in $W_k$; solving then gives
$$\alpha_k=\frac{\frac{1}{|W|}\sum_{i\in W_k}g_i p_i-\mu_k\bar{p}_k}{\sigma_k^2+\varepsilon} \qquad (5)$$
$$b_k=\bar{p}_k-\alpha_k\mu_k \qquad (6)$$
where the numerator $\frac{1}{|W|}\sum_{i\in W_k}g_i p_i-\mu_k\bar{p}_k$ is the covariance of the guide map G and the input image P within $W_k$;
after $\alpha_k$ and $b_k$ are calculated, the output pixels of the window $W_k$ can be calculated according to (1);
for a pixel i, the output value $q_i$ is related to all windows $W_k$ covering pixel i, and $q_i$ differs from window to window; a simple strategy is to average all possible $q_i$ values;
calculating $\alpha_k, b_k$ for all windows $W_k$ covering pixel i, of which there are |W|, gives
$$q_i=\frac{1}{|W|}\sum_{k:\,i\in W_k}(\alpha_k g_i+b_k) \qquad (7)$$
$$\;\;=\bar{\alpha}_i g_i+\bar{b}_i \qquad (8)$$
where
$$\bar{\alpha}_i=\frac{1}{|W|}\sum_{k\in W_i}\alpha_k,\qquad \bar{b}_i=\frac{1}{|W|}\sum_{k\in W_i}b_k;$$
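Equations (1) to (8) translate directly into a box-filter implementation; the following sketch assumes OpenCV and NumPy, with radius = 1 giving the 3x3 guide window and eps = 0.01 matching the preferred ε:

```python
import cv2
import numpy as np

def guided_filter(G, P, radius=1, eps=0.01):
    """Guided filtering per equations (1)-(8); radius=1 is the 3x3 guide
    window and eps=0.01 the preferred regularizer of the embodiment."""
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean = lambda x: cv2.boxFilter(x, -1, ksize)     # window means
    G, P = G.astype(np.float32), P.astype(np.float32)

    mu, pbar = mean(G), mean(P)                      # mu_k, pbar_k
    var = mean(G * G) - mu * mu                      # sigma_k^2
    cov = mean(G * P) - mu * pbar                    # cov(G, P) over W_k

    a = cov / (var + eps)                            # alpha_k, eq. (5)
    b = pbar - a * mu                                # b_k, eq. (6)
    return mean(a) * G + mean(b)                     # q_i, eqs. (7)-(8)
```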
specifically, when the guide map G is identical to the input image P, the guided filter is edge-preserving and smoothing, which is analyzed as follows:
when G = P, clearly the covariance equals the variance $\sigma_k^2$, so from (5) and (6):
$$\alpha_k=\frac{\sigma_k^2}{\sigma_k^2+\varepsilon},\qquad b_k=\mu_k(1-\alpha_k);$$
when ε = 0, $\alpha_k = 1$ and $b_k = 0$, i.e. the output equals the input image; if ε > 0, consider two cases:
first, high variance: if the image P varies strongly within the window $W_k$, then $\sigma_k^2 \gg \varepsilon$, so $\alpha_k \approx 1$ and $b_k \approx 0$;
second, flat patch: $\sigma_k^2 \ll \varepsilon$, so $\alpha_k \approx 0$ and $b_k \approx \mu_k$; if the whole input image is as flat as the window $W_k$, averaging the $\alpha_k, b_k$ still yields $\bar{\alpha}_i \approx 0$ and $\bar{b}_i \approx \mu_k$, hence $q_i \approx \mu_k$;
thus, when a pixel lies in a high-variance window its output value remains essentially unchanged, while in a flat region its output value becomes the average of the surrounding window pixels; the criterion separating "high variance" from "flat" is controlled by the parameter ε (a window whose variance is much smaller than ε is smoothed, one whose variance is much larger is preserved), and the window size determines how large a neighbourhood of pixels is referenced when computing the variance and mean;
in this way, the parameters of the guided filtering are calculated according to equations (5) to (8) and the output image Q can be computed;
the guided-filtering result at pixel point i can also be expressed as a weighted average
$$q_i=\sum_j W_{ij}(G)\,p_j \qquad (9)$$
where i and j are both pixel indices;
the filter weight $W_{ij}$ is a function of the guide map G and is independent of P;
to calculate the filter weights, substitute (6) into (8) and eliminate b, obtaining:
$$q_i=\frac{1}{|W|}\sum_{k\in W_i}\left(\alpha_k(g_i-\mu_k)+\bar{p}_k\right) \qquad (10)$$
calculating the partial derivative:
$$\frac{\partial q_i}{\partial p_j}=\frac{1}{|W|}\sum_{k\in W_i}\left[(g_i-\mu_k)\frac{\partial \alpha_k}{\partial p_j}+\frac{\partial \bar{p}_k}{\partial p_j}\right] \qquad (11)$$
where $\frac{\partial \bar{p}_k}{\partial p_j}$ is 0 when j is not in the window $W_k$, and otherwise
$$\frac{\partial \alpha_k}{\partial p_j}=\frac{1}{\sigma_k^2+\varepsilon}\cdot\frac{1}{|W|}(g_j-\mu_k),\quad j\in W_k \qquad (12)$$
$$\frac{\partial \bar{p}_k}{\partial p_j}=\frac{1}{|W|},\quad j\in W_k \qquad (13)$$
bringing (12) and (13) into (11) gives
$$\frac{\partial q_i}{\partial p_j}=\frac{1}{|W|^2}\sum_{k:\,(i,j)\in W_k}\left[1+\frac{(g_i-\mu_k)(g_j-\mu_k)}{\sigma_k^2+\varepsilon}\right] \qquad (14)$$
i.e. the weight of the output image is
$$W_{ij}(G)=\frac{1}{|W|^2}\sum_{k:\,(i,j)\in W_k}\left[1+\frac{(g_i-\mu_k)(g_j-\mu_k)}{\sigma_k^2+\varepsilon}\right]$$
so that the output image satisfies $Q_{ij}=W_{ij}\times P_{ij}$;
where $W_{ij}$ is the weight corresponding to the pixel point (i, j) of the low-frequency information (i.e. the input image); for example, taking the low-frequency information LA_N and LB_N as input images, the weights WA_ij and WB_ij of each pixel point are calculated by the guided filtering described above, the weighted results of LA_N and LB_N are QA = WA_ij * LA_N(i, j) and QB = WB_ij * LB_N(i, j), and the finally synthesized low-frequency information is L_N = QA + QB (a hedged sketch of this weight construction is given below);
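The patent does not spell out how the per-pixel weights are initialised before the guided filtering; the sketch below assumes, as in guided-filtering-based fusion schemes, a binary decision map (1 where LA_N attains the pixelwise max) smoothed by the guided_filter of the previous sketch; this initialisation is therefore an assumption, not the patented formula:

```python
import numpy as np

def fuse_low(LA, LB, guided_filter):
    # Guide map G(m, n) = max(LA_N(m, n), LB_N(m, n)), as in the description.
    G = np.maximum(LA, LB).astype(np.float32)
    # Assumed initial binary map: 1 where LA_N attains the pixelwise max.
    P1 = (LA >= LB).astype(np.float32)
    W1 = guided_filter(G, P1)          # smoothed weight for LA_N
    W2 = guided_filter(G, 1.0 - P1)    # smoothed weight for LB_N
    s = W1 + W2
    W1, W2 = W1 / s, W2 / s            # normalise so W1 + W2 = 1 per pixel
    return W1 * LA + W2 * LB           # L_N = W1*LA_N + W2*LB_N
```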
Performing Laplacian pyramid reconstruction from the synthesized high-frequency information and low-frequency information to obtain the super-depth-of-field image includes performing the following recursion on the synthesized low-frequency information from the highest layer down to the bottom: the low-frequency information is upsampled and Gaussian filtered, and the high-frequency information of the corresponding level is added to serve as the low-frequency information of the next level;
specifically, the synthesized low-frequency information is upsampled by a factor of 2 and Gaussian filtered with a window of 5 and σ = 1, and the synthesized top-level high frequency H_N is added to obtain the next-layer input G_{N-1};
G_{N-1} is then upsampled by a factor of 2 and Gaussian filtered with a window of 5 and σ = 1, and the synthesized layer-(N-1) high frequency H_{N-1} is added to obtain the next-layer input G_{N-2}; the recursion proceeds from the upper layers of the pyramid to the lower layers, finally yielding a fused super-depth-of-field image the same size as the input images (see the sketch below).
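A sketch of this top-down recursion, assuming zero-insertion upsampling; the x4 gain compensates for the inserted zeros (three quarters of the samples) so that brightness is preserved after the window-5, sigma-1 Gaussian filtering:

```python
import cv2
import numpy as np

def laplacian_reconstruct(low, fused_high):
    """Top-down recursion of S3: low is the synthesized top-layer image,
    fused_high holds the synthesized high frequencies, finest first."""
    g = low.astype(np.float32)
    for h in reversed(fused_high):                    # start at the top layer
        up = np.zeros((g.shape[0] * 2, g.shape[1] * 2), np.float32)
        up[::2, ::2] = g                              # 2x zero-insertion upsample
        up = 4.0 * cv2.GaussianBlur(up, (5, 5), 1.0)  # window 5, sigma 1
        g = up[:h.shape[0], :h.shape[1]] + h          # add the layer's high freq
    return g                                          # fused super-depth image
```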
Example two
The difference between the present embodiment and the first embodiment is:
In step S3, after performing the guided filtering on the low-frequency information set to obtain the synthesized low-frequency information, the method further includes:
performing region growing in a preset neighbourhood on the synthesized low-frequency information, judging whether the grown region of each pixel point in the synthesized low-frequency information is smaller than a preset value, and if so, removing the pixel point;
preferably, 4-neighbourhood region growing is performed on the synthesized low-frequency information; if the grown region of a point contains fewer than 10000 pixel points, the point is judged to be a hole and is removed.
Specifically, after the guided filtering of the low-frequency information is calculated, the weights W1 and W2 of the two pieces of low-frequency information are calculated respectively, and W1_ij and W2_ij are compared at each pixel point (i, j) to obtain a matrix C, where
$$C_{ij}=\begin{cases}1, & W1_{ij} > W2_{ij}\\ 0, & \text{otherwise}\end{cases}$$
four-neighbourhood region growing is then performed on each point of the matrix C with value 1: for C_ij it is checked whether the points above, below, to the left and to the right are 1, and the check is repeated from every point found to be 1 until the boundary of the matrix C is reached, counting the number n of points with value 1; if n < 10000, C_ij is a hole and is removed (a sketch using connected-component labelling follows).
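Growing every seed of value 1 over its 4-neighbourhood and thresholding the region size is equivalent to 4-connected component labelling; the following sketch uses SciPy for the labelling (an assumption; the patent does not prescribe a library):

```python
import numpy as np
from scipy import ndimage

def remove_holes(C, min_area=10000):
    """Remove 4-connected regions of 1s in C smaller than min_area pixels."""
    four_conn = np.array([[0, 1, 0],
                          [1, 1, 1],
                          [0, 1, 0]])                  # 4-neighbourhood
    labels, n = ndimage.label(C == 1, structure=four_conn)
    sizes = ndimage.sum(C == 1, labels, index=range(1, n + 1))
    holes = np.isin(labels, np.flatnonzero(sizes < min_area) + 1)
    out = C.copy()
    out[holes] = 0                                     # drop the hole pixels
    return out
```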
EXAMPLE III
Referring to fig. 2, a super-depth-of-field image fusion terminal 1 includes a memory 2, a processor 3, and a computer program stored in the memory 2 and executable on the processor 3; the processor 3 implements the steps of the first embodiment when executing the computer program.
Example four
Referring to fig. 2, a super-depth-of-field image fusion terminal 1 includes a memory 2, a processor 3, and a computer program stored in the memory 2 and executable on the processor 3; the processor 3 implements the steps of the second embodiment when executing the computer program.
In summary, in the super-depth-of-field image fusion method and terminal provided by the invention, the images to be fused are split through the Laplacian pyramid to obtain a high-frequency information set and a low-frequency information set; the high-frequency information set is synthesized by taking the value with the largest absolute value; guided filtering of the low-frequency information set yields the weights of the low-frequency information, which is then synthesized by weighting; isolated small regions are removed from the synthesized low-frequency information by the region growing method; and the synthesized high-frequency information and low-frequency information are reconstructed to obtain the super-depth-of-field image. This solves the problems of abundant particles and water stains in depth-of-field composites produced by existing depth-of-field fusion methods, thoroughly eliminates the small particle blocks in the fused image, and yields a super-depth-of-field image that is closer to the original image and is clear, fine and transparent, presenting more detailed information.
The above description presents only embodiments of the present invention and does not limit its patent scope; all equivalent transformations made using the contents of the present specification and drawings, whether applied directly or indirectly in related technical fields, are likewise included within the patent protection scope of the present invention.

Claims (6)

1. A super-depth-of-field image fusion method, characterized by comprising the following steps:
S1, aligning an image sequence to be fused, wherein the focal point of each image in the image sequence is different and the image sequence comprises two images to be fused {A, B};
S2, performing Laplacian pyramid splitting on each image in the aligned image sequence, extracting the high-frequency information and low-frequency information of each image, and obtaining a high-frequency information set and a low-frequency information set corresponding to the image sequence, wherein the high-frequency information set is {HA_0, HA_1, ..., HA_N, HB_0, HB_1, ..., HB_N}, the low-frequency information set is {LA_N, LB_N}, and N is a preset number of layers;
S3, obtaining synthesized high-frequency information from the high-frequency information set, performing guided filtering on the low-frequency information set to obtain synthesized low-frequency information, and performing Laplacian pyramid reconstruction from the synthesized high-frequency information and the synthesized low-frequency information to obtain a super-depth-of-field image;
in step S3, obtaining the synthesized high-frequency information from the high-frequency information set and performing guided filtering on the low-frequency information set to obtain the synthesized low-frequency information comprises:
selecting the high-frequency information with the largest absolute value in the high-frequency information set as the synthesized high-frequency information;
calculating, by guided filtering, the weight corresponding to each pixel point of each piece of low-frequency information in the low-frequency information set;
weighting each piece of low-frequency information in the low-frequency information set by the corresponding weight of each pixel point to obtain the synthesized low-frequency information;
in step S3, after performing the guided filtering on the low-frequency information set to obtain the synthesized low-frequency information, the method further comprises:
performing region growing in a preset neighbourhood on the synthesized low-frequency information, judging whether the grown region of each pixel point in the synthesized low-frequency information is smaller than a preset value, and if so, removing the pixel point;
wherein performing the region growing, judging whether the grown region is smaller than the preset value and, if so, removing the pixel point comprises:
comparing the weights W1_ij and W2_ij corresponding to each pixel point (i, j) of the guided-filtered low-frequency information of the two images to obtain the value of each point C_ij of a matrix C, where
$$C_{ij}=\begin{cases}1, & W1_{ij} > W2_{ij}\\ 0, & \text{otherwise}\end{cases}$$
performing four-neighbourhood region growing on each point of the matrix C with value 1: checking whether the points above, below, to the left and to the right are 1, repeating the check from every point found to be 1 until the boundary of the matrix C is reached, and counting the number of points with value 1; if that number is less than the preset value, the pixel point corresponding to C_ij in the synthesized low-frequency information is a hole and is removed.
2. The super-depth-of-field image fusion method according to claim 1, wherein step S1 comprises:
extracting feature points of each image in the image sequence through the surf matching algorithm, and screening out a preset number of matching points;
calculating surf feature descriptors of a preset dimension for the preset number of matching points, and performing coarse matching between images according to the surf feature descriptors;
calculating a transition matrix between the coarsely matched images through the ransac algorithm, and aligning the corresponding images according to the transition matrix.
3. The super-depth-of-field image fusion method according to claim 1, wherein step S2 comprises:
performing Gaussian filtering on each image in the aligned image sequence according to a preset number of levels, extracting the high-frequency information of each layer of each image and the low-frequency information of the highest layer of each image, and obtaining the high-frequency information set and low-frequency information set corresponding to the image sequence;
in step S3, selecting the high-frequency information with the largest absolute value in the high-frequency information set as the synthesized high-frequency information comprises:
selecting, within each layer of high-frequency information in the high-frequency information set, the high-frequency information with the largest absolute value as the synthesized high-frequency information of that layer;
in step S3, performing Laplacian pyramid reconstruction from the synthesized high-frequency information and low-frequency information to obtain the super-depth-of-field image comprises:
performing the following recursion on the synthesized low-frequency information from the highest layer down to the bottom: after the low-frequency information is upsampled and Gaussian filtered, the high-frequency information of the corresponding level is added to serve as the low-frequency information of the next level.
4. A super-depth-of-field image fusion terminal, comprising a memory, a processor and a computer program stored on the memory and runnable on the processor, characterized in that the processor implements the following steps when executing the computer program:
S1, aligning an image sequence to be fused, wherein the focal point of each image in the image sequence is different and the image sequence comprises two images to be fused {A, B};
S2, performing Laplacian pyramid splitting on each image in the aligned image sequence, extracting the high-frequency information and low-frequency information of each image, and obtaining a high-frequency information set and a low-frequency information set corresponding to the image sequence, wherein the high-frequency information set is {HA_0, HA_1, ..., HA_N, HB_0, HB_1, ..., HB_N}, the low-frequency information set is {LA_N, LB_N}, and N is a preset number of layers;
S3, obtaining synthesized high-frequency information from the high-frequency information set, performing guided filtering on the low-frequency information set to obtain synthesized low-frequency information, and performing Laplacian pyramid reconstruction from the synthesized high-frequency information and the synthesized low-frequency information to obtain a super-depth-of-field image;
in step S3, obtaining the synthesized high-frequency information from the high-frequency information set and performing guided filtering on the low-frequency information set to obtain the synthesized low-frequency information comprises:
selecting the high-frequency information with the largest absolute value in the high-frequency information set as the synthesized high-frequency information;
calculating, by guided filtering, the weight corresponding to each pixel point of each piece of low-frequency information in the low-frequency information set;
weighting each piece of low-frequency information in the low-frequency information set by the corresponding weight of each pixel point to obtain the synthesized low-frequency information;
in step S3, after performing the guided filtering on the low-frequency information set to obtain the synthesized low-frequency information, the method further comprises:
performing region growing in a preset neighbourhood on the synthesized low-frequency information, judging whether the grown region of each pixel point in the synthesized low-frequency information is smaller than a preset value, and if so, removing the pixel point;
wherein performing the region growing, judging whether the grown region is smaller than the preset value and, if so, removing the pixel point comprises:
comparing the weights W1_ij and W2_ij corresponding to each pixel point (i, j) of the guided-filtered low-frequency information of the two images to obtain the value of each point C_ij of a matrix C, where
$$C_{ij}=\begin{cases}1, & W1_{ij} > W2_{ij}\\ 0, & \text{otherwise}\end{cases}$$
performing four-neighbourhood region growing on each point of the matrix C with value 1: checking whether the points above, below, to the left and to the right are 1, repeating the check from every point found to be 1 until the boundary of the matrix C is reached, and counting the number of points with value 1; if that number is less than the preset value, the pixel point corresponding to C_ij in the synthesized low-frequency information is a hole and is removed.
5. The super-depth-of-field image fusion terminal according to claim 4, wherein step S1 comprises:
extracting feature points of each image in the image sequence through the surf matching algorithm, and screening out a preset number of matching points;
calculating surf feature descriptors of a preset dimension for the preset number of matching points, and performing coarse matching between images according to the surf feature descriptors;
calculating a transition matrix between the coarsely matched images through the ransac algorithm, and aligning the corresponding images according to the transition matrix.
6. The super-depth-of-field image fusion terminal according to claim 4, wherein step S2 comprises:
performing Gaussian filtering on each image in the aligned image sequence according to a preset number of levels, extracting the high-frequency information of each layer of each image and the low-frequency information of the highest layer of each image, and obtaining the high-frequency information set and low-frequency information set corresponding to the image sequence;
in step S3, selecting the high-frequency information with the largest absolute value in the high-frequency information set as the synthesized high-frequency information comprises:
selecting, within each layer of high-frequency information in the high-frequency information set, the high-frequency information with the largest absolute value as the synthesized high-frequency information of that layer;
in step S3, performing Laplacian pyramid reconstruction from the synthesized high-frequency information and low-frequency information to obtain the super-depth-of-field image comprises:
performing the following recursion on the synthesized low-frequency information from the highest layer down to the bottom: after the low-frequency information is upsampled and Gaussian filtered, the high-frequency information of the corresponding level is added to serve as the low-frequency information of the next level.
CN201910561628.1A 2019-06-26 2019-06-26 Super-depth-of-field image fusion method and terminal Active CN110288558B (en)

Priority Applications (1)

Application Number: CN201910561628.1A (CN110288558B); Priority Date: 2019-06-26; Filing Date: 2019-06-26; Title: Super-depth-of-field image fusion method and terminal

Applications Claiming Priority (1)

Application Number: CN201910561628.1A (CN110288558B); Priority Date: 2019-06-26; Filing Date: 2019-06-26; Title: Super-depth-of-field image fusion method and terminal

Publications (2)

Publication Number Publication Date
CN110288558A CN110288558A (en) 2019-09-27
CN110288558B true CN110288558B (en) 2021-08-31

Family

ID=68006167

Family Applications (1)

Application Number: CN201910561628.1A; Status: Active; Publication: CN110288558B (en); Priority Date: 2019-06-26; Filing Date: 2019-06-26; Title: Super-depth-of-field image fusion method and terminal

Country Status (1)

Country Link
CN (1) CN110288558B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104182954A (en) * 2014-08-27 2014-12-03 中国科学技术大学 Real-time multi-modal medical image fusion method
CN104835130A (en) * 2015-04-17 2015-08-12 北京联合大学 Multi-exposure image fusion method
CN105654503A (en) * 2014-11-11 2016-06-08 无锡清杨机械制造有限公司 Dynamic target detection method based on video images
CN106815827A (en) * 2017-01-18 2017-06-09 聚龙智瞳科技有限公司 Image interfusion method and image fusion device based on Bayer format
CN107316285A (en) * 2017-07-05 2017-11-03 江南大学 The image interfusion method detected towards apple quality
CN108564536A (en) * 2017-12-22 2018-09-21 洛阳中科众创空间科技有限公司 A kind of global optimization method of depth map
CN108830818A (en) * 2018-05-07 2018-11-16 西北工业大学 A kind of quick multi-focus image fusing method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103530878B (en) * 2013-10-12 2016-01-13 北京工业大学 A kind of edge extracting method based on convergence strategy
CN109166088B (en) * 2018-07-10 2022-01-28 南京理工大学 Dual-waveband gray molten pool image fusion method based on non-downsampling wavelet transform

Also Published As

Publication number Publication date
CN110288558A (en) 2019-09-27

Similar Documents

Publication Publication Date Title
Zhu et al. Joint bi-layer optimization for single-image rain streak removal
Gao et al. Image super-resolution with sparse neighbor embedding
US8498498B2 (en) Apparatus and method of obtaining high resolution image
CN111402146B (en) Image processing method and image processing apparatus
CN105678723B (en) Multi-focus image fusing method based on sparse decomposition and difference image
Wu et al. Demosaicing based on directional difference regression and efficient regression priors
Ma et al. Defocus image deblurring network with defocus map estimation as auxiliary task
CN112184604B (en) Color image enhancement method based on image fusion
CN105894484A (en) HDR reconstructing algorithm based on histogram normalization and superpixel segmentation
Liu et al. Survey of natural image enhancement techniques: Classification, evaluation, challenges, and perspectives
Banerjee et al. In-camera automation of photographic composition rules
Singh et al. Weighted least squares based detail enhanced exposure fusion
CN115731146A (en) Multi-exposure image fusion method based on color gradient histogram feature light stream estimation
KR102466061B1 (en) Apparatus for denoising using hierarchical generative adversarial network and method thereof
Habeeb et al. Contrast enhancement for visible-infrared image using image fusion and sharpen filters
Seo Image denoising and refinement based on an iteratively reweighted least squares filter
Dyomin et al. Two-dimensional representation of a digital holographic image of the volume of a medium with particles as a method of depicting and processing information concerning the particles
JP2009111921A (en) Image processing device and image processing method
Rohith et al. Super-resolution based deep learning techniques for panchromatic satellite images in application to pansharpening
CN112529773B (en) QPD image post-processing method and QPD camera
RU2583725C1 (en) Method and system for image processing
CN110288558B (en) Super-depth-of-field image fusion method and terminal
Geng et al. Cervical cytopathology image refocusing via multi-scale attention features and domain normalization
Mahmood Shape from focus by total variation
Yang et al. A depth map generation algorithm based on saliency detection for 2D to 3D conversion

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
CB02: Change of applicant information
    Address after: No.1, 5th floor, unit 1, building 5, No.399, west section of Fucheng Avenue, Chengdu hi tech Zone, China (Sichuan) pilot Free Trade Zone, Chengdu, Sichuan 610000
    Applicant after: Tumaisi (Chengdu) Technology Co.,Ltd.
    Address before: No.9, 6th floor, unit 1, building 6, No.399, west section of Fucheng Avenue, Chengdu hi tech Zone, China (Sichuan) pilot Free Trade Zone, Chengdu, Sichuan 610000
    Applicant before: NANOMETER VISUAL SENSE (CHENGDU) TECHNOLOGY Co.,Ltd.
TA01: Transfer of patent application right
    Effective date of registration: 20210803
    Address after: 3 / F, building 5, wanwanshe intelligent industrial park, 2 Yangqi Branch Road, Cangshan District, Fuzhou City, Fujian Province, 350000
    Applicant after: XINTU PHOTONICS Co.,Ltd.
    Address before: No.1, 5th floor, unit 1, building 5, No.399, west section of Fucheng Avenue, Chengdu hi tech Zone, China (Sichuan) pilot Free Trade Zone, Chengdu, Sichuan 610000
    Applicant before: Tumaisi (Chengdu) Technology Co.,Ltd.
GR01: Patent grant