CN104504673A - Visible light and infrared images fusion method based on NSST and system thereof - Google Patents


Publication number
CN104504673A
Authority
CN
China
Prior art keywords: inf, vis, prime, frequency sub, infrared image
Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Application number
CN201410849724.3A
Other languages
Chinese (zh)
Inventors
邵振峰 (Shao Zhenfeng)
杨如红 (Yang Ruhong)
Current Assignee: Wuhan University WHU (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Wuhan University WHU
Application filed by Wuhan University WHU
Priority to CN201410849724.3A
Publication of CN104504673A


Abstract

The invention provides an NSST-based visible light and infrared image fusion method and system. The method comprises the following steps: inputting a visible light image and an infrared image and applying the NSST transform to obtain the sub-band coefficients of each image, the sub-band coefficients comprising low-frequency sub-band coefficients and high-frequency sub-band coefficients; computing the neighborhood average energy of the low-frequency sub-band coefficients and the region energy eigenvalue of each subregion, and using a weighting strategy based on the region energy eigenvalues to calculate the low-frequency sub-band coefficients of the fused image; using a matching strategy based on a fourth-order correlation coefficient to calculate the high-frequency sub-band coefficients of the fused image; and applying the inverse NSST transform to obtain the fused image. The method and system preserve detail information such as image edges and textures well, and effectively combine the target information of the infrared image with the detail information of the visible light image.

Description

NSST-based visible light and infrared image fusion method and system
Technical field
The invention belongs to the field of image processing and data fusion, and relates to a visible light and infrared image fusion method and system based on the non-subsampled shearlet transform (NSST).
Background art
Image fusion integrates images of the same target or scene acquired by different sensors, eliminating redundant information and retaining complementary information as far as possible, so as to obtain a fused image whose information is richer and more reliable. Infrared and visible light image fusion is an important application in the image fusion field. An infrared image characterizes differences in the energy radiated by a scene and is highly indicative of targets, but it is insensitive to brightness changes and has low contrast. A visible light image has low noise and high sharpness and contains richer detail information. Fusing the two kinds of images therefore combines the good target characteristics of the infrared image with the rich detail of the visible light image.
In recent years, multiscale analysis methods have produced many research results in the image fusion field, developing from the wavelet transform through the contourlet transform to the non-subsampled contourlet transform, among others. The wavelet transform achieves an optimal representation of point-singular objective functions, but its limited directionality means that it represents line singularities poorly and cannot simply be generalized to two dimensions. The contourlet transform effectively compensates for this defect and can better represent two- and higher-dimensional singularities, but it lacks translation invariance and is prone to aliasing. To achieve translation invariance, A. L. da Cunha et al. proposed the non-subsampled contourlet transform (NSCT), which retains the advantages of the contourlet transform while being translation invariant. Compared with NSCT, the more recently developed shearlet transform (ST) has a more flexible structure, higher computational efficiency, and better fusion results, but it is not translation invariant. The non-subsampled shearlet transform (NSST), an improvement of ST, offers superior image processing performance.
In the fusion process, although NSST handles decomposition and reconstruction well, the design of the fusion rules for the high- and low-frequency sub-band coefficients is equally important. Traditional rules, weighted averaging for the low-frequency band and absolute-value maximum selection for the high-frequency band, cause a large drop in the contrast of the fused image. Deng Chengzhi et al. adopted a weighted-coefficient fusion rule based on particle swarm optimization in the low-frequency band, accounting for the differences between the low-frequency components of different sensors, but used a weighted local-energy maximum-selection method directly in the high-frequency band, ignoring the mutual influence of neighboring pixels; moreover, the speed and precision of the iterative computation are hard to control. Ye et al. considered the correlation between neighboring pixels when processing the high-frequency band, adopting a selection strategy based on local variance, with a region-similarity-based strategy for the low-frequency component; the fusion result improves somewhat but retains a degree of blur.
Summary of the invention
The object of the invention is to address the shortcomings and defects of the prior art by proposing an NSST-based visible light and infrared image fusion scheme.
The technical solution adopted by the present invention is an NSST-based visible light and infrared image fusion method, comprising the following steps:
Step 1: input a visible light image and an infrared image and apply the NSST transform to obtain the sub-band coefficients of each image; the sub-band coefficients comprise low-frequency sub-band coefficients and high-frequency sub-band coefficients.
Step 2: from the low-frequency sub-band coefficients of the visible light and infrared images, calculate the low-frequency sub-band coefficients of the fused image using a weighting strategy based on region energy eigenvalues, implemented as follows.
For each pixel of the visible light image and of the infrared image, compute the neighborhood average energy of the low-frequency sub-band coefficients:

E_{vis}(m,n) = \frac{1}{X \times Y} \sum_{x=-(X-1)/2}^{(X-1)/2} \sum_{y=-(Y-1)/2}^{(Y-1)/2} \left[ C_{vis}^{l}(m+x,n+y) \right]^2

E_{inf}(m,n) = \frac{1}{X \times Y} \sum_{x=-(X-1)/2}^{(X-1)/2} \sum_{y=-(Y-1)/2}^{(Y-1)/2} \left[ C_{inf}^{l}(m+x,n+y) \right]^2

where C_{vis}^{l}(m+x,n+y) and C_{inf}^{l}(m+x,n+y) denote the low-frequency sub-band coefficients of pixel (m+x,n+y) in the visible light and infrared images respectively; E_{vis}(m,n) and E_{inf}(m,n) denote the neighborhood average energy of the visible light and infrared images at pixel (m,n); X \times Y is the preset neighborhood size; and (m+x,n+y) is any point in the neighborhood of pixel (m,n).
For the visible light image and the infrared image alike, form the neighborhood average energies of the low-frequency sub-band coefficients of all pixels into a neighborhood energy matrix, divide each matrix into several non-overlapping subregions, and compute the region energy eigenvalue of each subregion. Then, from the region energy eigenvalues of each subregion of the visible light and infrared images, calculate the low-frequency sub-band coefficients of the corresponding subregion of the fused image by the region-energy-eigenvalue weighting strategy, implemented as follows.
If a pixel (m,n) belongs to the i-th subregion of the image, the low-frequency sub-band coefficient of that pixel in the fused image is obtained from the fusion weights of the i-th subregion:

C_{fus,i}^{l}(m,n) = w_{vis,i} \times C_{vis}^{l}(m,n) + w_{inf,i} \times C_{inf}^{l}(m,n)

where C_{vis}^{l}(m,n) and C_{inf}^{l}(m,n) denote the low-frequency sub-band coefficients of pixel (m,n) in the visible light and infrared images respectively, and w_{vis,i} and w_{inf,i} denote the fusion weights of the i-th subregion of the visible light and infrared images, defined as

w_{vis,i} = E'_{vis,i} / (E'_{vis,i} + E'_{inf,i})
w_{inf,i} = E'_{inf,i} / (E'_{vis,i} + E'_{inf,i})

where E'_{vis,i} and E'_{inf,i} denote the region energy eigenvalues of the i-th subregion of the visible light and infrared images respectively.
Step 3: from the high-frequency sub-band coefficients of the visible light and infrared images, calculate the high-frequency sub-band coefficients of the fused image using a matching strategy based on a fourth-order correlation coefficient, implemented as follows.
Set a moving window; at each position of the moving window, compute the fourth-order correlation coefficient F of the high-frequency sub-band coefficients of the visible light and infrared images within the window, the window center being image pixel (m,n).
When F > th, the high-frequency sub-band coefficient of the fused image is obtained as

C_{fus}^{h}(m,n) = \begin{cases} (1-R) \cdot C_{inf}^{h}(m,n) + R \cdot C_{vis}^{h}(m,n), & \left| C_{inf}^{h}(m,n) \right| > \left| C_{vis}^{h}(m,n) \right| \\ (1-R) \cdot C_{vis}^{h}(m,n) + R \cdot C_{inf}^{h}(m,n), & \left| C_{inf}^{h}(m,n) \right| \le \left| C_{vis}^{h}(m,n) \right| \end{cases}

where the weighting coefficient R is calculated as
Otherwise:

C_{fus}^{h}(m,n) = \begin{cases} C_{inf}^{h}(m,n), & \left| C_{inf}^{h}(m,n) \right| > \left| C_{vis}^{h}(m,n) \right| \\ C_{vis}^{h}(m,n), & \left| C_{inf}^{h}(m,n) \right| \le \left| C_{vis}^{h}(m,n) \right| \end{cases}

where C_{inf}^{h}(m,n), C_{vis}^{h}(m,n) and C_{fus}^{h}(m,n) denote the high-frequency sub-band coefficients of the infrared, visible light and fused images at pixel (m,n) respectively, and th is a preset threshold.
Step 4: from the low-frequency and high-frequency sub-band coefficients of the fused image obtained in steps 2 and 3, apply the inverse NSST transform to obtain the fused image.
Furthermore, in step 2, the region energy eigenvalues E'_{vis,i} and E'_{inf,i} of the i-th subregion of the visible light and infrared images are obtained as follows. Let the center of the i-th subregion be a pixel (m,n) of the image; then

E'_{vis,i} = \frac{1}{M \times N} \sum_{a=-(M-1)/2}^{(M-1)/2} \sum_{b=-(N-1)/2}^{(N-1)/2} E_{vis}(m+a,n+b)

E'_{inf,i} = \frac{1}{M \times N} \sum_{a=-(M-1)/2}^{(M-1)/2} \sum_{b=-(N-1)/2}^{(N-1)/2} E_{inf}(m+a,n+b)

where the neighborhood average energies E_{vis}(m+a,n+b) and E_{inf}(m+a,n+b) are those of the visible light and infrared images at pixel (m+a,n+b), M \times N is the preset subregion size, and (m+a,n+b) is any point in the i-th subregion.
Furthermore, in step 3, the fourth-order correlation coefficient F is calculated as

F = \frac{1}{M' \times N'} \cdot \frac{\sum_{m'=1}^{M'} \sum_{n'=1}^{N'} \left( C_{inf}^{h}(m+m',n+n') - \mu_{inf} \right)^2 \left( C_{vis}^{h}(m+m',n+n') - \mu_{vis} \right)^2}{\sqrt{\left( \sum_{m'=1}^{M'} \sum_{n'=1}^{N'} \left( C_{inf}^{h}(m+m',n+n') - \mu_{inf} \right)^4 \right) \left( \sum_{m'=1}^{M'} \sum_{n'=1}^{N'} \left( C_{vis}^{h}(m+m',n+n') - \mu_{vis} \right)^4 \right)}}

where C_{inf}^{h}(m+m',n+n') and C_{vis}^{h}(m+m',n+n') denote the high-frequency sub-band coefficients of pixel (m+m',n+n') in the infrared and visible light images respectively, \mu_{inf} and \mu_{vis} denote the means of the high-frequency sub-band coefficients of the infrared and visible light images within the moving window, M' \times N' is the preset moving window size, and (m+m',n+n') is any point in the moving window centered on pixel (m,n).
The present invention correspondingly provides an NSST-based visible light and infrared image fusion system, comprising the following modules:
an NSST transform module, for inputting a visible light image and an infrared image and applying the NSST transform to obtain the sub-band coefficients of each image, the sub-band coefficients comprising low-frequency sub-band coefficients and high-frequency sub-band coefficients;
a low-frequency sub-band coefficient fusion module, for calculating the low-frequency sub-band coefficients of the fused image from those of the visible light and infrared images, using the weighting strategy based on region energy eigenvalues, implemented as follows.
For each pixel of the visible light image and of the infrared image, compute the neighborhood average energy of the low-frequency sub-band coefficients:

E_{vis}(m,n) = \frac{1}{X \times Y} \sum_{x=-(X-1)/2}^{(X-1)/2} \sum_{y=-(Y-1)/2}^{(Y-1)/2} \left[ C_{vis}^{l}(m+x,n+y) \right]^2

E_{inf}(m,n) = \frac{1}{X \times Y} \sum_{x=-(X-1)/2}^{(X-1)/2} \sum_{y=-(Y-1)/2}^{(Y-1)/2} \left[ C_{inf}^{l}(m+x,n+y) \right]^2

where C_{vis}^{l}(m+x,n+y) and C_{inf}^{l}(m+x,n+y) denote the low-frequency sub-band coefficients of pixel (m+x,n+y) in the visible light and infrared images respectively; E_{vis}(m,n) and E_{inf}(m,n) denote the neighborhood average energy of the visible light and infrared images at pixel (m,n); X \times Y is the preset neighborhood size; and (m+x,n+y) is any point in the neighborhood of pixel (m,n).
For the visible light image and the infrared image alike, form the neighborhood average energies of the low-frequency sub-band coefficients of all pixels into a neighborhood energy matrix, divide each matrix into several non-overlapping subregions, and compute the region energy eigenvalue of each subregion; then, from the region energy eigenvalues of each subregion of the visible light and infrared images, calculate the low-frequency sub-band coefficients of the corresponding subregion of the fused image by the region-energy-eigenvalue weighting strategy, implemented as follows.
If a pixel (m,n) belongs to the i-th subregion of the image, the low-frequency sub-band coefficient of that pixel in the fused image is obtained from the fusion weights of the i-th subregion:

C_{fus,i}^{l}(m,n) = w_{vis,i} \times C_{vis}^{l}(m,n) + w_{inf,i} \times C_{inf}^{l}(m,n)

where C_{vis}^{l}(m,n) and C_{inf}^{l}(m,n) denote the low-frequency sub-band coefficients of pixel (m,n) in the visible light and infrared images respectively, and w_{vis,i} and w_{inf,i} denote the fusion weights of the i-th subregion of the visible light and infrared images, defined as

w_{vis,i} = E'_{vis,i} / (E'_{vis,i} + E'_{inf,i})
w_{inf,i} = E'_{inf,i} / (E'_{vis,i} + E'_{inf,i})

where E'_{vis,i} and E'_{inf,i} denote the region energy eigenvalues of the i-th subregion of the visible light and infrared images respectively;
a high-frequency sub-band coefficient fusion module, for calculating the high-frequency sub-band coefficients of the fused image from those of the visible light and infrared images, using the matching strategy based on the fourth-order correlation coefficient, implemented as follows.
Set a moving window; at each position of the moving window, compute the fourth-order correlation coefficient F of the high-frequency sub-band coefficients of the visible light and infrared images within the window, the window center being image pixel (m,n).
When F > th, the high-frequency sub-band coefficient of the fused image is obtained as

C_{fus}^{h}(m,n) = \begin{cases} (1-R) \cdot C_{inf}^{h}(m,n) + R \cdot C_{vis}^{h}(m,n), & \left| C_{inf}^{h}(m,n) \right| > \left| C_{vis}^{h}(m,n) \right| \\ (1-R) \cdot C_{vis}^{h}(m,n) + R \cdot C_{inf}^{h}(m,n), & \left| C_{inf}^{h}(m,n) \right| \le \left| C_{vis}^{h}(m,n) \right| \end{cases}

where the weighting coefficient R is calculated as
Otherwise:

C_{fus}^{h}(m,n) = \begin{cases} C_{inf}^{h}(m,n), & \left| C_{inf}^{h}(m,n) \right| > \left| C_{vis}^{h}(m,n) \right| \\ C_{vis}^{h}(m,n), & \left| C_{inf}^{h}(m,n) \right| \le \left| C_{vis}^{h}(m,n) \right| \end{cases}

where C_{inf}^{h}(m,n), C_{vis}^{h}(m,n) and C_{fus}^{h}(m,n) denote the high-frequency sub-band coefficients of the infrared, visible light and fused images at pixel (m,n) respectively, and th is a preset threshold;
an NSST inverse transform module, for applying the inverse NSST transform to the low-frequency and high-frequency sub-band coefficients of the fused image obtained by the low-frequency and high-frequency sub-band coefficient fusion modules, to obtain the fused image.
Furthermore, in the low-frequency sub-band coefficient fusion module, the region energy eigenvalues E'_{vis,i} and E'_{inf,i} of the i-th subregion of the visible light and infrared images are obtained as follows. Let the center of the i-th subregion be a pixel (m,n) of the image; then

E'_{vis,i} = \frac{1}{M \times N} \sum_{a=-(M-1)/2}^{(M-1)/2} \sum_{b=-(N-1)/2}^{(N-1)/2} E_{vis}(m+a,n+b)

E'_{inf,i} = \frac{1}{M \times N} \sum_{a=-(M-1)/2}^{(M-1)/2} \sum_{b=-(N-1)/2}^{(N-1)/2} E_{inf}(m+a,n+b)

where the neighborhood average energies E_{vis}(m+a,n+b) and E_{inf}(m+a,n+b) are those of the visible light and infrared images at pixel (m+a,n+b), M \times N is the preset subregion size, and (m+a,n+b) is any point in the i-th subregion.
Furthermore, in the high-frequency sub-band coefficient fusion module, the fourth-order correlation coefficient F is calculated as

F = \frac{1}{M' \times N'} \cdot \frac{\sum_{m'=1}^{M'} \sum_{n'=1}^{N'} \left( C_{inf}^{h}(m+m',n+n') - \mu_{inf} \right)^2 \left( C_{vis}^{h}(m+m',n+n') - \mu_{vis} \right)^2}{\sqrt{\left( \sum_{m'=1}^{M'} \sum_{n'=1}^{N'} \left( C_{inf}^{h}(m+m',n+n') - \mu_{inf} \right)^4 \right) \left( \sum_{m'=1}^{M'} \sum_{n'=1}^{N'} \left( C_{vis}^{h}(m+m',n+n') - \mu_{vis} \right)^4 \right)}}

where C_{inf}^{h}(m+m',n+n') and C_{vis}^{h}(m+m',n+n') denote the high-frequency sub-band coefficients of pixel (m+m',n+n') in the infrared and visible light images respectively, \mu_{inf} and \mu_{vis} denote the means of the high-frequency sub-band coefficients of the infrared and visible light images within the moving window, M' \times N' is the preset moving window size, and (m+m',n+n') is any point in the moving window centered on pixel (m,n).
The beneficial effect of the technical solution provided by the invention is as follows: targeting the imaging characteristics of infrared and visible light images, the low-frequency sub-band coefficients are selected by region-energy-eigenvalue weighting grounded in the physical characteristics of the images themselves, while the high-frequency sub-band coefficients account for the variation of the high-frequency components by introducing a regional fourth-order correlation coefficient as the matching degree. This reduces the blur of the fused image while better preserving detail information such as edges and textures, and effectively combines the target information of the infrared image with the detail information of the visible light image. Applying the invention effectively integrates the useful information of the source images, with good fusion results in both subjective visual effect and objective evaluation indices.
Brief description of the drawings
Fig. 1 is a flowchart of an embodiment of the present invention.
Detailed description of the embodiments
In an image fusion method, the choice of fusion strategy is crucial. Pixel-based strategies treat pixels as mutually independent, ignoring the interaction between a pixel and its neighborhood, so the fused image easily loses useful information, especially when the images to be fused are poorly matched; such methods also ignore the variation of the high-frequency components. Region-based strategies ignore the variation of image content, so the resulting fused image is blurred to some degree. Because the physical characteristics of infrared and visible light images differ, their gray-level characteristics differ greatly and may even be of completely opposite polarity; a fusion strategy must take these characteristics into account. The present invention proposes a fusion method based on the non-subsampled shearlet transform (NSST). Considering the different physical characteristics of infrared and visible light images, the low-frequency sub-band coefficients are fused with a region-energy-eigenvalue weighting strategy. For the high-frequency sub-band coefficients, since the difference in gray-level characteristics between infrared and visible light images makes a regional matching degree prone to taking negative values, a fourth-order correlation coefficient F is introduced to compute the matching degree of the infrared and visible high-frequency coefficients within a region, and the high-frequency coefficients of the fused image are determined accordingly.
For a better understanding of the technical solution of the present invention, the invention is described in further detail below with reference to the drawings and embodiments.
The technical solution of the present invention can be implemented to run automatically using computer software. The embodiment fuses one visible light image and one infrared image. With reference to Fig. 1, the visible light image is denoted VI, the infrared image IR, and the fused image FU. The steps of the embodiment are as follows:
Step 1: input the visible light image and the infrared image and apply the NSST transform to obtain the sub-band coefficients of each image; the sub-band coefficients comprise low-frequency sub-band coefficients and high-frequency sub-band coefficients.
The input visible light and infrared images to be fused are the result of registration, with a one-to-one correspondence between their pixels. The embodiment first extracts the high-frequency sub-band coefficients (VI), high-frequency sub-band coefficients (IR), low-frequency sub-band coefficients (VI) and low-frequency sub-band coefficients (IR). Applying the NSST transform to the source images yields high- and low-frequency sub-band coefficients at a series of scales and directions. The NSST transform itself is prior art and is not detailed in the present invention.
Step 2: extract region energy eigenvalues from the low-frequency sub-band coefficients of the visible light and infrared images and calculate the low-frequency sub-band coefficients of the fused image. The embodiment uses the weighting strategy based on region energy eigenvalues, shown as the low-frequency sub-band coefficients (FU) in step 2 of Fig. 1. The low-frequency sub-band coefficients reflect the energy distribution of the image, i.e. the approximate features of the source images, and carry most of the source images' energy. Traditional weighted averaging ignores the correlation between pixels within a local neighborhood, reduces the contrast of the fused image, and loses useful information; the weighted fusion strategy based on region energy eigenvalues effectively reduces both the loss of detail from the visible light image and the information redundancy of the infrared image.
Step 2 of the embodiment is implemented as follows.
For each pixel of the visible light image and of the infrared image, compute the neighborhood average energy of the low-frequency sub-band coefficients:

E_{vis}(m,n) = \frac{1}{X \times Y} \sum_{x=-(X-1)/2}^{(X-1)/2} \sum_{y=-(Y-1)/2}^{(Y-1)/2} \left[ C_{vis}^{l}(m+x,n+y) \right]^2   (1)

E_{inf}(m,n) = \frac{1}{X \times Y} \sum_{x=-(X-1)/2}^{(X-1)/2} \sum_{y=-(Y-1)/2}^{(Y-1)/2} \left[ C_{inf}^{l}(m+x,n+y) \right]^2   (2)

where C_{vis}^{l}(m+x,n+y) and C_{inf}^{l}(m+x,n+y) denote the low-frequency sub-band coefficients of pixel (m+x,n+y) in the visible light and infrared images respectively; E_{vis}(m,n) and E_{inf}(m,n) denote the neighborhood average energy of the visible light and infrared images at pixel (m,n); X \times Y is the preset neighborhood size, which those skilled in the art may set themselves in a specific implementation (3 \times 3 in this embodiment); and (m+x,n+y) is any point in the neighborhood of pixel (m,n).
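As a concrete sketch, the neighborhood average energy of Eqs. (1)-(2) can be computed with NumPy. The function name, and the zero padding at the image borders, are our own illustrative choices; the patent only fixes the window size X × Y:

```python
import numpy as np

def neighborhood_energy(low_band, size=3):
    """Neighborhood average energy of a low-frequency sub-band (Eqs. 1-2).

    For each pixel, averages the squared coefficients over a size x size
    window centred on the pixel; coefficients outside the image are
    treated as zero (a border-handling choice the patent leaves open).
    """
    pad = size // 2
    sq = np.pad(low_band.astype(float) ** 2, pad, mode="constant")
    h, w = low_band.shape
    energy = np.zeros((h, w))
    # Accumulate the shifted squared-coefficient maps over the window.
    for dx in range(-pad, pad + 1):
        for dy in range(-pad, pad + 1):
            energy += sq[pad + dx : pad + dx + h, pad + dy : pad + dy + w]
    return energy / (size * size)
```

Applied to a 2 × 2 band with a 3 × 3 window, every pixel's window covers the whole band plus zeros, so each energy value is the sum of all squared coefficients divided by 9.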
For the visible light image and the infrared image alike, compute the region energy eigenvalue of each subregion: within one image, the neighborhood average energies of the low-frequency sub-band coefficients of all pixels form a neighborhood energy matrix; this matrix is then divided into several non-overlapping subregions, and the region energy eigenvalue of each subregion is computed. The subregion division is identical for the visible light and infrared images, so corresponding subregions also exist in the fused image and can be numbered consistently, the i-th subregion of the visible light and infrared images corresponding to the i-th subregion of the fused image. From the region energy eigenvalues of each subregion of the visible light and infrared images, the low-frequency sub-band coefficients of the corresponding subregion of the fused image are calculated by the region-energy-eigenvalue weighting strategy as follows.
If a pixel (m,n) belongs to the i-th subregion of the image, the low-frequency sub-band coefficient of that pixel in the fused image is obtained from the fusion weights of the i-th subregion:

C_{fus,i}^{l}(m,n) = w_{vis,i} \times C_{vis}^{l}(m,n) + w_{inf,i} \times C_{inf}^{l}(m,n)   (3)

where C_{vis}^{l}(m,n) and C_{inf}^{l}(m,n) denote the low-frequency sub-band coefficients of pixel (m,n) in the visible light and infrared images respectively, and w_{vis,i} and w_{inf,i} denote the fusion weights of the i-th subregion of the visible light and infrared images, defined as:

w_{vis,i} = E'_{vis,i} / (E'_{vis,i} + E'_{inf,i})   (4)
w_{inf,i} = E'_{inf,i} / (E'_{vis,i} + E'_{inf,i})   (5)
Wherein, E ' vis, i, E ' inf, irepresent respectively visible images and infrared image i-th sub regions carry out neighboring region energy compartmentalization after region energy eigenwert, the embodiment of the present invention further provides the mode of asking for:
If the center of the i-th sub regions is certain pixel (m, n) in image, it is defined as follows,
E vis , i ′ = 1 M × N Σ a = - ( M - 1 ) / 2 ( M + 1 ) / 2 Σ b = - ( N - 1 ) / 2 ( N + 1 ) / 2 E vis ( m + a , n + b ) - - - ( 6 )
E inf , i ′ = 1 M × N Σ a = - ( M - 1 ) / 2 ( M + 1 ) / 2 Σ b = - ( N - 1 ) / 2 ( N + 1 ) / 2 E inf ( m + a , n + b ) - - - ( 7 )
Wherein, neighborhood averaging ENERGY E vis(m+a, n+b), E inf(m+a, n+b) represent that visible images, infrared image are at pixel (m+a respectively, n+b) the neighborhood averaging energy at place, M × N is default subregion size, during concrete enforcement, those skilled in the art can sets itself value, get 3 × 3 in the present embodiment, (m+a, n+b) is any point in the i-th sub regions.
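The region-energy weighting of Eqs. (3)-(7) can be sketched as follows, under assumptions of our own: the image is tiled with non-overlapping block × block subregions (matching the non-overlapping partition described above), the region energy eigenvalue is the block mean of the neighborhood-energy map, and the 0.5 fallback weight for all-zero blocks is an addition for numerical safety:

```python
import numpy as np

def fuse_low_frequency(c_vis, c_inf, e_vis, e_inf, block=3):
    """Region-energy-weighted fusion of low-frequency coefficients (Eqs. 3-7).

    e_vis / e_inf are the neighborhood-average-energy maps of the two
    low-frequency sub-bands; each non-overlapping block x block subregion
    gets one pair of weights, shared by all its pixels.
    """
    fused = np.empty_like(c_vis, dtype=float)
    h, w = c_vis.shape
    for top in range(0, h, block):
        for left in range(0, w, block):
            win = (slice(top, top + block), slice(left, left + block))
            ev = e_vis[win].mean()               # E'_vis,i  (Eq. 6)
            ei = e_inf[win].mean()               # E'_inf,i  (Eq. 7)
            total = ev + ei
            wv = ev / total if total > 0 else 0.5  # Eq. 4; 0.5 fallback is ours
            fused[win] = wv * c_vis[win] + (1 - wv) * c_inf[win]  # Eq. 3
    return fused
```

With equal energy maps the weights reduce to 0.5/0.5 and the fused band is the plain average; skewing one energy map shifts the weight toward that source, which is the behavior Eqs. (4)-(5) describe.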
Step 3: from the high-frequency sub-band coefficients of the visible light and infrared images, calculate the high-frequency sub-band coefficients of the fused image using the matching strategy based on the fourth-order correlation coefficient.
The embodiment uses this matching strategy to calculate the high-frequency sub-band coefficients (FU) shown in step 3 of Fig. 1. The high-frequency coefficients reflect the distribution of image detail and contain rich texture and contour information. Because the gray-level characteristics of infrared and visible light images differ, a regional matching degree is prone to taking negative values; the invention therefore introduces the fourth-order correlation coefficient F to compute the matching degree of the infrared and visible high-frequency coefficients within a region, and selects the high-frequency coefficients of the fused image accordingly. The concrete procedure is as follows.
Set a moving window; at each position of the moving window, compute the fourth-order correlation coefficient F of the high-frequency sub-band coefficients of the visible light and infrared images within the window, the window center being image pixel (m,n).
When F > th, the high-frequency sub-band coefficient of the fused image is obtained as:

C_{fus}^{h}(m,n) = \begin{cases} (1-R) \cdot C_{inf}^{h}(m,n) + R \cdot C_{vis}^{h}(m,n), & \left| C_{inf}^{h}(m,n) \right| > \left| C_{vis}^{h}(m,n) \right| \\ (1-R) \cdot C_{vis}^{h}(m,n) + R \cdot C_{inf}^{h}(m,n), & \left| C_{inf}^{h}(m,n) \right| \le \left| C_{vis}^{h}(m,n) \right| \end{cases}   (8)

where the weighting coefficient R is calculated as:
Otherwise:

C_{fus}^{h}(m,n) = \begin{cases} C_{inf}^{h}(m,n), & \left| C_{inf}^{h}(m,n) \right| > \left| C_{vis}^{h}(m,n) \right| \\ C_{vis}^{h}(m,n), & \left| C_{inf}^{h}(m,n) \right| \le \left| C_{vis}^{h}(m,n) \right| \end{cases}   (9)

where C_{inf}^{h}(m,n), C_{vis}^{h}(m,n) and C_{fus}^{h}(m,n) denote the high-frequency sub-band coefficients of the infrared, visible light and fused images at pixel (m,n) respectively, pixel (m,n) being the center of the moving window; th is a preset threshold, which those skilled in the art may set themselves in a specific implementation.
The embodiment further provides the calculation of the fourth-order correlation coefficient F:

F = \frac{1}{M' \times N'} \cdot \frac{\sum_{m'=1}^{M'} \sum_{n'=1}^{N'} \left( C_{inf}^{h}(m+m',n+n') - \mu_{inf} \right)^2 \left( C_{vis}^{h}(m+m',n+n') - \mu_{vis} \right)^2}{\sqrt{\left( \sum_{m'=1}^{M'} \sum_{n'=1}^{N'} \left( C_{inf}^{h}(m+m',n+n') - \mu_{inf} \right)^4 \right) \left( \sum_{m'=1}^{M'} \sum_{n'=1}^{N'} \left( C_{vis}^{h}(m+m',n+n') - \mu_{vis} \right)^4 \right)}}   (10)

where C_{inf}^{h}(m+m',n+n') and C_{vis}^{h}(m+m',n+n') denote the high-frequency sub-band coefficients of pixel (m+m',n+n') in the infrared and visible light images respectively; \mu_{inf} and \mu_{vis} denote the means of the high-frequency sub-band coefficients of the infrared and visible light images within the moving window; M' \times N' is the preset moving window size, which those skilled in the art may set themselves in a specific implementation (the embodiment selects a 3 \times 3 window); and (m+m',n+n') is any point in the moving window centered on pixel (m,n).
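Below is a hedged sketch of the fourth-order correlation coefficient of Eq. (10) and the per-pixel selection rule of Eqs. (8)-(9). The square root in the denominator follows our reading of the garbled original formula; and because the defining expression for R is not reproduced in this copy of the patent, r is exposed here as a plain parameter (the illustrative th and r defaults are not from the patent):

```python
import numpy as np

def fourth_order_corr(win_inf, win_vis):
    """Fourth-order correlation coefficient F of two coefficient windows (Eq. 10).

    For identical windows this evaluates to 1 / (window area), the maximum
    value under the 1/(M'N') prefactor of the formula.
    """
    a = win_inf - win_inf.mean()
    b = win_vis - win_vis.mean()
    num = np.sum(a ** 2 * b ** 2)
    den = np.sqrt(np.sum(a ** 4) * np.sum(b ** 4))
    if den == 0:  # a constant window has no fluctuation to correlate
        return 0.0
    return num / (win_inf.size * den)

def fuse_high_pixel(c_inf, c_vis, f, th=0.05, r=0.3):
    """Select or weight one pair of high-frequency coefficients (Eqs. 8-9).

    The patent derives the weighting coefficient R from F and th; its
    formula is lost in this copy, so r is taken as a parameter in (0, 0.5).
    """
    big, small = (c_inf, c_vis) if abs(c_inf) > abs(c_vis) else (c_vis, c_inf)
    if f > th:                      # well matched: weighted average (Eq. 8)
        return (1 - r) * big + r * small
    return big                      # poorly matched: keep the larger (Eq. 9)
```

For example, fusing the pair (3.0, -1.0) with a matching degree above the threshold gives 0.7 · 3.0 + 0.3 · (-1.0) = 1.8, while below the threshold the larger-magnitude coefficient 3.0 is kept unchanged.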
Step 4: from the low-frequency and high-frequency sub-band coefficients of the fused image obtained in steps 2 and 3, apply the inverse NSST transform to obtain the fused image. The inverse NSST transform is prior art and is not detailed in the present invention.
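The four steps can be tied together in a small skeleton. Since the NSST itself is prior art and no particular library is assumed here, toy_decompose and toy_reconstruct below are trivial stand-ins (the global mean as the "low-frequency" band, the residual as a single "high-frequency" band); fuse takes the two fusion rules as callables:

```python
import numpy as np

def toy_decompose(img):
    """Trivial stand-in for the NSST decomposition: the global mean as the
    "low-frequency" band and the residual as one "high-frequency" band.
    A real NSST yields several directional high-frequency bands per scale."""
    low = np.full_like(img, img.mean(), dtype=float)
    high = img.astype(float) - low
    return low, high

def toy_reconstruct(low, high):
    """Inverse of toy_decompose: perfect reconstruction by summation."""
    return low + high

def fuse(vis, inf, fuse_low, fuse_high):
    """Skeleton of the four-step method: decompose both images, fuse the
    low- and high-frequency coefficients with the supplied rules, and
    inverse-transform the result."""
    low_v, high_v = toy_decompose(vis)
    low_i, high_i = toy_decompose(inf)
    fused_low = fuse_low(low_v, low_i)
    fused_high = fuse_high(high_v, high_i)
    return toy_reconstruct(fused_low, fused_high)
```

In the real system, toy_decompose and toy_reconstruct would be replaced by an NSST implementation, and fuse_low and fuse_high by the region-energy weighting of step 2 and the fourth-order-correlation matching of step 3.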
The present invention is also corresponding provides a kind of visible ray based on NSST and infrared image emerging system, comprises with lower module:
NSST conversion module, for inputting visible ray and infrared image and carrying out NSST conversion, obtain the sub-band coefficients of visible images and infrared image respectively, described sub-band coefficients comprises low frequency sub-band coefficient and high-frequency sub-band coefficient;
Low frequency sub-band coefficient Fusion Module, for the low frequency sub-band coefficient according to visible images and infrared image, adopts the weighted strategy based on region energy eigenwert to calculate the low frequency sub-band coefficient of fused images, realizes as follows,
To visible images and each pixel of infrared image, ask the neighborhood averaging energy of low frequency sub-band coefficient respectively according to following principle,
E_{vis}(m,n) = \frac{1}{X\times Y}\sum_{x=-(X-1)/2}^{(X+1)/2}\sum_{y=-(Y-1)/2}^{(Y+1)/2}\left[C_{vis}^{l}(m+x,n+y)\right]^{2}
E_{\mathrm{inf}}(m,n) = \frac{1}{X\times Y}\sum_{x=-(X-1)/2}^{(X+1)/2}\sum_{y=-(Y-1)/2}^{(Y+1)/2}\left[C_{\mathrm{inf}}^{l}(m+x,n+y)\right]^{2}
Wherein, C_inf^l(m+x,n+y) and C_vis^l(m+x,n+y) represent the low-frequency sub-band coefficients of pixel (m+x,n+y) in the infrared image and the visible light image respectively; E_vis(m,n) and E_inf(m,n) represent the neighborhood average energy of the visible light image and the infrared image at pixel (m,n) respectively; X × Y is the preset neighborhood size, and (m+x,n+y) is any point in the neighborhood of pixel (m,n);
for the visible light image and the infrared image, the neighborhood average energies of the low-frequency sub-band coefficients of all pixels in each image are assembled, in the same way, into a neighborhood energy matrix; each neighborhood energy matrix so obtained is divided into several non-overlapping independent sub-regions, and the region energy feature value of each sub-region is calculated; then, according to the region energy feature values of the sub-regions in the visible light image and the infrared image, the low-frequency sub-band coefficients of the corresponding sub-regions in the fused image are calculated by the region-energy-feature-value weighting strategy, implemented as follows:
if a pixel (m,n) belongs to the i-th sub-region of the image, the low-frequency sub-band coefficient C_fus,i^l(m,n) of this pixel in the fused image is obtained from the fusion weights of the i-th sub-region as follows:
C_{fus,i}^{l}(m,n) = w_{vis,i}\times C_{vis}^{l}(m,n) + w_{\mathrm{inf},i}\times C_{\mathrm{inf}}^{l}(m,n)
Wherein, C_inf^l(m,n) and C_vis^l(m,n) represent the low-frequency sub-band coefficients of pixel (m,n) in the infrared image and the visible light image respectively; w_inf,i and w_vis,i represent the fusion weights of the i-th sub-region of the infrared image and the visible light image respectively, defined as
w_{vis,i} = E'_{vis,i}/(E'_{vis,i}+E'_{\mathrm{inf},i})
w_{\mathrm{inf},i} = E'_{\mathrm{inf},i}/(E'_{vis,i}+E'_{\mathrm{inf},i})
Wherein, E′_vis,i and E′_inf,i represent the region energy feature values of the i-th sub-region of the visible light image and the infrared image respectively;
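For illustration only, the low-frequency fusion just described (neighborhood average energy, non-overlapping sub-regions, energy-ratio weights) might be sketched in Python as follows; the zero-padding at image borders, the 3 × 3 neighborhood and the 8 × 8 sub-region size are assumptions of this sketch, not values fixed by the patent:

```python
import numpy as np

def neighborhood_energy(c, size=3):
    """Neighborhood average energy E(m,n) of a low-frequency sub-band,
    over a size x size window (borders zero-padded in this sketch)."""
    r = size // 2
    p = np.pad(c.astype(float), r)
    e = np.zeros(c.shape)
    for x in range(size):
        for y in range(size):
            e += p[x:x + c.shape[0], y:y + c.shape[1]] ** 2
    return e / (size * size)

def fuse_lowpass(c_vis, c_inf, block=8):
    """Region-energy-weighted low-frequency fusion: the energy maps are
    split into non-overlapping block x block sub-regions; each region's
    mean energy serves as its feature value E', giving the weights
    w_vis,i and w_inf,i = 1 - w_vis,i."""
    e_vis = neighborhood_energy(c_vis)
    e_inf = neighborhood_energy(c_inf)
    fused = np.empty(c_vis.shape)
    h, w = c_vis.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            s = np.s_[i:i + block, j:j + block]
            ev, ei = e_vis[s].mean(), e_inf[s].mean()       # E'_vis,i, E'_inf,i
            w_vis = ev / (ev + ei) if ev + ei > 0 else 0.5  # fusion weight
            fused[s] = w_vis * c_vis[s] + (1 - w_vis) * c_inf[s]
    return fused
```

With identical inputs the weights are 0.5 each and the output equals the input; a zero-energy band receives weight 0, so the other band passes through unchanged.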
a high-frequency sub-band coefficient fusion module, for calculating the high-frequency sub-band coefficients of the fused image from the high-frequency sub-band coefficients of the visible light image and the infrared image using a matching strategy based on the fourth-order correlation coefficient, implemented as follows:
a moving window is set; whenever the moving window traverses to a position, the fourth-order correlation coefficient F of the high-frequency sub-band coefficients of the visible light image and the infrared image within the moving window is calculated, the window center at that position being the image pixel (m,n);
when F > th, the high-frequency sub-band coefficient of the fused image is obtained as
C_{fus}^{h}(m,n) = \begin{cases}(1-R)\cdot C_{\mathrm{inf}}^{h}(m,n)+R\cdot C_{vis}^{h}(m,n), & |C_{\mathrm{inf}}^{h}(m,n)|>|C_{vis}^{h}(m,n)|\\ (1-R)\cdot C_{vis}^{h}(m,n)+R\cdot C_{\mathrm{inf}}^{h}(m,n), & |C_{\mathrm{inf}}^{h}(m,n)|\le|C_{vis}^{h}(m,n)|\end{cases}
wherein the weighting coefficient R is calculated as
otherwise,
C_{fus}^{h}(m,n) = \begin{cases}C_{\mathrm{inf}}^{h}(m,n), & |C_{\mathrm{inf}}^{h}(m,n)|>|C_{vis}^{h}(m,n)|\\ C_{vis}^{h}(m,n), & |C_{\mathrm{inf}}^{h}(m,n)|\le|C_{vis}^{h}(m,n)|\end{cases}
Wherein, C_inf^h(m,n), C_vis^h(m,n) and C_fus^h(m,n) represent the high-frequency sub-band coefficients of the infrared image, the visible light image and the fused image at pixel (m,n) respectively, and th is a predetermined threshold;
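The high-frequency rule can be illustrated per pixel as below; since the formula for the weighting coefficient R is not reproduced in this text, R (and the threshold th) are left as free parameters here, purely as assumptions of the sketch:

```python
def fuse_highpass_pixel(c_inf, c_vis, f, th=0.5, r=0.3):
    """High-frequency fusion at one pixel: when the windowed
    fourth-order correlation coefficient f exceeds th, a weighted
    combination that favours the larger-magnitude coefficient;
    otherwise plain absolute-max selection. th and r are illustrative
    values, not the patent's."""
    big, small = (c_inf, c_vis) if abs(c_inf) > abs(c_vis) else (c_vis, c_inf)
    if f > th:
        return (1 - r) * big + r * small   # (1-R)*larger + R*smaller
    return big                             # absolute-max selection
```

For example, with c_inf = 2, c_vis = −1, r = 0.25 and high correlation, the fused value is 0.75·2 + 0.25·(−1) = 1.25; with low correlation the larger-magnitude coefficient 2 is taken directly.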
an NSST inverse transform module, for performing the inverse NSST transform according to the low-frequency and high-frequency sub-band coefficients of the fused image obtained by the low-frequency sub-band coefficient fusion module and the high-frequency sub-band coefficient fusion module, to obtain the fused image.
The concrete implementation of each module corresponds to the respective method step and is not detailed in the present invention.
Compared with traditional image fusion methods, the method of the present invention shows clear advantages both in objective evaluation indices and in subjective visual quality: it better retains target information while considerably increasing the information content of the image, and is therefore an effective image fusion method.
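The overall pipeline (decompose, fuse the low band by energy weighting, fuse the high band by magnitude, invert the transform) can be demonstrated with a crude shift-invariant two-band split standing in for NSST; this is a toy illustration of the data flow only, not the patented transform:

```python
import numpy as np

def split(img, size=3):
    """Toy two-band split standing in for NSST in this sketch:
    low = box blur (edge-replicated), high = residual, so that
    low + high == img exactly (perfect reconstruction)."""
    r = size // 2
    p = np.pad(img.astype(float), r, mode='edge')
    low = np.zeros(img.shape)
    for x in range(size):
        for y in range(size):
            low += p[x:x + img.shape[0], y:y + img.shape[1]]
    low /= size * size
    return low, img - low

def fuse(vis, inf):
    """End-to-end sketch of the fusion pipeline on the toy split."""
    lv, hv = split(vis)
    li, hi = split(inf)
    ev, ei = (lv ** 2).mean(), (li ** 2).mean()       # global energy weights
    low = (ev * lv + ei * li) / (ev + ei)
    high = np.where(np.abs(hi) > np.abs(hv), hi, hv)  # absolute-max selection
    return low + high                                  # toy "inverse transform"
```

Because the split reconstructs perfectly, fusing an image with itself returns the image unchanged, which is a convenient sanity check for any such pipeline.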

Claims (6)

1. An NSST-based visible light and infrared image fusion method, characterized by comprising the following steps:
Step 1: inputting the visible light and infrared images and performing the NSST transform to obtain the sub-band coefficients of the visible light image and the infrared image respectively, the sub-band coefficients comprising low-frequency sub-band coefficients and high-frequency sub-band coefficients;
Step 2: according to the low-frequency sub-band coefficients of the visible light image and the infrared image, calculating the low-frequency sub-band coefficients of the fused image using a weighting strategy based on region energy feature values, implemented as follows:
for each pixel of the visible light image and the infrared image, the neighborhood average energy of the low-frequency sub-band coefficients is computed as
E_{vis}(m,n) = \frac{1}{X\times Y}\sum_{x=-(X-1)/2}^{(X+1)/2}\sum_{y=-(Y-1)/2}^{(Y+1)/2}\left[C_{vis}^{l}(m+x,n+y)\right]^{2}
E_{\mathrm{inf}}(m,n) = \frac{1}{X\times Y}\sum_{x=-(X-1)/2}^{(X+1)/2}\sum_{y=-(Y-1)/2}^{(Y+1)/2}\left[C_{\mathrm{inf}}^{l}(m+x,n+y)\right]^{2}
wherein C_inf^l(m+x,n+y) and C_vis^l(m+x,n+y) represent the low-frequency sub-band coefficients of pixel (m+x,n+y) in the infrared image and the visible light image respectively; E_vis(m,n) and E_inf(m,n) represent the neighborhood average energy of the visible light image and the infrared image at pixel (m,n) respectively; X × Y is the preset neighborhood size, and (m+x,n+y) is any point in the neighborhood of pixel (m,n);
for the visible light image and the infrared image, assembling, in the same way, the neighborhood average energies of the low-frequency sub-band coefficients of all pixels in each image into a neighborhood energy matrix; dividing each neighborhood energy matrix so obtained into several non-overlapping independent sub-regions, and calculating the region energy feature value of each sub-region; then, according to the region energy feature values of the sub-regions in the visible light image and the infrared image, calculating the low-frequency sub-band coefficients of the corresponding sub-regions in the fused image by the region-energy-feature-value weighting strategy, implemented as follows:
if a pixel (m,n) belongs to the i-th sub-region of the image, the low-frequency sub-band coefficient C_fus,i^l(m,n) of this pixel in the fused image is obtained from the fusion weights of the i-th sub-region as follows:
C_{fus,i}^{l}(m,n) = w_{vis,i}\times C_{vis}^{l}(m,n) + w_{\mathrm{inf},i}\times C_{\mathrm{inf}}^{l}(m,n)
wherein C_inf^l(m,n) and C_vis^l(m,n) represent the low-frequency sub-band coefficients of pixel (m,n) in the infrared image and the visible light image respectively; w_inf,i and w_vis,i represent the fusion weights of the i-th sub-region of the infrared image and the visible light image respectively, defined as
w_{vis,i} = E'_{vis,i}/(E'_{vis,i}+E'_{\mathrm{inf},i})
w_{\mathrm{inf},i} = E'_{\mathrm{inf},i}/(E'_{vis,i}+E'_{\mathrm{inf},i})
wherein E′_vis,i and E′_inf,i represent the region energy feature values of the i-th sub-region of the visible light image and the infrared image respectively;
Step 3: according to the high-frequency sub-band coefficients of the visible light image and the infrared image, calculating the high-frequency sub-band coefficients of the fused image using a matching strategy based on the fourth-order correlation coefficient, implemented as follows:
setting a moving window; whenever the moving window traverses to a position, calculating the fourth-order correlation coefficient F of the high-frequency sub-band coefficients of the visible light image and the infrared image within the moving window, the window center at that position being the image pixel (m,n);
when F > th, the high-frequency sub-band coefficient of the fused image is obtained as
C_{fus}^{h}(m,n) = \begin{cases}(1-R)\cdot C_{\mathrm{inf}}^{h}(m,n)+R\cdot C_{vis}^{h}(m,n), & |C_{\mathrm{inf}}^{h}(m,n)|>|C_{vis}^{h}(m,n)|\\ (1-R)\cdot C_{vis}^{h}(m,n)+R\cdot C_{\mathrm{inf}}^{h}(m,n), & |C_{\mathrm{inf}}^{h}(m,n)|\le|C_{vis}^{h}(m,n)|\end{cases}
wherein the weighting coefficient R is calculated as
otherwise,
C_{fus}^{h}(m,n) = \begin{cases}C_{\mathrm{inf}}^{h}(m,n), & |C_{\mathrm{inf}}^{h}(m,n)|>|C_{vis}^{h}(m,n)|\\ C_{vis}^{h}(m,n), & |C_{\mathrm{inf}}^{h}(m,n)|\le|C_{vis}^{h}(m,n)|\end{cases}
wherein C_inf^h(m,n), C_vis^h(m,n) and C_fus^h(m,n) represent the high-frequency sub-band coefficients of the infrared image, the visible light image and the fused image at pixel (m,n) respectively, and th is a predetermined threshold;
Step 4: according to the low-frequency and high-frequency sub-band coefficients of the fused image obtained in steps 2 and 3, performing the inverse NSST transform to obtain the fused image.
2. The NSST-based visible light and infrared image fusion method according to claim 1, characterized in that: in step 2, the region energy feature values E′_vis,i and E′_inf,i of the i-th sub-region of the visible light image and the infrared image are obtained as follows:
if the center of the i-th sub-region is a pixel (m,n) of the image, they are defined as
E'_{vis,i} = \frac{1}{M\times N}\sum_{a=-(M-1)/2}^{(M+1)/2}\sum_{b=-(N-1)/2}^{(N+1)/2} E_{vis}(m+a,n+b)
E'_{\mathrm{inf},i} = \frac{1}{M\times N}\sum_{a=-(M-1)/2}^{(M+1)/2}\sum_{b=-(N-1)/2}^{(N+1)/2} E_{\mathrm{inf}}(m+a,n+b)
wherein the neighborhood average energies E_vis(m+a,n+b) and E_inf(m+a,n+b) are those of the visible light image and the infrared image at pixel (m+a,n+b) respectively; M × N is the preset sub-region size, and (m+a,n+b) is any point in the i-th sub-region.
3. The NSST-based visible light and infrared image fusion method according to claim 1 or 2, characterized in that: in step 3, the fourth-order correlation coefficient F is calculated as
F = \frac{1}{M'\times N'}\times\frac{\sum_{m'=1}^{M'}\sum_{n'=1}^{N'}\left(C_{\mathrm{inf}}^{h}(m+m',n+n')-\mu_{\mathrm{inf}}\right)^{2}\left(C_{vis}^{h}(m+m',n+n')-\mu_{vis}\right)^{2}}{\left(\sum_{m'=1}^{M'}\sum_{n'=1}^{N'}\left(C_{\mathrm{inf}}^{h}(m+m',n+n')-\mu_{\mathrm{inf}}\right)^{4}\right)\left(\sum_{m'=1}^{M'}\sum_{n'=1}^{N'}\left(C_{vis}^{h}(m+m',n+n')-\mu_{vis}\right)^{4}\right)}
wherein C_inf^h(m+m′,n+n′) and C_vis^h(m+m′,n+n′) represent the high-frequency sub-band coefficients of pixel (m+m′,n+n′) in the infrared image and the visible light image respectively; μ_inf and μ_vis represent the means of the high-frequency sub-band coefficients of the infrared image and the visible light image within the moving window; M′ × N′ is the preset moving window size, and (m+m′,n+n′) is any point in the moving window centered on pixel (m,n).
4. An NSST-based visible light and infrared image fusion system, characterized by comprising the following modules:
an NSST transform module, for inputting the visible light and infrared images and performing the NSST transform to obtain the sub-band coefficients of the visible light image and the infrared image respectively, the sub-band coefficients comprising low-frequency sub-band coefficients and high-frequency sub-band coefficients;
a low-frequency sub-band coefficient fusion module, for calculating the low-frequency sub-band coefficients of the fused image from the low-frequency sub-band coefficients of the visible light image and the infrared image using a weighting strategy based on region energy feature values, implemented as follows:
for each pixel of the visible light image and the infrared image, the neighborhood average energy of the low-frequency sub-band coefficients is computed as
E_{vis}(m,n) = \frac{1}{X\times Y}\sum_{x=-(X-1)/2}^{(X+1)/2}\sum_{y=-(Y-1)/2}^{(Y+1)/2}\left[C_{vis}^{l}(m+x,n+y)\right]^{2}
E_{\mathrm{inf}}(m,n) = \frac{1}{X\times Y}\sum_{x=-(X-1)/2}^{(X+1)/2}\sum_{y=-(Y-1)/2}^{(Y+1)/2}\left[C_{\mathrm{inf}}^{l}(m+x,n+y)\right]^{2}
wherein C_inf^l(m+x,n+y) and C_vis^l(m+x,n+y) represent the low-frequency sub-band coefficients of pixel (m+x,n+y) in the infrared image and the visible light image respectively; E_vis(m,n) and E_inf(m,n) represent the neighborhood average energy of the visible light image and the infrared image at pixel (m,n) respectively; X × Y is the preset neighborhood size, and (m+x,n+y) is any point in the neighborhood of pixel (m,n);
for the visible light image and the infrared image, assembling, in the same way, the neighborhood average energies of the low-frequency sub-band coefficients of all pixels in each image into a neighborhood energy matrix; dividing each neighborhood energy matrix so obtained into several non-overlapping independent sub-regions, and calculating the region energy feature value of each sub-region; then, according to the region energy feature values of the sub-regions in the visible light image and the infrared image, calculating the low-frequency sub-band coefficients of the corresponding sub-regions in the fused image by the region-energy-feature-value weighting strategy, implemented as follows:
if a pixel (m,n) belongs to the i-th sub-region of the image, the low-frequency sub-band coefficient C_fus,i^l(m,n) of this pixel in the fused image is obtained from the fusion weights of the i-th sub-region as follows:
C_{fus,i}^{l}(m,n) = w_{vis,i}\times C_{vis}^{l}(m,n) + w_{\mathrm{inf},i}\times C_{\mathrm{inf}}^{l}(m,n)
wherein C_inf^l(m,n) and C_vis^l(m,n) represent the low-frequency sub-band coefficients of pixel (m,n) in the infrared image and the visible light image respectively; w_inf,i and w_vis,i represent the fusion weights of the i-th sub-region of the infrared image and the visible light image respectively, defined as
w_{vis,i} = E'_{vis,i}/(E'_{vis,i}+E'_{\mathrm{inf},i})
w_{\mathrm{inf},i} = E'_{\mathrm{inf},i}/(E'_{vis,i}+E'_{\mathrm{inf},i})
wherein E′_vis,i and E′_inf,i represent the region energy feature values of the i-th sub-region of the visible light image and the infrared image respectively;
a high-frequency sub-band coefficient fusion module, for calculating the high-frequency sub-band coefficients of the fused image from the high-frequency sub-band coefficients of the visible light image and the infrared image using a matching strategy based on the fourth-order correlation coefficient, implemented as follows:
setting a moving window; whenever the moving window traverses to a position, calculating the fourth-order correlation coefficient F of the high-frequency sub-band coefficients of the visible light image and the infrared image within the moving window, the window center at that position being the image pixel (m,n);
when F > th, the high-frequency sub-band coefficient of the fused image is obtained as
C_{fus}^{h}(m,n) = \begin{cases}(1-R)\cdot C_{\mathrm{inf}}^{h}(m,n)+R\cdot C_{vis}^{h}(m,n), & |C_{\mathrm{inf}}^{h}(m,n)|>|C_{vis}^{h}(m,n)|\\ (1-R)\cdot C_{vis}^{h}(m,n)+R\cdot C_{\mathrm{inf}}^{h}(m,n), & |C_{\mathrm{inf}}^{h}(m,n)|\le|C_{vis}^{h}(m,n)|\end{cases}
wherein the weighting coefficient R is calculated as
otherwise,
C_{fus}^{h}(m,n) = \begin{cases}C_{\mathrm{inf}}^{h}(m,n), & |C_{\mathrm{inf}}^{h}(m,n)|>|C_{vis}^{h}(m,n)|\\ C_{vis}^{h}(m,n), & |C_{\mathrm{inf}}^{h}(m,n)|\le|C_{vis}^{h}(m,n)|\end{cases}
wherein C_inf^h(m,n), C_vis^h(m,n) and C_fus^h(m,n) represent the high-frequency sub-band coefficients of the infrared image, the visible light image and the fused image at pixel (m,n) respectively, and th is a predetermined threshold;
an NSST inverse transform module, for performing the inverse NSST transform according to the low-frequency and high-frequency sub-band coefficients of the fused image obtained by the low-frequency sub-band coefficient fusion module and the high-frequency sub-band coefficient fusion module, to obtain the fused image.
5. The NSST-based visible light and infrared image fusion system according to claim 4, characterized in that: in the low-frequency sub-band coefficient fusion module, the region energy feature values E′_vis,i and E′_inf,i of the i-th sub-region of the visible light image and the infrared image are obtained as follows:
if the center of the i-th sub-region is a pixel (m,n) of the image, they are defined as
E'_{vis,i} = \frac{1}{M\times N}\sum_{a=-(M-1)/2}^{(M+1)/2}\sum_{b=-(N-1)/2}^{(N+1)/2} E_{vis}(m+a,n+b)
E'_{\mathrm{inf},i} = \frac{1}{M\times N}\sum_{a=-(M-1)/2}^{(M+1)/2}\sum_{b=-(N-1)/2}^{(N+1)/2} E_{\mathrm{inf}}(m+a,n+b)
wherein the neighborhood average energies E_vis(m+a,n+b) and E_inf(m+a,n+b) are those of the visible light image and the infrared image at pixel (m+a,n+b) respectively; M × N is the preset sub-region size, and (m+a,n+b) is any point in the i-th sub-region.
6. The NSST-based visible light and infrared image fusion system according to claim 4 or 5, characterized in that: in the high-frequency sub-band coefficient fusion module, the fourth-order correlation coefficient F is calculated as
F = \frac{1}{M'\times N'}\times\frac{\sum_{m'=1}^{M'}\sum_{n'=1}^{N'}\left(C_{\mathrm{inf}}^{h}(m+m',n+n')-\mu_{\mathrm{inf}}\right)^{2}\left(C_{vis}^{h}(m+m',n+n')-\mu_{vis}\right)^{2}}{\left(\sum_{m'=1}^{M'}\sum_{n'=1}^{N'}\left(C_{\mathrm{inf}}^{h}(m+m',n+n')-\mu_{\mathrm{inf}}\right)^{4}\right)\left(\sum_{m'=1}^{M'}\sum_{n'=1}^{N'}\left(C_{vis}^{h}(m+m',n+n')-\mu_{vis}\right)^{4}\right)}
wherein C_inf^h(m+m′,n+n′) and C_vis^h(m+m′,n+n′) represent the high-frequency sub-band coefficients of pixel (m+m′,n+n′) in the infrared image and the visible light image respectively; μ_inf and μ_vis represent the means of the high-frequency sub-band coefficients of the infrared image and the visible light image within the moving window; M′ × N′ is the preset moving window size, and (m+m′,n+n′) is any point in the moving window centered on pixel (m,n).
CN201410849724.3A 2014-12-30 2014-12-30 Visible light and infrared images fusion method based on NSST and system thereof Pending CN104504673A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410849724.3A CN104504673A (en) 2014-12-30 2014-12-30 Visible light and infrared images fusion method based on NSST and system thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410849724.3A CN104504673A (en) 2014-12-30 2014-12-30 Visible light and infrared images fusion method based on NSST and system thereof

Publications (1)

Publication Number Publication Date
CN104504673A true CN104504673A (en) 2015-04-08

Family

ID=52946067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410849724.3A Pending CN104504673A (en) 2014-12-30 2014-12-30 Visible light and infrared images fusion method based on NSST and system thereof

Country Status (1)

Country Link
CN (1) CN104504673A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101339653A (en) * 2008-01-30 2009-01-07 西安电子科技大学 Infrared and colorful visual light image fusion method based on color transfer and entropy information
CN102637297A (en) * 2012-03-21 2012-08-15 武汉大学 Visible light and infrared image fusion method based on Curvelet transformation
CN103530862A (en) * 2013-10-30 2014-01-22 重庆邮电大学 Infrared and low-level-light image fusion method based on NSCT (nonsubsampled contourlet transform) neighborhood characteristic regionalization
US8755597B1 (en) * 2011-02-24 2014-06-17 Exelis, Inc. Smart fusion of visible and infrared image data
CN104200452A (en) * 2014-09-05 2014-12-10 西安电子科技大学 Method and device for fusing infrared and visible light images based on spectral wavelet transformation


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YANG RUHONG, SHAO ZHENFENG, ZHANG LEI: "Fusion of infrared and visible light images in the NSCT domain based on the fourth-order correlation coefficient", Laser & Infrared (《激光与红外》) *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105069768A (en) * 2015-08-05 2015-11-18 武汉高德红外股份有限公司 Visible-light image and infrared image fusion processing system and fusion method
CN105069768B (en) * 2015-08-05 2017-12-29 武汉高德红外股份有限公司 A kind of visible images and infrared image fusion processing system and fusion method
CN105404855A (en) * 2015-10-29 2016-03-16 深圳怡化电脑股份有限公司 Image processing methods and devices
CN106846288A (en) * 2017-01-17 2017-06-13 中北大学 A kind of many algorithm fusion methods of bimodal infrared image difference characteristic Index
CN106846288B (en) * 2017-01-17 2019-09-06 中北大学 A kind of more algorithm fusion methods of bimodal infrared image difference characteristic Index
CN106897986B (en) * 2017-01-23 2019-08-20 浙江大学 A kind of visible images based on multiscale analysis and far infrared image interfusion method
CN106897986A (en) * 2017-01-23 2017-06-27 浙江大学 A kind of visible images based on multiscale analysis and far infrared image interfusion method
CN106997060A (en) * 2017-06-14 2017-08-01 中国石油大学(华东) A kind of seismic multi-attribute fusion method based on Shearlet fastICA
CN107657217A (en) * 2017-09-12 2018-02-02 电子科技大学 The fusion method of infrared and visible light video based on moving object detection
CN107657217B (en) * 2017-09-12 2021-04-30 电子科技大学 Infrared and visible light video fusion method based on moving target detection
CN109035189A (en) * 2018-07-17 2018-12-18 桂林电子科技大学 Infrared and weakly visible light image fusion method based on Cauchy's ambiguity function
CN109035189B (en) * 2018-07-17 2021-07-23 桂林电子科技大学 Infrared and weak visible light image fusion method based on Cauchy fuzzy function
CN109345495A (en) * 2018-09-11 2019-02-15 中国科学院长春光学精密机械与物理研究所 Image interfusion method and device based on energy minimum and gradient regularisation
CN109345495B (en) * 2018-09-11 2021-06-15 中国科学院长春光学精密机械与物理研究所 Image fusion method and device based on energy minimization and gradient regularization
CN110443111A (en) * 2019-06-13 2019-11-12 东风柳州汽车有限公司 Automatic Pilot target identification method
CN112508829A (en) * 2020-06-30 2021-03-16 南京理工大学 Pan-sharpening method based on shear wave transformation
CN112508829B (en) * 2020-06-30 2022-09-27 南京理工大学 Pan-sharpening method based on shear wave transformation
CN113947554A (en) * 2020-07-17 2022-01-18 四川大学 Multi-focus image fusion method based on NSST and significant information extraction
CN113947554B (en) * 2020-07-17 2023-07-14 四川大学 Multi-focus image fusion method based on NSST and significant information extraction
CN112233053A (en) * 2020-09-23 2021-01-15 浙江大华技术股份有限公司 Image fusion method, device, equipment and computer readable storage medium
CN113628151A (en) * 2021-08-06 2021-11-09 苏州东方克洛托光电技术有限公司 Infrared and visible light image fusion method
CN113628151B (en) * 2021-08-06 2024-04-26 苏州东方克洛托光电技术有限公司 Infrared and visible light image fusion method
CN113822833A (en) * 2021-09-26 2021-12-21 沈阳航空航天大学 Infrared and visible light image frequency domain fusion method based on convolutional neural network and regional energy
CN113822833B (en) * 2021-09-26 2024-01-16 沈阳航空航天大学 Infrared and visible light image frequency domain fusion method based on convolutional neural network and regional energy
CN115147325A (en) * 2022-09-05 2022-10-04 深圳清瑞博源智能科技有限公司 Image fusion method, device, equipment and storage medium
CN115147325B (en) * 2022-09-05 2022-11-22 深圳清瑞博源智能科技有限公司 Image fusion method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN104504673A (en) Visible light and infrared images fusion method based on NSST and system thereof
US20220044375A1 (en) Saliency Map Enhancement-Based Infrared and Visible Light Fusion Method
CN103020920B (en) Method for enhancing low-illumination images
Luan et al. Fast single image dehazing based on a regression model
CN103020933B (en) A kind of multisource image anastomosing method based on bionic visual mechanism
Meng et al. From night to day: GANs based low quality image enhancement
CN109447917B (en) Remote sensing image haze eliminating method based on content, characteristics and multi-scale model
CN102063713A (en) Neighborhood normalized gradient and neighborhood standard deviation-based multi-focus image fusion method
CN103971329A (en) Cellular nerve network with genetic algorithm (GACNN)-based multisource image fusion method
CN111539246B (en) Cross-spectrum face recognition method and device, electronic equipment and storage medium thereof
Wang et al. MAGAN: Unsupervised low-light image enhancement guided by mixed-attention
Xie et al. A binocular vision application in IoT: Realtime trustworthy road condition detection system in passable area
CN102096913B (en) Multi-strategy image fusion method under compressed sensing framework
CN103295010A (en) Illumination normalization method for processing face images
CN103985104B (en) Multi-focusing image fusion method based on higher-order singular value decomposition and fuzzy inference
CN112767286A (en) Dark light image self-adaptive enhancement method based on intensive deep learning
WO2023212997A1 (en) Knowledge distillation based neural network training method, device, and storage medium
CN102999890A (en) Method for dynamically calibrating image light intensity on basis of environmental factors
CN110148083B (en) Image fusion method based on rapid BEMD and deep learning
Cai et al. Perception preserving decolorization
CN112686913B (en) Object boundary detection and object segmentation model based on boundary attention consistency
Liu et al. Toward visual quality enhancement of dehazing effect with improved Cycle-GAN
CN113239749B (en) Cross-domain point cloud semantic segmentation method based on multi-modal joint learning
Zhao et al. Color channel fusion network for low-light image enhancement
CN105528772B (en) A kind of image interfusion method based on directiveness filtering

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150408

RJ01 Rejection of invention patent application after publication