CN109064436A - Image fusion method - Google Patents
Image fusion method
- Publication number: CN109064436A
- Application number: CN201810751977.5A
- Authority
- CN
- China
- Prior art keywords
- high frequency
- image
- low frequency
- scene
- subgraph
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30181—Earth observation
Abstract
The present invention provides an image fusion method comprising: acquiring a visible-light image and an infrared image of the same scene; decomposing the visible-light image to obtain a first high-frequency sub-image and a first low-frequency sub-image, and decomposing the infrared image to obtain a second high-frequency sub-image and a second low-frequency sub-image; generating a high-frequency fused image of the scene from the first and second high-frequency sub-images, and generating a low-frequency fused image of the scene from the first and second low-frequency sub-images; and computing the fused image of the scene from the low-frequency and high-frequency fused images. In application, this method fully preserves the brightness and contrast information of the source images while also highlighting detail (edge, texture, etc.) information, greatly improving the interpretability of the image.
Description
Technical field
The present invention relates to the field of image processing, and in particular to an image fusion method.
Background art

In aerospace research, aircraft carrying sensors at different altitudes can acquire abundant Earth-observation and aerial-reconnaissance data, serving purposes such as environmental monitoring, terrain classification and recognition, geological exploration, petroleum prospecting, and vegetation analysis. Image fusion technology arose from the need to make full use of the information collected by multiple sensors, to compensate for the limitations of the information obtained by any single sensor, and to adapt to different application demands. Image fusion merges the complementary information of multi-channel images of the same target so that the fused image carries the information of all the images to be fused and reflects the actual scene more accurately.

At present, existing fusion methods mainly compute the fused image using matrix operations and statistical estimation theory to realize information complementarity. Classical methods include weighted fusion, pixel-value take-max, pixel-value take-min, principal component analysis, and statistical estimation. However, these pixel-level fusion methods yield fused images with low contrast and cannot effectively preserve the fine details of the source images, so they fail to satisfy the application demands of many fields.
Summary of the invention
The present invention provides an image fusion method that improves the ability of the fused image to present fine scene details. The method comprises:

acquiring a visible-light image and an infrared image of the same scene;

decomposing the visible-light image to obtain a first high-frequency sub-image and a first low-frequency sub-image, and decomposing the infrared image to obtain a second high-frequency sub-image and a second low-frequency sub-image;

generating a high-frequency fused image of the scene from the first and second high-frequency sub-images, and generating a low-frequency fused image of the scene from the first and second low-frequency sub-images;

computing the fused image of the scene from the low-frequency and high-frequency fused images.
In a specific implementation, generating the low-frequency fused image of the scene from the first and second low-frequency sub-images is computed according to the following formulas:

I = A_l(row, col) · (A_l(row, col) ≤ B_l(row, col)) + B_l(row, col) · (A_l(row, col) > B_l(row, col));

D_l = A_l + B_l - I;

where I denotes the redundant information of the low-frequency fused image of the scene; the subscript l denotes low frequency; D_l denotes the low-frequency fused image of the scene; A_l denotes the first low-frequency sub-image; B_l denotes the second low-frequency sub-image; and (row, col) denotes a pixel position.
In a specific implementation, generating the high-frequency fused image of the scene from the first and second high-frequency sub-images comprises:

dividing the first and second high-frequency sub-images into multiple corresponding regions;

computing the Deng relational degree coefficient of the first and second high-frequency sub-images for each corresponding region;

fusing the first and second high-frequency sub-images of each corresponding region according to its Deng relational degree coefficient to generate the high-frequency fused image of each region;

computing the high-frequency fused image of the scene from the high-frequency fused images of the regions.

In a specific implementation, fusing the first and second high-frequency sub-images of each corresponding region according to its Deng relational degree coefficient comprises:

comparing the Deng relational degree coefficient of each corresponding region with a set threshold;

if the Deng relational degree coefficient of a corresponding region is greater than the set threshold, using the coefficient as the weighting coefficient to compute the region's high-frequency fused image; if the coefficient is less than the set threshold, computing the region's high-frequency fused image by the region-energy take-max criterion.
In a specific implementation, if the Deng relational degree coefficient of a corresponding region is greater than the set threshold, the coefficient is used as the weighting coefficient to compute the region's high-frequency fused image; if it is less than the set threshold, the region's high-frequency fused image is computed by the region-energy take-max criterion, according to the following formula:

D_h^{i,j} = R_D · A_h^{i,j} + (1 - R_D) · B_h^{i,j},   if R_D > th;

D_h^{i,j} = A_h^{i,j} if E(A_h^{i,j}) ≥ E(B_h^{i,j}), otherwise B_h^{i,j},   if R_D ≤ th;

where R_D denotes the Deng relational degree coefficient; th ∈ (0, 1) denotes the set threshold; D_h^{i,j} denotes the high-frequency fused image of a corresponding region in the j-th direction of the i-th layer; i denotes the layer index; j denotes the direction index; the subscript h denotes high frequency; and E(·) denotes region energy.
In a specific implementation, decomposing the visible-light image to obtain the first high-frequency and low-frequency sub-images and decomposing the infrared image to obtain the second high-frequency and low-frequency sub-images further comprises: performing a multi-level Curvelet decomposition on the visible-light image to obtain the first high-frequency sub-image and the first low-frequency sub-image, and performing a multi-level Curvelet decomposition on the infrared image to obtain the second high-frequency sub-image and the second low-frequency sub-image.
In a specific implementation, computing the fused image of the scene from the low-frequency and high-frequency fused images further comprises: applying the inverse Curvelet transform to the low-frequency and high-frequency fused images of the scene to obtain the fused image of the scene.
In a specific implementation, when the visible-light image and the infrared image of the same scene are acquired, registration processing is performed on them.
The present invention also provides a computer device comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor implements the image fusion method when executing the computer program.

The present invention also provides a computer-readable storage medium storing a computer program for executing the image fusion method.
According to the image fusion method of the invention, a visible-light image and an infrared image of the same scene are first acquired, and the two images are decomposed to obtain the first high-frequency and low-frequency sub-images and the second high-frequency and low-frequency sub-images, respectively; the first and second high-frequency sub-images and the first and second low-frequency sub-images are then fused to obtain the high-frequency and low-frequency fused images of the scene; finally, the low-frequency and high-frequency fused images of the scene are merged to obtain the fused image of the scene. In application, this method fully preserves the brightness and contrast information of the images while also highlighting detail (edge, texture, etc.) information, greatly improving the interpretability of the image.
Detailed description of the invention
To describe the specific embodiments of the invention or the technical solutions in the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Apparently, the drawings described below show only certain specific embodiments of the invention, and those of ordinary skill in the art may derive other drawings from them without creative effort. In the drawings:
Fig. 1 is a schematic flowchart of the image fusion method according to a specific embodiment of the invention;

Fig. 2 is a schematic flowchart of computing the high-frequency fused image of the scene according to a specific embodiment of the invention;

Fig. 3 is a schematic flowchart of computing the high-frequency fused image of each corresponding region according to a specific embodiment of the invention;

Fig. 4 is a schematic flowchart of image fusion based on the Curvelet transform and the Deng relational degree according to a specific embodiment of the invention;

Fig. 5 shows the comparative results of various image fusion methods in the image fusion experiment according to the invention.
Specific embodiment
To make the purposes, technical solutions, and advantages of the specific embodiments of the invention clearer, the embodiments are described in further detail below with reference to the accompanying drawings. The schematic embodiments of the invention and their descriptions are used to explain the invention, not to limit it.
As shown in Fig. 1, the present invention provides an image fusion method that improves the ability of the fused image to present fine scene details. The method comprises:

101: acquiring a visible-light image and an infrared image of the same scene;

102: decomposing the visible-light image to obtain a first high-frequency sub-image and a first low-frequency sub-image, and decomposing the infrared image to obtain a second high-frequency sub-image and a second low-frequency sub-image;

103: generating a high-frequency fused image of the scene from the first and second high-frequency sub-images, and generating a low-frequency fused image of the scene from the first and second low-frequency sub-images;

104: computing the fused image of the scene from the low-frequency and high-frequency fused images.
In a specific implementation, the first and second low-frequency sub-images can be fused in various ways. For example, since the low-frequency sub-images mainly carry the color and brightness information of the source images, and the low-frequency sub-images of the visible-light and infrared images carry complementary information, the redundant overlapping information must be removed during fusion so that the low-frequency fused image is more accurate. Specifically, the redundant information of the low-frequency fused image is first computed from the low-frequency sub-images of the two source images; the two low-frequency sub-images are then merged and the computed redundant information removed, yielding the low-frequency fused image of the scene. Thus, generating the low-frequency fused image of the scene from the first and second low-frequency sub-images can be computed according to the following formulas:

I = A_l(row, col) · (A_l(row, col) ≤ B_l(row, col)) + B_l(row, col) · (A_l(row, col) > B_l(row, col));

D_l = A_l + B_l - I;

where I denotes the redundant information of the low-frequency fused image of the scene; the subscript l denotes low frequency; D_l denotes the low-frequency fused image of the scene; A_l denotes the first low-frequency sub-image; B_l denotes the second low-frequency sub-image; and (row, col) denotes a pixel position.
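For illustration, the two formulas above can be sketched in Python with numpy. The comparison terms act as 0/1 masks, so I is the element-wise smaller coefficient (with ties going to A_l), and D_l = A_l + B_l - I reduces to keeping the element-wise larger coefficient; the sketch below makes that explicit.

```python
import numpy as np

def fuse_low_frequency(A_l, B_l):
    """Fuse two low-frequency sub-images, removing the redundant overlap.

    I keeps, at each pixel, the smaller of the two coefficients (the
    redundant information shared by both images); D_l = A_l + B_l - I
    then amounts to keeping the larger coefficient at each pixel.
    """
    I = np.where(A_l <= B_l, A_l, B_l)   # redundant (overlapping) part
    D_l = A_l + B_l - I                  # low-frequency fused image
    return D_l

A = np.array([[1.0, 5.0], [3.0, 2.0]])   # first low-frequency sub-image
B = np.array([[4.0, 2.0], [3.0, 6.0]])   # second low-frequency sub-image
print(fuse_low_frequency(A, B))          # element-wise maximum of A and B
```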
In a specific implementation, the high-frequency sub-images of the visible-light and infrared images can be fused in various ways. For example, the Deng relational degree has unique advantages in handling problems with few data, poor information, and uncertainty, and is well suited to data sequences formed from small pixel neighborhoods in an image. The Deng relational degree can therefore be introduced into the processing of the small-neighborhood pixel sequences of the high-frequency images after the Curvelet decomposition, ensuring that neighborhood information is not lost. Following its definition, the Deng relational degree can also serve as a measure of the local correlation of corresponding frequency-band sub-images of the source images after the Curvelet decomposition, and as the basis of the fusion rule. Specifically, since the high-frequency sub-images represent fine details such as edges and texture in the two source images, the high-frequency sub-images of the two source images can be divided identically into multiple regions in one-to-one correspondence so as to make full use of neighborhood information. The Deng relational degree of the high-frequency sub-image of the visible-light image (the first high-frequency sub-image) and that of the infrared image (the second high-frequency sub-image) is then computed for each corresponding region; the high-frequency fused image of each region is obtained according to the Deng relational degree; and once the high-frequency fused images of the regions are obtained, they need only be stitched together to obtain the high-frequency fused image of the scene. Thus, as shown in Fig. 2, generating the high-frequency fused image of the scene from the first and second high-frequency sub-images may comprise the following steps:

201: dividing the first and second high-frequency sub-images into multiple corresponding regions;

202: computing the Deng relational degree coefficient of the first and second high-frequency sub-images for each corresponding region;

203: fusing the first and second high-frequency sub-images of each corresponding region according to its Deng relational degree coefficient to generate the high-frequency fused image of each region;

204: computing the high-frequency fused image of the scene from the high-frequency fused images of the regions.
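Step 202 relies on Deng's grey relational degree. A minimal sketch of its standard definition follows; the distinguishing coefficient rho = 0.5 is the conventional choice and an assumption here, since the text does not fix its value.

```python
import numpy as np

def deng_relational_degree(ref, comp, rho=0.5):
    """Deng's grey relational degree between two equal-length sequences.

    ref and comp are 1-D sequences (e.g. the flattened pixels of two
    corresponding high-frequency regions).  rho is the distinguishing
    coefficient (conventionally 0.5).  The result lies in (0, 1].
    """
    ref = np.asarray(ref, dtype=float).ravel()
    comp = np.asarray(comp, dtype=float).ravel()
    delta = np.abs(ref - comp)            # point-wise absolute difference
    d_min, d_max = delta.min(), delta.max()
    if d_max == 0.0:                      # sequences coincide everywhere
        return 1.0
    # relational coefficient at each point, averaged to a single degree
    xi = (d_min + rho * d_max) / (delta + rho * d_max)
    return float(xi.mean())

print(deng_relational_degree([1, 2, 3], [1, 2, 3]))  # identical -> 1.0
```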
In a specific implementation, the multiple regions of the first and second high-frequency sub-images can be divided in various ways. For example, to use as much neighborhood information as possible while keeping the computation efficient, each corresponding region can be a region of 4 × 4 pixels.
In a specific implementation, the high-frequency fused image of each corresponding region can be computed from the Deng relational degree in various ways. For example, the Deng relational degree of each corresponding region can be compared with a set threshold, and the weighting of the high-frequency sub-images determined from the comparison result. Thus, as shown in Fig. 3, step 203 (fusing the first and second high-frequency sub-images of each corresponding region according to its Deng relational degree coefficient to generate the high-frequency fused image of each region) may comprise the following steps:

301: comparing the Deng relational degree coefficient of each corresponding region with the set threshold;

302: if the Deng relational degree coefficient of a corresponding region is greater than the set threshold, using the coefficient as the weighting coefficient to compute the region's high-frequency fused image; if the coefficient is less than the set threshold, computing the region's high-frequency fused image by the region-energy take-max criterion.

Specifically, if the Deng relational degree coefficient of a corresponding region is greater than the set threshold, the high-frequency sub-images of the two images are highly correlated in that region, and the coefficient can be used as the weighting coefficient. If the coefficient is less than the threshold, the high-frequency sub-images of the two images differ significantly in that region, and the region energy values of the two images must be compared instead, i.e., the region-energy take-max fusion rule is applied.
In a specific implementation, step 302 (if the Deng relational degree coefficient of a corresponding region is greater than the set threshold, using the coefficient as the weighting coefficient to compute the region's high-frequency fused image; if it is less than the set threshold, computing the region's high-frequency fused image by the region-energy take-max criterion) can be calculated according to the following formula:

D_h^{i,j} = R_D · A_h^{i,j} + (1 - R_D) · B_h^{i,j},   if R_D > th;

D_h^{i,j} = A_h^{i,j} if E(A_h^{i,j}) ≥ E(B_h^{i,j}), otherwise B_h^{i,j},   if R_D ≤ th;

where R_D denotes the Deng relational degree coefficient; th ∈ (0, 1) denotes the set threshold; D_h^{i,j} denotes the high-frequency fused image of a corresponding region in the j-th direction of the i-th layer; i denotes the layer index; j denotes the direction index; the subscript h denotes high frequency; and E(·) denotes region energy.
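The region-wise rule can be sketched as follows. Note one assumption: the text does not pin down which sub-image receives the weight R_D and which receives 1 - R_D, so the assignment below (R_D to the first sub-image) is one plausible reading, not the patent's definitive formula.

```python
import numpy as np

def fuse_high_frequency_block(A_h, B_h, r_d, th=0.7):
    """Fuse one pair of corresponding high-frequency blocks.

    r_d is the Deng relational degree of the two blocks; th in (0, 1)
    is the set threshold (0.7 in the experiments described).  Strongly
    correlated blocks are blended with r_d-based weights; otherwise the
    block with the larger region energy (sum of squares) is kept.
    """
    if r_d > th:
        # weighted blend; weight assignment is an illustrative assumption
        return r_d * A_h + (1.0 - r_d) * B_h
    # region-energy take-max rule
    if np.sum(A_h ** 2) >= np.sum(B_h ** 2):
        return A_h
    return B_h

A = np.ones((4, 4))    # hypothetical visible-light high-frequency block
B = np.zeros((4, 4))   # hypothetical infrared high-frequency block
```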
In a specific implementation, the visible-light and infrared images can be decomposed in various ways. For example, step 102 (decomposing the visible-light image to obtain the first high-frequency and low-frequency sub-images, and decomposing the infrared image to obtain the second high-frequency and low-frequency sub-images) may further comprise: performing a multi-level (n-level) Curvelet decomposition on the visible-light image to obtain the first high-frequency sub-image and the first low-frequency sub-image, and performing a multi-level Curvelet decomposition on the infrared image to obtain the second high-frequency sub-image and the second low-frequency sub-image.
In a specific implementation, the fused image of the scene can be computed in various ways. For example, step 104 (computing the fused image of the scene from the low-frequency and high-frequency fused images) may further comprise: applying the inverse Curvelet transform to the low-frequency and high-frequency fused images of the scene to obtain the fused image of the scene.
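The decompose-fuse-reconstruct structure of steps 102 and 104 can be illustrated without a curvelet library. The sketch below substitutes a one-level moving-average low/high split for the multi-level Curvelet transform (real code would use a curvelet implementation such as CurveLab); the point is only that the split is exactly invertible, so fusing the two bands and adding them back reconstructs an image.

```python
import numpy as np

def decompose(img, k=5):
    """One-level low/high split as a stand-in for the Curvelet transform.

    A separable k x k moving-average low-pass gives the low-frequency
    sub-image; the residual gives the high-frequency detail.  By
    construction low + high reconstructs the input exactly.
    """
    kernel = np.ones(k) / k
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    # separable blur: filter each row, then each column ('valid' restores size)
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, 'valid'), 1, padded)
    low = np.apply_along_axis(lambda c: np.convolve(c, kernel, 'valid'), 0, rows)
    high = img - low
    return low, high

img = np.random.default_rng(0).random((16, 16))
low, high = decompose(img)
```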
In a specific implementation, to make the fused image sharper, the visible-light and infrared images can also be preprocessed before fusion, in various ways. For example, in step 101, when the visible-light image and the infrared image of the same scene are acquired, registration processing can be performed on them.
In a specific implementation, as shown in Fig. 4, the image fusion method based on the Curvelet decomposition and the Deng relational degree coefficient comprises the following steps. First, the registered visible-light image A and the registered infrared image B are each decomposed by the Curvelet transform, obtaining the high-frequency and low-frequency sub-images of A and the high-frequency and low-frequency sub-images of B. The low-frequency sub-images of A and B are then fused and the redundant information removed, yielding the low-frequency fused image of the scene. The Deng relational degree coefficient of the high-frequency sub-images of A and B is also computed and compared with the set threshold: when the coefficient is greater than the threshold, the high-frequency sub-images of A and B are fused according to the coefficient; when it is less than the threshold, they are fused by the region-energy take-max rule. This step yields the high-frequency fused image of the scene. Finally, the inverse Curvelet transform is applied to the high-frequency and low-frequency fused images of the scene, giving the fused image C of the scene.
In a specific implementation, to verify the effectiveness of the method of the invention, an image fusion simulation experiment can be conducted with MATLAB 2014 as the simulation tool, applying the weighted fusion algorithm (JQ), the principal component analysis fusion algorithm (PCA), the wavelet fusion algorithm (WT), and the method of the invention to preprocessed visible-light and infrared images. The weight of the weighted fusion algorithm can be 0.5; the wavelet fusion algorithm can use the sym4 wavelet basis with 3 decomposition levels; and when the fusion method of the invention is used, the Curvelet decomposition of the source images can use the wrap method with 6 decomposition levels, with the set threshold th taken as 0.7 when comparing the Deng relational degree coefficient against it.

The experiment uses a 512 × 512 visible-light image and infrared image, and compares the fusion effects of the above methods and the method of the invention in terms of subjective and objective indicators. The experimental fusion results are shown in Fig. 5.
In terms of visual effect, the person, house, trees, etc. visible in the infrared image but invisible in the visible-light image are all presented in the fusion results. The weighted-average fusion performs worst: the image is blurred and details are rendered most poorly. The image from the principal component analysis fusion algorithm is darker than those of the wavelet method and the new algorithm and its details are not obvious, though it improves on the weighted-average algorithm. The subjective effects of the wavelet algorithm and the new algorithm differ little. In addition, four objective evaluation indicators, entropy (E), mean (M), standard deviation (STD), and average gradient (AG), are used in the experiment to quantitatively analyze the fusion effects of the above methods; the analysis results are shown in Table 1 (comparison of the evaluation indicators of the fusion methods).
Table 1
In Table 1, a larger standard deviation means larger gray-level differences in the fused image and richer detail information; the mean mainly reflects the brightness of the image, with a larger mean indicating a brighter image; a larger entropy means the image contains richer information; and a larger average gradient means a sharper image. The comparison in Table 1 shows that the method of the invention has clear advantages over the other three basic methods in mean, entropy, standard deviation, and average gradient; evidently the method markedly improves the detail, brightness, sharpness, and information richness of the fused image.
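Two of the four indicators, entropy and average gradient, can be sketched as follows (mean and standard deviation are simply np.mean and np.std). The average-gradient formula used here is one common definition; the text does not give its exact form.

```python
import numpy as np

def entropy(img):
    """Shannon entropy (bits) of an 8-bit image's gray-level histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    """Mean magnitude of horizontal/vertical differences (sharpness)."""
    img = img.astype(float)
    gx = np.diff(img, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]   # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2)))

flat = np.full((64, 64), 128)   # constant image: no information, no edges
noisy = np.random.default_rng(1).integers(0, 256, (64, 64))
```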
The present invention also provides a computer device comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor implements the image fusion method when executing the computer program.

The present invention also provides a computer-readable storage medium storing a computer program for executing the image fusion method.
In summary, according to the image fusion method of the invention, a visible-light image and an infrared image of the same scene are first acquired and decomposed to obtain the high-frequency and low-frequency sub-images of each; the high-frequency sub-images and the low-frequency sub-images of the two images are then fused to obtain the high-frequency and low-frequency fused images of the scene; finally, the low-frequency and high-frequency fused images are merged to obtain the fused image of the scene. In application, this method fully preserves the brightness and contrast information of the images while also highlighting detail (edge, texture, etc.) information, greatly improving the interpretability of the image.
Those skilled in the art should understand that the specific embodiments of the invention may be provided as a method, a system, or a computer program product. Accordingly, the invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The invention is described with reference to flowcharts and/or block diagrams of the method, the device (system), and the computer program product according to specific embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be realized by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or the other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operation steps are executed on the computer or the other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or the other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The specific embodiments described above further detail the objectives, technical solutions, and beneficial effects of the present invention. It should be understood that the foregoing are merely specific embodiments of the present invention and are not intended to limit its scope of protection; any modification, equivalent substitution, or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.
Claims (10)
1. An image fusion method, the method comprising:
obtaining a visible-light image and an infrared image of the same scene;
decomposing the visible-light image to obtain a first high-frequency sub-image and a first low-frequency sub-image, and decomposing the infrared image to obtain a second high-frequency sub-image and a second low-frequency sub-image;
generating a high-frequency fused image of the scene from the first high-frequency sub-image and the second high-frequency sub-image, and generating a low-frequency fused image of the scene from the first low-frequency sub-image and the second low-frequency sub-image;
computing a fused image of the scene from the low-frequency fused image and the high-frequency fused image of the scene.
2. The image fusion method of claim 1, wherein the low-frequency fused image of the scene is generated from the first low-frequency sub-image and the second low-frequency sub-image according to the following formulas:
I = A_l(row, col) .* (A_l(row, col) <= B_l(row, col)) + B_l(row, col) .* (A_l(row, col) > B_l(row, col));
D_l = A_l + B_l - I;
where I denotes the redundant component of the scene's low-frequency fused image; l denotes low frequency; D_l denotes the low-frequency fused image of the scene; A_l denotes the first low-frequency sub-image; B_l denotes the second low-frequency sub-image; and (row, col) denotes a pixel position.
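The two expressions in claim 2 use MATLAB-style element-wise operators: I selects, at each pixel, the smaller of the two low-frequency values (the redundant component the sub-images share), and D_l = A_l + B_l - I then reduces to the pixel-wise maximum. A minimal NumPy sketch of this rule (the array contents are illustrative, not from the patent):

```python
import numpy as np

def fuse_low_frequency(A_l, B_l):
    """Claim-2 low-frequency fusion rule.

    I is the pixel-wise minimum of the two low-frequency sub-images
    (their redundant component); D_l = A_l + B_l - I is then the
    pixel-wise maximum, used as the fused low-frequency image."""
    A_l = np.asarray(A_l, dtype=float)
    B_l = np.asarray(B_l, dtype=float)
    I = A_l * (A_l <= B_l) + B_l * (A_l > B_l)  # element-wise minimum
    D_l = A_l + B_l - I                          # element-wise maximum
    return D_l, I

A = np.array([[1.0, 5.0], [3.0, 2.0]])
B = np.array([[4.0, 2.0], [3.0, 6.0]])
D_l, I = fuse_low_frequency(A, B)
# D_l == np.maximum(A, B); I == np.minimum(A, B)
```

Note that at pixels where the two values are equal, only the first mask fires, so no value is counted twice.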
3. The image fusion method of claim 1, wherein generating the high-frequency fused image of the scene from the first high-frequency sub-image and the second high-frequency sub-image comprises:
dividing the first high-frequency sub-image and the second high-frequency sub-image into a plurality of corresponding regions;
computing the Deng grey relational coefficient of each corresponding region of the first and second high-frequency sub-images;
fusing the first and second high-frequency sub-images of each corresponding region according to that region's Deng grey relational coefficient, to generate the high-frequency fused image of each corresponding region;
computing the high-frequency fused image of the scene from the high-frequency fused images of the corresponding regions.
4. The image fusion method of claim 3, wherein fusing the first and second high-frequency sub-images of each corresponding region according to that region's Deng grey relational coefficient comprises:
comparing each corresponding region's Deng grey relational coefficient with a set threshold;
if a corresponding region's Deng grey relational coefficient is greater than the set threshold, computing the region's high-frequency fused image with the coefficient as the weighting weight; if the coefficient is less than the set threshold, computing the region's high-frequency fused image according to a take-the-larger-region-energy criterion.
5. The image fusion method of claim 4, wherein computing the region's high-frequency fused image with the Deng grey relational coefficient as the weighting weight when that coefficient is greater than the set threshold, and according to the take-the-larger-region-energy criterion when it is less than the set threshold, is performed according to the following formula:
where R_D denotes the Deng grey relational coefficient; Th ∈ (0, 1) denotes the set threshold; the left-hand side denotes the high-frequency fused image of a corresponding region in the j-th direction of the i-th layer; i denotes the layer index; j denotes the direction index; and H denotes high frequency.
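Claim 5's formula is not reproduced in the text, but claims 3–5 specify the rule: compute the Deng grey relational degree R_D between corresponding high-frequency regions, blend with R_D as the weight when R_D exceeds the threshold Th, and otherwise keep the region with the larger energy. The sketch below uses the classical form of Deng's grey relational coefficient (distinguishing coefficient rho, commonly 0.5) and an illustrative weighting scheme; neither detail is confirmed by the patent text:

```python
import numpy as np

def deng_grey_relational_degree(a, b, rho=0.5):
    """Classical Deng grey relational degree between two regions
    (flattened); rho is the distinguishing coefficient, commonly 0.5.
    Returns 1.0 for identical regions, smaller values otherwise."""
    delta = np.abs(np.ravel(a).astype(float) - np.ravel(b).astype(float))
    d_min, d_max = delta.min(), delta.max()
    eps = 1e-12  # guards the 0/0 case of identical regions
    coeff = (d_min + rho * d_max + eps) / (delta + rho * d_max + eps)
    return float(coeff.mean())

def fuse_region(a, b, th=0.7, rho=0.5):
    """Claims 4-5 rule: weighted fusion when the regions are strongly
    related (R_D > th); otherwise take the region with the larger
    energy (sum of squared high-frequency coefficients)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    r = deng_grey_relational_degree(a, b, rho)
    if r > th:
        return r * a + (1.0 - r) * b  # illustrative weighting by R_D
    return a if (a ** 2).sum() >= (b ** 2).sum() else b

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[2.0, 2.0], [3.0, 8.0]])
r = deng_grey_relational_degree(a, b)  # 0.75 for these values
```

With the default threshold 0.7 these regions take the weighted path; raising the threshold above 0.75 switches to the energy criterion, which here selects b.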
6. The image fusion method of claim 1, wherein decomposing the visible-light image to obtain the first high-frequency sub-image and the first low-frequency sub-image, and decomposing the infrared image to obtain the second high-frequency sub-image and the second low-frequency sub-image, further comprises: performing a multi-level curvelet decomposition on the visible-light image to obtain the first high-frequency sub-image and the first low-frequency sub-image, and performing a multi-level curvelet decomposition on the infrared image to obtain the second high-frequency sub-image and the second low-frequency sub-image.
7. The image fusion method of claim 1, wherein computing the fused image of the scene from the scene's low-frequency fused image and high-frequency fused image further comprises: performing a curvelet transform operation on the scene's low-frequency fused image and high-frequency fused image to obtain the fused image of the scene.
8. The image fusion method of claim 1, wherein, when the visible-light image and the infrared image of the same scene are obtained, registration processing is performed on the visible-light image and the infrared image.
9. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method of any one of claims 1 to 8.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program that executes the method of any one of claims 1 to 8.
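Taken together, claims 1, 2, 6, and 7 describe a band-split/fuse/reconstruct pipeline. A self-contained sketch follows; since a curvelet transform is impractical to inline, a single-level 3×3 mean-filter split stands in for the multi-level curvelet decomposition, and an absolute-maximum rule stands in for the grey-relational high-frequency rule of claims 3–5 (both substitutions are ours, not the patent's):

```python
import numpy as np

def decompose(img):
    """Stand-in for the patent's multi-level curvelet decomposition:
    a 3x3 mean filter gives the low-frequency sub-image; the residual
    is the high-frequency sub-image (img == low + high by construction)."""
    img = np.asarray(img, dtype=float)
    padded = np.pad(img, 1, mode="edge")
    low = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3)) / 9.0
    return low, img - low

def fuse(visible, infrared):
    """Claim-1 pipeline: decompose both images, fuse the low-frequency
    bands by pixel-wise maximum (claim 2 reduces to this), fuse the
    high-frequency bands by absolute-maximum selection, and invert the
    decomposition by summing the fused bands (standing in for the
    curvelet reconstruction of claim 7)."""
    v_low, v_high = decompose(visible)
    i_low, i_high = decompose(infrared)
    low_f = np.maximum(v_low, i_low)
    high_f = np.where(np.abs(v_high) >= np.abs(i_high), v_high, i_high)
    return low_f + high_f

fused = fuse(np.full((4, 4), 2.0), np.full((4, 4), 5.0))
# constant inputs have no high-frequency content, so the brighter image wins
```

For real use, the mean-filter split would be replaced by an actual multi-level, multi-direction curvelet (or wavelet) decomposition, with one fusion decision per layer i and direction j as in claim 5.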
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810751977.5A CN109064436A (en) | 2018-07-10 | 2018-07-10 | Image interfusion method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109064436A true CN109064436A (en) | 2018-12-21 |
Family
ID=64819431
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810751977.5A Pending CN109064436A (en) | 2018-07-10 | 2018-07-10 | Image interfusion method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109064436A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106327459A (en) * | 2016-09-06 | 2017-01-11 | 四川大学 | Visible light and infrared image fusion algorithm based on UDCT (Uniform Discrete Curvelet Transform) and PCNN (Pulse Coupled Neural Network) |
CN107341786A (en) * | 2017-06-20 | 2017-11-10 | 西北工业大学 | The infrared and visible light image fusion method that wavelet transformation represents with joint sparse |
CN107945149A (en) * | 2017-12-21 | 2018-04-20 | 西安工业大学 | Strengthen the auto Anti-Blooming Method of IHS Curvelet conversion fusion visible ray and infrared image |
Non-Patent Citations (1)
Title |
---|
YANG, YI; LIU, YUANLI: "Research and Simulation of a Visible-Light and Infrared Image Fusion Algorithm", Proceedings of the 11th China High-Level Forum on System Modeling and Simulation Technology, 2016 *
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113538303A (en) * | 2020-04-20 | 2021-10-22 | 杭州海康威视数字技术股份有限公司 | Image fusion method |
CN113538303B (en) * | 2020-04-20 | 2023-05-26 | 杭州海康威视数字技术股份有限公司 | Image fusion method |
CN112070664A (en) * | 2020-07-31 | 2020-12-11 | 华为技术有限公司 | Image processing method and device |
WO2022022288A1 (en) * | 2020-07-31 | 2022-02-03 | 华为技术有限公司 | Image processing method and apparatus |
CN112070664B (en) * | 2020-07-31 | 2023-11-03 | 华为技术有限公司 | Image processing method and device |
CN112258442A (en) * | 2020-11-12 | 2021-01-22 | Oppo广东移动通信有限公司 | Image fusion method and device, computer equipment and storage medium |
WO2022100250A1 (en) * | 2020-11-12 | 2022-05-19 | Oppo广东移动通信有限公司 | Method and apparatus for image fusion, computer device and storage medium |
CN113079325A (en) * | 2021-03-18 | 2021-07-06 | 北京拙河科技有限公司 | Method, apparatus, medium, and device for imaging billions of pixels under dim light conditions |
CN113079325B (en) * | 2021-03-18 | 2023-01-06 | 北京拙河科技有限公司 | Method, apparatus, medium, and device for imaging billions of pixels under dim light conditions |
CN113628151A (en) * | 2021-08-06 | 2021-11-09 | 苏州东方克洛托光电技术有限公司 | Infrared and visible light image fusion method |
CN113628151B (en) * | 2021-08-06 | 2024-04-26 | 苏州东方克洛托光电技术有限公司 | Infrared and visible light image fusion method |
WO2023137956A1 (en) * | 2022-01-18 | 2023-07-27 | 上海闻泰信息技术有限公司 | Image processing method and apparatus, electronic device, and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109064436A (en) | Image interfusion method | |
CN111292264B (en) | Image high dynamic range reconstruction method based on deep learning | |
CN112183788A (en) | Domain adaptive equipment operation detection system and method | |
CN109029381A (en) | Tunnel crack detection method, system and terminal device | |
Zhao et al. | Pyramid global context network for image dehazing | |
Yuan et al. | Image haze removal via reference retrieval and scene prior | |
CN112419327A (en) | Image segmentation method, system and device based on generation countermeasure network | |
CN110418139B (en) | Video super-resolution restoration method, device, equipment and storage medium | |
CN114998830A (en) | Wearing detection method and system for safety helmet of transformer substation personnel | |
CN111325697B (en) | Color image restoration method based on tensor eigen transformation | |
Wu et al. | Detection of foreign objects intrusion into transmission lines using diverse generation model | |
CN116580184A (en) | YOLOv 7-based lightweight model | |
CN114462486A (en) | Training method of image processing model, image processing method and related device | |
CN110175963A (en) | Image enhancement method and device for both underwater images and dark atmospheric images | |
Akhyar et al. | A beneficial dual transformation approach for deep learning networks used in steel surface defect detection | |
CN116824488A (en) | Target detection method based on transfer learning | |
CN113902744B (en) | Image detection method, system, equipment and storage medium based on lightweight network | |
Hepburn et al. | Enforcing perceptual consistency on generative adversarial networks by using the normalised laplacian pyramid distance | |
CN105956606A (en) | Method for re-identifying pedestrians on the basis of asymmetric transformation | |
CN116091331A (en) | Haze removing method and device for vehicle-mounted video of high-speed railway | |
Liang et al. | Multi-scale and multi-patch transformer for sandstorm image enhancement | |
Liu et al. | HPL-ViT: A Unified Perception Framework for Heterogeneous Parallel LiDARs in V2V | |
CN108364256A (en) | Image splicing detection method based on quaternion wavelet transform | |
Sengar et al. | Multi-task learning based approach for surgical video desmoking | |
Kumar et al. | Underwater Image Enhancement using deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20181221 |