CN106971385B - Real-time multi-source image fusion method and device for aircraft situational awareness - Google Patents
- Publication number: CN106971385B (application number CN201710203953.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- module
- sen
- parameter
- transformation parameter
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T3/14 — Transformations for image registration, e.g. adjusting or mapping for alignment of images
- G06T2207/10048 — Image acquisition modality: infrared image
- G06T2207/20221 — Image combination: image fusion; image merging
Abstract
The invention discloses a real-time multi-source image fusion method and device for aircraft situational awareness. An image input module supplies input images to an FPGA module, which frame-synchronizes them and forwards them to a DSP registration module. The DSP registration module calculates the scale-change parameter S between the input images, selects the corresponding matching module according to S, and obtains the transformation parameter Tfin. The FPGA module transforms the image to be registered using Tfin; the transformed image is decomposed by the FAMBED decomposition module into multiple frequency layers, which the FAMBED reconstruction module reconstructs to output the fused image. The invention solves the problem of real-time fusion of airborne multi-source images, enriches the information content of downlinked images, reduces the image downlink bandwidth, and improves the effectiveness of subsequent image processing.
Description
Technical field
The invention belongs to the field of image fusion processing and relates to a real-time multi-source image fusion method for aircraft situational awareness; it further relates to a device for real-time multi-source image fusion for aircraft situational awareness.
Background art
With the development of sensor technology, aircraft situational-awareness systems mostly contain multiple sensors, such as infrared and visible-light sensors, that perceive the same target area. The growing number of sensors makes real-time data transmission difficult, and under different conditions the target information obtained by a single sensor is limited, so the uploaded data carry little target information; multi-source image fusion has therefore become the key to solving this problem. Registration and fusion of multi-source images are the core technologies of a multi-source image fusion system and directly decide the success or failure of fusion. In existing patents, the input images to be fused are mostly already registered, show little scale variation, or have salient geometric features, and fusion is performed at the ground station.
Existing image-fusion techniques also use registration methods based on salient targets, region segmentation, or gradient correlation, which can hardly register heterologous images under large scale variation, such as that caused by random camera zooming. Others directly adopt SIFT, SURF and similar methods, but these still struggle to register heterologous images whose grey-level textures differ greatly, such as infrared versus visible light; moreover, their computational complexity is too high for airborne embedded real-time registration, so they cannot solve the problem of real-time fusion of airborne multi-source images.
The invention patent with publication number CN104574332A also proposed an airborne electro-optical pod image fusion method, which links the focal length of the visible-light imaging system to the corresponding focal length of the infrared imager, maps the focal lengths to a scale transformation, compensates the translation parameters, and thereby realizes real-time image fusion. That method demands very high focal-length accuracy of the imaging systems (error < 0.001 m) to achieve sub-pixel registration, cannot resist rotation, and thus can hardly achieve accurate real-time fusion under large scale and rotation transformations.
Summary of the invention
An object of the present invention is to provide a real-time multi-source image fusion method for aircraft situational awareness that solves the problem of real-time fusion of airborne multi-source images, enriches the information content of downlinked images, reduces the image downlink bandwidth, and improves the effectiveness of subsequent image processing.
Another object of the invention is to provide a device for real-time multi-source image fusion for aircraft situational awareness. The device can register and fuse images under the large scale changes caused by random zooming of the visible-light and infrared cameras, can register and fuse visible-light and infrared images with large grey-level differences around the clock, day and night, and can fuse airborne multi-source image data in real time in the flight environment.
The objects of the invention are achieved through the following technical solutions.
The real-time multi-source image fusion method for aircraft situational awareness comprises the following steps:
Step 1: taking the visible-light image as the reference image, obtain the transformation parameter Tfin by an image registration method, transform the infrared image accordingly with grey-level interpolation, and obtain the transformed infrared image.
Step 2: transform the visible-light image from RGB colour space to HSI space and extract the I component.
Step 3: decompose the I component of the visible-light image and the transformed infrared image into different frequency layers by the FAMBED decomposition algorithm.
Step 4: fuse the decomposed frequency layers, using weighted averaging for the low-frequency layer and magnitude-maximising weighted fusion of the decomposition values for the high-frequency layers.
Step 5: obtain the fused image by FAMBED reconstruction and inverse HSI transformation.
Further features of the invention are as follows.
The transformation parameter Tfin in step 1 is calculated as: step 1.1, frame-synchronize the infrared and visible-light images acquired in real time; step 1.2, calculate the scale-change parameter S of the infrared and visible-light images, where S = IVF*PsIR/(IRF*PsIV), IVF is the focal length of the visible-light camera, PsIV the pixel pitch of the visible-light camera, IRF the focal length of the infrared camera and PsIR the pixel pitch of the infrared camera; step 1.3, when 0.3 < S < 3 use the TopN-SURF matching algorithm, otherwise use the band-pass texture information matching algorithm; step 1.4, determine and output the transformation parameter Tfin from the selected matching algorithm.
The TopN-SURF matching algorithm of step 1.3 comprises: step 1.3.1, extract multiple feature points, build the feature-point quality space and a min-heap binary tree, and sort and select the feature points by their Hessian responses; step 1.3.2, determine the initial transformation parameter Tor between the images and rotate the image to be matched according to the rotation parameter in Tor; step 1.3.3, calculate the scale-change parameter S between the images from the real-time focal lengths and pixel pitches via S = IVF*PsIR/(IRF*PsIV), and use it to bound the feature-matching region.
The band-pass texture information matching algorithm of step 1.3 comprises: step 1.3.a, calculate the transformation parameter Tnew from the scale-change parameter S and the initial transformation parameter Tor; step 1.3.b, transform the image to be registered I_sen with Tnew to obtain the transformed image TI_sen, where TI_sen = I_sen*Tnew; step 1.3.c, apply Gaussian filters of different variances to a region of the reference image I_ref and of the transformed image TI_sen to extract band-pass texture information, obtaining BP_ref and BP_sen respectively; step 1.3.d, calculate the correlation coefficient Corr of BP_ref and BP_sen and determine the final translation parameters Tran_xf and Tran_yf; step 1.3.e, output the transformation parameter Tfin.
The FAMBED decomposition of the visible-light image in step 3 comprises: step 3.1, assign the I component extracted from the visible-light image IV to the initial image IVI, and determine the statistics order from the size of the filter window; step 3.2, apply maximum/minimum filtering to the initial image IVI and smooth the results to obtain the upper and lower envelope surfaces UE and LE, and from them the mean envelope surface ME; step 3.3, subtract the mean envelope surface ME from the initial image IVI to obtain the first high-frequency component IMF1, and feed ME back to the input; step 3.4, repeat steps 3.1-3.3 to obtain the second high-frequency component IMF2, the third high-frequency component IMF3 and the low-frequency component Residue.
The FAMBED reconstruction in step 5 sums the fused frequency layers.
Another technical solution of the invention is a device for real-time multi-source image fusion for aircraft situational awareness, comprising an image input module; an FPGA module that frame-synchronizes the input images and forwards them to a DSP registration module; the DSP registration module, which calculates the scale-change parameter S between the input images, selects the corresponding matching module according to S, and obtains the transformation parameter Tfin; the FPGA module then transforms the image to be registered with Tfin, a FAMBED decomposition module decomposes the transformed images into multiple frequency layers, and a FAMBED reconstruction module reconstructs the frequency layers and outputs the fused image.
Further features of the device are as follows.
The FAMBED decomposition module comprises a memory connected to a memory scheduler, which is connected to an envelope-surface computing module connected to an output module.
The FAMBED reconstruction module comprises a memory connected to a memory scheduler, which is connected to a fusion-calculation module connected to an output module.
When the DSP registration module judges that the scale-change parameter S lies between 0.3 and 3, it uses the TopN-SURF matching module and, if the number of computed matching point pairs is greater than 8, sends the transformation parameter Tfin to the FPGA module; when S < 0.3, S > 3, or the number of matching point pairs is less than 8, it sends Tfin to the FPGA module via the BPTI matching module.
Compared with the prior art, the beneficial effects of the invention are as follows. It solves the problem of real-time fusion of multi-source images under the large geometric deformation caused by real-time camera zooming and the grey-level changes caused by environmental variation. The method combines an optimized feature-description registration technique, TopN-SURF, with band-pass texture information (BPTI) registration and an optimized fast adaptive bidimensional empirical mode decomposition (FAMBED) fusion technique, implements them in embedded hardware, and was tested on live flight data, achieving real-time fusion of visible-light and infrared camera images at different focal lengths.
Further, when the image texture is relatively rich and the scale change lies between 0.3 and 3, the TopN-SURF matching algorithm achieves real-time embedded registration of visible-light (1024 × 768) and infrared (640 × 512) images.
A further beneficial effect is that the registration technique combining feature description with band-pass texture information achieves airborne real-time registration and fusion (30 f/s) of 1080P visible-light and infrared images under scale changes of up to 20 times and arbitrary rotation and translation. The fused image retains the strengths of the infrared image, such as sensitivity to thermal targets and strong penetration capability, while keeping the high contrast and clear target detail of the visible-light image. The device can be applied to dual-band non-coaxial imaging systems, alleviating the insufficient transmission bandwidth of multi-channel heterologous surveillance video and preserving continuity of target observation when the observation terminal switches cameras.
The invention can be applied to multi-source image fusion and perception on military reconnaissance aircraft, unmanned aerial vehicles, unmanned airships, missiles, aerial bombs and other military weapons, providing a reliable and stable information source for subsequent in-depth image processing.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the invention;
Fig. 2 is the image registration flow chart of the invention;
Fig. 3 is the embedded implementation flow chart of the device of the invention;
Fig. 4 is the work flow chart of the DSP registration module of the invention;
Fig. 5 is the decomposition flow chart of FAMBED in the invention;
Fig. 6 is the schematic diagram of the FAMBED decomposition module of the invention;
Fig. 7 is the schematic diagram of the FAMBED reconstruction module of the invention;
Fig. 8 shows the input images of the first embodiment group;
Fig. 9 shows the matching result of the TopN-SURF algorithm in the first embodiment group;
Fig. 10 shows the fusion result of the first embodiment group;
Fig. 11 shows the input images of the second embodiment group;
Fig. 12 shows the band-pass texture images of maximum correlation of the BPTI algorithm in the second embodiment group;
Fig. 13 shows the fusion result of the second embodiment group;
Fig. 14 shows the input images of the third embodiment group;
Fig. 15 shows the band-pass texture images of maximum correlation of the BPTI algorithm in the third embodiment group;
Fig. 16 shows the fusion result of the third embodiment group.
Specific embodiments
The invention will be described in further detail with reference to the accompanying drawings.
The invention provides a real-time multi-source image fusion method for aircraft situational awareness whose overall flow is shown in Fig. 1, comprising the following steps.
Step 1: taking the visible-light image as the reference image, obtain the transformation parameter Tfin by an image registration method, transform the infrared image accordingly with grey-level interpolation, and obtain the transformed infrared image. As shown in Fig. 2, the transformation parameter Tfin is calculated as follows. Step 1.1: frame-synchronize the infrared and visible-light images acquired in real time. Step 1.2: calculate the scale-change parameter S of the infrared and visible-light images, where
S = IVF*PsIR/(IRF*PsIV) (1)
IVF is the focal length of the visible-light camera, PsIV the pixel pitch of the visible-light camera, IRF the focal length of the infrared camera, and PsIR the pixel pitch of the infrared camera. Step 1.3: when 0.3 < S < 3, use the TopN-SURF matching algorithm; otherwise use the band-pass texture information matching algorithm. Step 1.4: determine and output the transformation parameter Tfin from the selected matching algorithm.
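The scale calculation of equation (1) and the matcher selection of steps 1.3-1.4 can be sketched as follows. This is an illustrative sketch, not the patent's embedded implementation; the function names are mine, and the numeric check uses the focal lengths and pixel pitches quoted in the embodiment groups below.

```python
def scale_change(ivf_mm, irf_mm, ps_ir_um, ps_iv_um):
    """Scale-change parameter S = IVF * PsIR / (IRF * PsIV), equation (1)."""
    return (ivf_mm * ps_ir_um) / (irf_mm * ps_iv_um)

def choose_matcher(s, ncm=None):
    """Select the registration algorithm from the scale-change parameter S.

    TopN-SURF is used when 0.3 < S < 3; otherwise, or when TopN-SURF later
    yields too few correct matches (ncm <= 8), the band-pass texture
    information (BPTI) matcher is used.
    """
    if 0.3 < s < 3 and (ncm is None or ncm > 8):
        return "TopN-SURF"
    return "BPTI"

# First embodiment group: IVF = 6.5 mm, IRF = 25 mm, PsIR = 15 um,
# PsIV = 4.75 um  ->  S ~ 0.821, so TopN-SURF is selected.
s = scale_change(6.5, 25.0, 15.0, 4.75)
```

Note that the second embodiment group (S = 9.28) falls to BPTI directly, while the third (S = 1.37 but fewer than 8 matches) falls back to BPTI after TopN-SURF fails.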
The TopN-SURF matching algorithm of step 1.3 comprises the following steps. Step 1.3.1: extract multiple feature points, build the feature-point quality space and a min-heap binary tree, and sort and select the feature points by their Hessian responses. Step 1.3.2: determine the initial transformation parameter Tor between the images and rotate the image to be matched according to the rotation parameter in Tor. Step 1.3.3: calculate the scale-change parameter S between the images from the real-time focal lengths and pixel pitches via S = IVF*PsIR/(IRF*PsIV), and use it to bound the feature-matching region.
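The feature-picking of step 1.3.1 (the source's "most rickle binary tree" appears to be a machine translation of min-heap) can be illustrated with a bounded min-heap keyed on the Hessian response. The names here are illustrative, not from the patent:

```python
import heapq

def top_n_keypoints(keypoints, n):
    """Keep the n keypoints with the largest Hessian response.

    `keypoints` is an iterable of (hessian_response, point) pairs. A
    min-heap of size <= n is maintained; its root is always the weakest
    kept point, so each new point only needs one comparison against it.
    """
    heap = []
    for resp, pt in keypoints:
        if len(heap) < n:
            heapq.heappush(heap, (resp, pt))
        elif resp > heap[0][0]:
            heapq.heapreplace(heap, (resp, pt))  # evict the weakest point
    # return the selected points, strongest response first
    return [pt for resp, pt in sorted(heap, reverse=True)]
```

Keeping only the strongest N responses bounds both the descriptor-computation and matching cost, which is what makes the embedded real-time registration feasible.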
In step 1.3, the band-pass texture information matching algorithm comprises the following steps. Step 1.3.a: from the scale-change parameter S and the initial transformation parameter Tor, assume
Tor = [t11 t12; t21 t22] (2)
Then, denoting the image centre coordinates of the image to be registered, the reference image and the transformed image to be registered as (Row_sen, Col_sen), (Row_ref, Col_ref) and (Row_new, Col_new) respectively, the updated translation parameters Tran_x and Tran_y are:
Tran_x = Col_ref - (Col_sen*t11 + Row_sen*t21)/Sor (3)
Tran_y = Row_ref - (Col_sen*t12 + Row_sen*t22)/Sor (4)
where Sor is the scale factor contained in the initial transformation Tor. The updated transformation parameter Tnew is then formed from the scale S, the rotation of Tor and the translations (Tran_x, Tran_y), as given by equation (5).
Step 1.3.b: transform the image to be registered I_sen with Tnew to obtain the transformed image TI_sen, where TI_sen = I_sen*Tnew. Step 1.3.c: apply Gaussian filters of different variances to a region of the reference image I_ref and of the transformed image TI_sen to extract band-pass texture information, obtaining BP_ref and BP_sen respectively, where
BP_ref = Gauss(δ1)*I_ref - Gauss(δ2)*I_ref (6)
BP_sen = Gauss(δ1)*TI_sen - Gauss(δ2)*TI_sen (7)
Step 1.3.d: jitter the translation parameters Tran_x and Tran_y over the range (-10, 10) and calculate the correlation coefficient of BP_ref and BP_sen,
Corr = corr2(BP_ref, BP_sen) (8)
and determine the final translation parameters Tran_xf and Tran_yf. Step 1.3.e: output the transformation parameter Tfin.
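Steps 1.3.c-1.3.d, difference-of-Gaussians band-pass extraction (equations (6)-(7)) followed by a correlation search over the translation jitter, can be sketched with NumPy/SciPy. This is an illustrative desktop version under my own naming, not the patent's DSP implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def bandpass(img, sigma1, sigma2):
    """Band-pass texture image BP = Gauss(d1)*I - Gauss(d2)*I, eqs (6)-(7)."""
    return gaussian_filter(img, sigma1) - gaussian_filter(img, sigma2)

def corr2(a, b):
    """2-D correlation coefficient, as in MATLAB's corr2 (equation (8))."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def refine_translation(bp_ref, bp_sen, radius=10):
    """Search the (-radius, radius) translation jitter of step 1.3.d for
    the shift that maximises the correlation of the band-pass images."""
    best = (-2.0, 0, 0)
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            c = corr2(bp_ref, shift(bp_sen, (dy, dx), order=0))
            if c > best[0]:
                best = (c, dx, dy)
    return best  # (Corr, x-jitter, y-jitter) -> gives Tran_xf, Tran_yf
```

The band-pass step is what makes the search robust to the large grey-level differences between infrared and visible-light imagery: only mid-frequency texture, common to both modalities, survives the difference of Gaussians.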
Step 2: transform the visible-light image from RGB colour space to HSI space and extract the I component.
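Step 2 only needs the intensity channel. The patent does not spell out its HSI conversion, so the sketch below uses the conventional definition of the HSI intensity component:

```python
import numpy as np

def hsi_intensity(rgb):
    """I component of the HSI colour space: I = (R + G + B) / 3.

    `rgb` is an H x W x 3 array; the result is an H x W intensity image,
    which is what gets passed to the FAMBED decomposition of step 3.
    """
    return rgb.mean(axis=-1)
```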
Step 3: decompose the I component of the visible-light image and the transformed infrared image into different frequency layers by the FAMBED decomposition algorithm. The FAMBED decomposition of the visible-light image comprises the following steps. Step 3.1: assign the I component extracted from the visible-light image IV to the initial image IVI, and determine the statistics order from the size of the filter window. Step 3.2: apply maximum/minimum filtering to the initial image IVI and smooth the results to obtain the upper and lower envelope surfaces UE and LE, and from them the mean envelope surface ME. Step 3.3: subtract the mean envelope surface ME from the initial image IVI to obtain the first high-frequency component IMF1, and feed ME back to the input. Step 3.4: repeat steps 3.1-3.3 to obtain the second high-frequency component IMF2, the third high-frequency component IMF3 and the low-frequency component Residue.
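The decomposition loop of steps 3.1-3.4 can be sketched as follows. This is a simplified illustration: a fixed filter window is used, whereas the patent derives the statistics order adaptively from the window size, and the patent's smoothing of the envelopes is approximated here by a uniform filter.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, uniform_filter

def fabemd_decompose(img, n_imfs=3, win=7):
    """Sketch of the bidimensional EMD of steps 3.1-3.4.

    Each pass: max/min order-statistics filtering followed by smoothing
    gives the upper and lower envelope surfaces UE and LE; their average
    is the mean envelope ME; IMF_k = input - ME; ME is fed back as the
    next input. Returns [IMF1, ..., IMFn, Residue].
    """
    layers, current = [], img.astype(float)
    for _ in range(n_imfs):
        ue = uniform_filter(maximum_filter(current, size=win), size=win)
        le = uniform_filter(minimum_filter(current, size=win), size=win)
        me = (ue + le) / 2.0          # mean envelope surface ME
        layers.append(current - me)   # high-frequency component IMF_k
        current = me                  # recurse on the mean envelope
    layers.append(current)            # low-frequency component Residue
    return layers
```

Because each IMF is the difference between successive inputs, the layers telescope: summing them reproduces the original image exactly, which is what makes the simple summation reconstruction of step 5 valid.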
Step 4: fuse the decomposed frequency layers, using weighted averaging for the low-frequency layer and magnitude-maximising weighted fusion of the decomposition values for the high-frequency layers.
Step 5: perform FAMBED reconstruction, i.e. sum the fused frequency layers, then apply the inverse HSI transformation to obtain the fused image.
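The fusion rules of steps 4-5 can be sketched as follows. The patent's "maximisation weighted fusion of the decomposition value" is interpreted here as a per-pixel choice of the larger-magnitude coefficient, which is one common reading; the function is illustrative, not the patent's exact rule.

```python
import numpy as np

def fuse_layers(layers_a, layers_b, w_low=0.5):
    """Step 4-5: fuse two FAMBED decompositions and reconstruct.

    The last layer (low-frequency residue) is fused by weighted averaging;
    every high-frequency IMF layer keeps, per pixel, the decomposition
    value of larger magnitude. Reconstruction is the sum of fused layers.
    """
    fused = []
    for i, (a, b) in enumerate(zip(layers_a, layers_b)):
        if i == len(layers_a) - 1:                       # low-frequency residue
            fused.append(w_low * a + (1 - w_low) * b)
        else:                                            # high-frequency IMFs
            fused.append(np.where(np.abs(a) >= np.abs(b), a, b))
    return sum(fused)
```

Averaging the residues preserves overall brightness from both sensors, while the magnitude-maximising rule on the IMFs keeps the sharpest detail, whichever modality it comes from.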
The invention also provides a device for real-time multi-source image fusion for aircraft situational awareness, designed around the above method. As shown in Fig. 3, the image input module comprises a visible-light image input module Vimg(i) and an infrared image input module Fimg(i); the acquired visible-light and infrared images are fed to the FPGA module, which frame-synchronizes them and sends the processed images to the DSP registration module. The DSP registration module calculates the scale-change parameter S between the input images; its workflow, shown in Fig. 4, is as follows. When the DSP registration module judges that S lies between 0.3 and 3, it uses the TopN-SURF matching module and, if the number of computed matching point pairs is greater than 8, sends the transformation parameter Tfin to the FPGA module; when S < 0.3, S > 3, or the number of matching point pairs is less than 8, it sends Tfin to the FPGA module via the BPTI matching module. The FPGA module transforms the image to be registered with Tfin; the FAMBED decomposition module decomposes the transformed images into multiple frequency layers; and the FAMBED reconstruction module reconstructs the frequency layers and outputs the fused image.
As shown in Fig. 6, the FAMBED decomposition module comprises a memory connected to a memory scheduler, which is connected to an envelope-surface computing module connected to an output module.
As shown in Fig. 7, the FAMBED reconstruction module comprises a memory connected to a memory scheduler, which is connected to a fusion-calculation module connected to an output module.
As shown in Fig. 5, the visible-light and infrared images are decomposed by the FAMBED decomposition module into multiple frequency layers, comprising the first high-frequency component IMF1, the second high-frequency component IMF2, ..., the n-th high-frequency component IMFn and the low-frequency component Residue; the FAMBED reconstruction module then sums the fused frequency layers to obtain the fused image.
The device for real-time multi-source image fusion for aircraft situational awareness of the invention is implemented on a DSP + FPGA architecture and interconnected with external systems through connectors; the cables require twisting and shielding. Image input and output are realized through LVDS chips. The DSP registration module is implemented on a TMS320C6713 chip and interconnected with the FPGA module through a UPP interface whose transmission rate reaches 70 M × 8 bit, meeting the requirement of real-time image transmission; an SPI interface transmits the registration parameters between the DSP registration module and the FPGA module, realizing the registration and decomposition of the infrared image.
The coordinate transformation of the images, image synchronization, sending/receiving data to and from the DSP registration module, output-image control and timing generation are realized by the FPGA module.
The input power supply is required to be 16 V-36 V, using an LT3680 chip with a 4.5 V-36 V input and a maximum output current of 3 A.
As shown in Fig. 4, the workflow of the DSP registration module is: (a) initialize the registers, configure the UPP interface, the RS422 serial port, the SPI interface and interrupts, and allocate buffers; (b) open UPP reception of the infrared and visible-light image data sent by the FPGA module, read the focal-length information of the infrared and visible-light images, and proceed to the next step once the focal lengths have remained constant for 5 consecutive frames; (c) calculate the scale factor from the focal lengths, and proceed to step (d) if it lies between 0.3 and 3, otherwise to step (e); (d) run the TopN-SURF registration algorithm, and proceed to step (f) if the number of effective matches is greater than 8, otherwise to step (e); (e) run the BPTI registration algorithm; (f) send the registration parameters to the FPGA via SPI.
Specific embodiments of the invention are as follows.
Three groups of real data acquired by an unmanned aerial vehicle are processed. In the first group the scale change is small and the texture information relatively rich; the fusion system registers with TopN-SURF and fuses with FABEMD. In the second group the scale change is large but the texture information clear; the fusion system registers with BPTI and then fuses with FABEMD. In the third group the scale change is small but the texture information blurred; TopN-SURF fails, so the fusion system registers with BPTI and fuses with FABEMD.
First embodiment group
As shown in Fig. 8, the scale change between the two images of the first group is small and their texture information rich; Fig. 8a is the visible-light image and Fig. 8b the infrared image.
Following the processing flow of the fusion system, after receiving the images the DSP first computes, from the focal-length parameters IVF = 6.5 mm and IRF = 25 mm and the pixel pitches PsIR = 15 μm and PsIV = 4.75 μm, the scale change between the images by equation (1): S = 0.821. By the decision rule the visible-light image is taken as the reference image (I_ref) and the infrared image as the image to be registered (I_sen). The system then calls the TopN-SURF algorithm; the matching result is shown in Fig. 9, with NCM = 130 matching point pairs, greater than 8, so the system uses the resulting transformation parameter Tfin for the subsequent fusion calculation.
Using the parameter Tfin provided by the DSP, the FPGA of the fusion system transforms the image to be registered: TI_sen = I_sen*Tfin, and fuses I_ref and TI_sen with the FABEMD algorithm; the final fusion result is shown in Fig. 10.
Second embodiment group
As shown in Fig. 11, the scale change between the two images of the second group is large; Fig. 11a is the visible-light image and Fig. 11b the infrared image. The feature description vectors of the TopN-SURF algorithm differ too much for accurate matching. Following the fusion method, after receiving the images the DSP first computes, from the focal-length parameters IVF = 130.2 mm and IRF = 44.3 mm and the pixel pitches PsIR = 15 μm and PsIV = 4.75 μm, the scale change between the images by equation (1): S = 9.28. By the decision rule the infrared image is taken as the reference image I_ref and the visible-light image as the image to be registered I_sen. The system then calls the BPTI method; the band-pass images of maximum correlation are shown in Fig. 12, where Fig. 12a is the visible-light image region and Fig. 12b the corresponding infrared image region.
Using the parameter Tfin provided by the DSP running the BPTI algorithm, the FPGA of the fusion system transforms the image to be registered: TI_sen = I_sen*Tfin, and fuses I_ref and TI_sen with the FABEMD algorithm; the final fusion result is shown in Fig. 13.
Third group embodiment
As shown in Figure 14, the scale difference between the two input images of the third embodiment is small, but their texture information is not rich; Figure 14a is the visible-light image and Figure 14b is the infrared image.
Following the fusion method, after receiving the images the DSP first computes the scale-change parameter S between them from Equation (1), using the focal lengths IVF = 130.2 mm and IRF = 300 mm and the sensor pixel sizes PsIV = 4.75 μm and PsIR = 15 μm, giving S = 1.37. By the decision criterion the infrared image is taken as the reference image I_ref and the visible-light image as the image to be registered I_sen. The number of correct matching point pairs obtained by the TopN-SURF algorithm is less than 8, so the system automatically calls the BPTI matching algorithm and computes the band-pass information maps of maximum correlation, shown in Figure 15, where Figure 15a is a region of the visible-light image and Figure 15b is the corresponding region of the infrared image.
The FPGA of the fusion system uses the parameter Tfin provided by the BPTI algorithm on the DSP to transform the image to be registered: TI_sen = I_sen * Tfin, and fuses I_ref and TI_sen with the FABEMD algorithm. The final fusion result is shown in Figure 16.
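Each embodiment ends by resampling the image to be registered under Tfin (TI_sen = I_sen * Tfin) with grayscale interpolation. A sketch using `scipy.ndimage.affine_transform`, assuming Tfin is represented as a 3×3 homogeneous matrix (the patent does not fix a representation):

```python
import numpy as np
from scipy.ndimage import affine_transform

def warp_image(I_sen, Tfin):
    """Resample I_sen under the homogeneous 3x3 transform Tfin with bilinear
    interpolation. affine_transform maps output coordinates to input
    coordinates, so the inverse of Tfin is supplied."""
    Tinv = np.linalg.inv(np.asarray(Tfin, dtype=np.float64))
    return affine_transform(I_sen, Tinv[:2, :2], offset=Tinv[:2, 2], order=1)

I_sen = np.arange(16, dtype=np.float64).reshape(4, 4)
TI_sen = warp_image(I_sen, np.eye(3))  # an identity Tfin leaves the image unchanged
```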
Claims (6)
1. A real-time multi-source image fusion method for aircraft situation awareness, characterized by comprising the following steps:
Step 1: taking the visible-light image as the reference image, obtaining a first transformation parameter Tfin by an image registration method, transforming the infrared image accordingly, and interpolating the image grayscale to obtain the transformed infrared image; the first transformation parameter Tfin is computed by:
Step 1.1: frame-synchronizing the infrared and visible-light images acquired in real time;
Step 1.2: computing the scale-change parameter S between the infrared and visible-light images, where S = IVF*PsIR/IRF/PsIV, IVF is the focal length of the visible-light camera, PsIV is the pixel size of the visible-light camera, IRF is the focal length of the infrared camera, and PsIR is the pixel size of the infrared camera;
Step 1.3: when 0.3 < S < 3, using the TopN-SURF matching algorithm; otherwise using the band-pass texture information matching algorithm;
the TopN-SURF matching algorithm comprises the following steps:
Step 1.3.1: extracting a plurality of feature points, building a feature-point mass space and a minimum-heap binary tree, and sorting the selected feature points by Hessian response;
Step 1.3.2: determining the initial transformation parameter Tor between the images, and rotating the image to be registered according to the rotation parameter in Tor;
Step 1.3.3: computing the scale-change parameter S between the images from the real-time focal-length and pixel-size information by S = IVF*PsIR/IRF/PsIV, and thereby restricting the feature-matching region;
Step 1.4: determining and outputting the first transformation parameter Tfin according to the corresponding matching algorithm;
the band-pass texture information matching algorithm comprises the following steps:
Step 1.3.a: computing a second transformation parameter Tnew from the scale-change parameter S and the initial transformation parameter Tor;
Step 1.3.b: transforming the image to be registered I_sen by the second transformation parameter Tnew to obtain the transformed image TI_sen, where TI_sen = I_sen*Tnew;
Step 1.3.c: extracting band-pass texture information from a region of the reference image I_ref by Gaussian filtering to obtain BP_ref: BP_ref = Gauss(δ1)*I_ref − Gauss(δ2)*I_ref; and extracting band-pass texture information from the corresponding region of the transformed image TI_sen by Gaussian filtering to obtain BP_sen: BP_sen = Gauss(δ1)*TI_sen − Gauss(δ2)*TI_sen;
Step 1.3.d: computing the correlation coefficient Corr of BP_ref and BP_sen, and determining the final translation transformation parameters Tran_xf and Tran_yf;
Step 1.3.e: outputting the first transformation parameter Tfin;
Step 2: converting the visible-light image from its color space to the HSI space and extracting the I component;
Step 3: decomposing the I component of the visible-light image and the transformed infrared image into different frequency layers by the FABEMD decomposition algorithm; the FABEMD decomposition of the visible-light image comprises the following steps:
Step 3.1: assigning the I component extracted from the visible-light image IV to the initial image IVI, and determining the statistical order according to the size of the filtering window;
Step 3.2: applying maximum/minimum filtering to the initial image IVI and smoothing the results to obtain the upper/lower envelope surfaces UE and LE, and from them the mean envelope surface ME;
Step 3.3: subtracting the mean envelope surface ME from the initial image IVI to obtain the first-layer high-frequency component IMF1, and feeding the mean envelope surface ME back to the input;
Step 3.4: repeating steps 3.1-3.3 to obtain the second-layer high-frequency component IMF2, the third-layer high-frequency component IMF3, and the low-frequency component Residue;
Step 4: fusing the decomposed frequency layers, where the low-frequency image layer is fused by weighted averaging and the high-frequency image layers are fused by maximum-value weighting of the decomposition values;
Step 5: obtaining the fused image by FABEMD reconstruction and inverse HSI transformation.
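Steps 1.3.c and 1.3.d of the band-pass texture information (BPTI) matching can be sketched as a difference-of-Gaussians band-pass followed by an exhaustive correlation search over integer shifts; the σ values, search radius, and search strategy here are illustrative assumptions, not taken from the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift

def bandpass(img, d1=1.0, d2=3.0):
    """Band-pass texture extraction: Gauss(d1)*img - Gauss(d2)*img, with d1 < d2."""
    return gaussian_filter(img, d1) - gaussian_filter(img, d2)

def corr(a, b):
    """Correlation coefficient Corr of two equally sized band-pass patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_translation(BP_ref, BP_sen, search=3):
    """Search integer shifts of BP_sen for the maximum-correlation offset,
    yielding the final translation parameters (Tran_xf, Tran_yf)."""
    best = (-2.0, 0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            c = corr(BP_ref, nd_shift(BP_sen, (dy, dx), order=0))
            if c > best[0]:
                best = (c, dx, dy)
    return best  # (Corr, Tran_xf, Tran_yf)
```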
2. The real-time multi-source image fusion method for aircraft situation awareness according to claim 1, characterized in that the FABEMD reconstruction in step 5 is: summing the obtained fused frequency layers.
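A compact sketch of the FABEMD decomposition of steps 3.1–3.4 and the claim-2 reconstruction. Real FABEMD derives the window size from the spacing of local extrema; the fixed window and uniform-filter smoothing used here are simplifying assumptions:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, uniform_filter

def fabemd_sift(img, win=7):
    """One sifting step: upper/lower envelopes -> mean envelope -> IMF."""
    UE = uniform_filter(maximum_filter(img, size=win), size=win)  # smoothed upper envelope
    LE = uniform_filter(minimum_filter(img, size=win), size=win)  # smoothed lower envelope
    ME = (UE + LE) / 2.0                                          # mean envelope surface
    return img - ME, ME              # (high-frequency IMF, input for the next level)

def fabemd_decompose(img, levels=3, win=7):
    """Peel off `levels` high-frequency IMFs; what remains is the residue."""
    imfs, cur = [], np.asarray(img, dtype=np.float64)
    for _ in range(levels):
        imf, cur = fabemd_sift(cur, win)
        imfs.append(imf)
    return imfs, cur

def fabemd_reconstruct(imfs, residue):
    """Claim-2 reconstruction: the frequency layers are simply summed."""
    return sum(imfs) + residue
```

Because each IMF is the difference between successive mean envelopes, the sum of the IMFs and the residue telescopes back to the original image exactly.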
3. A device for real-time multi-source image fusion for aircraft situation awareness based on the real-time multi-source image fusion method for aircraft situation awareness according to claim 1, characterized by comprising an image input module; an FPGA module frame-synchronizes the input images and forwards them to a DSP registration module; the DSP registration module computes the scale-change parameter S between the input images, performs matching with the corresponding matching module according to S, and obtains the first transformation parameter Tfin; the FPGA module transforms the image to be registered by the first transformation parameter Tfin, and the transformed image is decomposed by a FABEMD decomposition module into a plurality of frequency layers; the frequency layers are reconstructed by a FABEMD reconstruction module, which outputs the fused image.
4. The device for real-time multi-source image fusion for aircraft situation awareness according to claim 3, characterized in that the FABEMD decomposition module comprises a memory, the memory is connected to a memory scheduler, the memory scheduler is connected to an envelope-surface computing module, and the envelope-surface computing module is connected to an output module.
5. The device for real-time multi-source image fusion for aircraft situation awareness according to claim 3, characterized in that the FABEMD reconstruction module comprises a memory, the memory is connected to a memory scheduler, the memory scheduler is connected to a fusion calculation module, and the fusion calculation module is connected to an output module.
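The fusion rules executed by the fusion calculation module (step 4 of claim 1) can be sketched as follows; the equal weights used for the low-frequency average are an assumption:

```python
import numpy as np

def fuse_lowfreq(res_a, res_b, w=0.5):
    """Low-frequency (residue) layers: weighted average fusion."""
    return w * res_a + (1.0 - w) * res_b

def fuse_highfreq(imf_a, imf_b):
    """High-frequency (IMF) layers: keep the coefficient of larger magnitude,
    a common reading of maximum-value weighting of the decomposition values."""
    return np.where(np.abs(imf_a) >= np.abs(imf_b), imf_a, imf_b)

a = np.array([[2.0, -5.0], [0.5, 1.0]])   # e.g. an IMF layer of the visible image
b = np.array([[-3.0, 4.0], [0.2, -2.0]])  # corresponding IMF layer of the infrared image
fused_hi = fuse_highfreq(a, b)  # [[-3., -5.], [0.5, -2.]]
fused_lo = fuse_lowfreq(a, b)
```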
6. The device for real-time multi-source image fusion for aircraft situation awareness according to claim 3, characterized in that when the scale-change parameter S computed by the DSP registration module lies between 0.3 and 3 and the number of matching point pairs computed by the TopN-SURF matching module is greater than 8, the first transformation parameter Tfin is sent to the FPGA module; when the scale-change parameter S < 0.3 or S > 3, or the number of matching point pairs is less than 8, the band-pass texture information (BPTI) matching module runs the band-pass texture information matching algorithm and sends the first transformation parameter Tfin to the FPGA module.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710203953.1A CN106971385B (en) | 2017-03-30 | 2017-03-30 | A kind of aircraft Situation Awareness multi-source image real time integrating method and its device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106971385A CN106971385A (en) | 2017-07-21 |
CN106971385B true CN106971385B (en) | 2019-10-01 |
Family
ID=59335488
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |