CN105096285A - Image fusion and target tracking system based on multi-core DSP - Google Patents


Publication number
CN105096285A
CN105096285A · CN201410221719.8A
Authority
CN
China
Prior art keywords
image
processing module
target
infrared
module
Prior art date
Legal status
Pending
Application number
CN201410221719.8A
Other languages
Chinese (zh)
Inventor
钱惟贤
余明
廖逸琪
韩鲁
孙爱娟
何蔼玲
顾国华
任侃
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN201410221719.8A
Publication of CN105096285A
Legal status: Pending

Landscapes

  • Image Processing (AREA)

Abstract

The present invention provides an image fusion and target tracking system based on a multi-core DSP. The system includes a coaxial optical platform, a communication module, an image preprocessing module, an image fusion processing module, a target enhancement processing module, a target tracking module, and a computer. The communication module, the image preprocessing module, the image fusion processing module, the target enhancement processing module, and the target tracking module are realized through software programming on five processor cores of the DSP. The system can process images rapidly in real time and feed the processing results back to the computer promptly for real-time display.

Description

Image fusion and target tracking system based on a multi-core DSP
Technical field
The invention belongs to the field of image processing, and specifically relates to an image fusion and target tracking system based on a multi-core DSP.
Background technology
Image fusion refers to combining, by means of image processing and computer technology, image data about the same target collected through multiple source channels: the most advantageous information of each channel is extracted and finally synthesized into a single high-quality image. In brief, an algorithm merges two or more images into a new image, combining the rich information of the source images into one data set, so that the resulting image is more reliable and the scene representation richer. Image fusion is widely applied in many fields, including remote sensing image processing and analysis, automatic recognition, computer vision, and medical image processing. In the military field, image fusion plays a vital role in systems such as precision guidance, autonomous shells, miniature military robots, battlefield reconnaissance vehicles, and target tracking. Moving-target tracking determines the position of a moving target of interest in each frame of a video sequence, and is widely used in systems such as intelligent video surveillance and vehicle detection.
Common image fusion techniques include pyramid-based fusion methods and wavelet-based fusion methods with improved models. However, these methods emphasize software implementation, generally on general-purpose platforms such as the Matlab simulation software or Visual C++; they do not set out a complete hardware and software system, cannot effectively implement the algorithm on a hardware platform, and therefore cannot be engineered and commercialized. In addition, methods implemented on DaVinci-series hardware processing platforms, although convenient for image acquisition and input, cannot track targets and display tracking results in real time, and cannot meet the requirements of fast and parallel processing.
Summary of the invention
The present invention proposes an image fusion and target tracking system based on a multi-core DSP, which can process images quickly in real time and feed the results back to the computer promptly for real-time display.
To solve the above technical problem, the invention provides an image fusion and target tracking system based on a multi-core DSP, comprising a coaxial optical platform, a communication module, an image preprocessing module, an image fusion processing module, a target enhancement processing module, a target tracking module, and a computer; the communication module, the image preprocessing module, the image fusion processing module, the target enhancement processing module, and the target tracking module are each implemented in software on one of five processor cores of the DSP;
The coaxial optical platform consists of a half-silvered (semi-transparent, semi-reflective) mirror, a visible-light CCD, and an infrared CCD; the mirror and the two CCDs lie in the same plane; the half-silvered mirror separates the incident light of the target scene into visible light and infrared light; the visible-light CCD captures the separated visible image of the target scene; the infrared CCD captures the separated infrared image of the target scene;
The computer transmits the image data of the visible and infrared images captured by the visible-light CCD and the infrared CCD to the communication module, and also displays the target tracking results;
The communication module receives and stores the image data, and triggers the image preprocessing module with a trigger signal;
The image preprocessing module applies bilateral filtering to the visible and infrared images to remove image noise, and triggers the image fusion processing module with a trigger signal;
The image fusion processing module fuses the visible and infrared images, and triggers the target enhancement processing module with a trigger signal;
The target enhancement processing module applies an interval-based color-mapping pseudo-color transform to the fused image to enhance the target in the image, and triggers the target tracking module with a trigger signal;
The target tracking module marks the moving target in the image to obtain the target tracking result, triggers the communication module with a trigger signal, and sends the tracking result to the computer for display.
Compared with the prior art, the present invention has the following significant advantages: (1) a coaxial optical platform is used to produce the visible and infrared images; this platform is convenient to build, small in volume, and efficient in its use of light, and the visible and infrared images obtained are of high quality; (2) using the multi-core configuration of the DSP, the image fusion and target tracking workflow for the visible and infrared images is divided into several sub-processes that cooperate through inter-core communication, improving the efficiency of the whole processing chain; in addition, the DSP's four-way multiply-accumulate capability and internal parallel sections allow the fusion and tracking programs to be optimized with multiple data streams and pipelined operation, saving considerable time, greatly increasing the speed of image processing, and allowing results to be returned to the screen in real time for image quality inspection and control.
Brief description of the drawings
Fig. 1 is the workflow diagram of the image fusion and target tracking system based on a multi-core DSP of the present invention.
Fig. 2 is a schematic diagram of the coaxial optical platform of the present invention.
Fig. 3 shows the 5*5 window neighborhood used in the present invention.
Fig. 4 shows the visible and infrared images collected when testing the method of the invention.
Fig. 5 shows the image obtained by fusing the visible and infrared images of Fig. 4.
Fig. 6 shows the image obtained by applying the pseudo-color transform to Fig. 5.
Fig. 7 shows the tracking result obtained by applying target tracking to Fig. 6, with the target marked by a white box.
Detailed description of the embodiments
A visible image is a reflection image: it has many high-frequency components and can reflect scene detail under adequate illumination, but the contrast of a visible (low-light) image is low when illumination is poor. An infrared image is a radiation image: its gray level is determined by the temperature difference between object and background, and it cannot reflect the real scene. Using either visible or infrared images alone therefore has shortcomings. Because the two kinds of image are complementary, image fusion can effectively combine and mine their characteristic information, enhance scene understanding, and highlight the target, which helps detect targets faster and more accurately under concealment, camouflage, and deception. To this end, the present invention proposes an image fusion and target tracking system based on a multi-core DSP.
System composition
The image fusion and target tracking system based on a multi-core DSP of the present invention comprises a coaxial optical platform, a communication module, an image preprocessing module, an image fusion processing module, a target enhancement processing module, a target tracking module, and a computer; the communication module, the image preprocessing module, the image fusion processing module, the target enhancement processing module, and the target tracking module are each implemented in software on one of five processor cores of the DSP; the five cores divide the work among themselves, communicate with one another, and jointly complete the image fusion and target tracking process;
As shown in Fig. 2, the coaxial optical platform consists of a half-silvered mirror, a visible-light CCD (image sensor), and an infrared CCD (image sensor), all lying in the same plane; the half-silvered mirror separates the incident light of the target scene into two wavebands, namely visible light and infrared light; the visible-light CCD captures the separated visible image of the target scene, and the infrared CCD captures the separated infrared image. Compared with a parallel light path, this coaxial system is smaller, easier to build, and more efficient in its use of light. Moreover, a parallel light path suffers from problems such as image offset and distortion when observing close targets, whereas in the coaxial path the two wavebands observed from near and far targets are separated from the same incident beam, so these effects are much smaller than in a parallel-path system.
The computer transmits the image data of the visible and infrared images captured by the visible-light CCD and the infrared CCD to the communication module, and displays the target tracking results;
The communication module receives and stores the image data, and triggers the image preprocessing module with a trigger signal;
The image preprocessing module applies bilateral filtering to the visible and infrared images to remove image noise, and triggers the image fusion processing module with a trigger signal;
The image fusion processing module fuses the visible and infrared images, and triggers the target enhancement processing module with a trigger signal;
The target enhancement processing module applies an interval-based color-mapping pseudo-color transform to the fused image to enhance the target in the image, and triggers the target tracking module with a trigger signal;
The target tracking module marks the moving target in the image to obtain the target tracking result, triggers the communication module with a trigger signal, and sends the tracking result to the computer for display.
Working process
As shown in Fig. 1, when the system tracks a target in a scene, the incident light from the target scene strikes the half-silvered mirror at an angle of 45°, and the mirror separates it into two wavebands: the visible part of the incident light is transmitted through the mirror, while the infrared part is reflected. The visible-light CCD collects the transmitted visible light to obtain the visible image of the scene, and the infrared CCD collects the reflected infrared light to obtain the infrared image;
The computer sends the image data of the visible and infrared images to the communication module;
The communication module stores the image data as it is received; once one frame of visible image data and one frame of infrared image data have been received and stored, the communication module sends a trigger signal to the image preprocessing module;
On receiving the trigger signal from the communication module, the image preprocessing module reads the image data stored in the communication module, applies bilateral filtering to the visible and infrared images to remove image noise, saves the filtered image data, and then sends a trigger signal to the image fusion processing module;
On receiving the trigger signal from the image preprocessing module, the image fusion processing module reads the filtered image data stored by the preprocessing module, first performs pyramid decomposition of the visible and infrared images, then performs Laplacian-pyramid-based fusion to obtain the fused image, saves the fused image data, and sends a trigger signal to the target enhancement processing module;
On receiving the trigger signal from the image fusion processing module, the target enhancement processing module reads the fused image data, applies the interval-based color-mapping pseudo-color transform to the fused image, obtains and saves the pseudo-color image with the target enhanced, and sends a trigger signal to the target tracking module;
On receiving the trigger signal from the target enhancement processing module, the target tracking module reads the stored pseudo-color image and marks the moving target in it, for example with a red box, making the target more prominent; this completes the tracking step and produces the saved tracking result. The target tracking module then sends a trigger signal to the communication module;
On receiving the trigger signal from the target tracking module, the communication module reads the saved tracking result, sends it back to the computer, and the result is displayed on the computer.
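The trigger-signal chain above amounts to a linear pipeline in which each stage's completion starts the next. Below is a minimal single-process sketch of that dataflow (the stage names, the dict-based hand-off, and the flag values are invented for illustration; the patent realizes each stage on its own DSP core with inter-core trigger signals, not Python callables):

```python
def run_pipeline(visible, infrared, stages):
    """Run the stages in order; each stage returning its result plays the
    role of the inter-core trigger signal plus the shared image store."""
    data = {"visible": visible, "infrared": infrared}
    for name, stage in stages:
        data = stage(data)
    return data

# Stand-in stages mirroring the five DSP cores: store, denoise, fuse,
# pseudo-color, track. Real image processing is replaced by flags.
stages = [
    ("communication", lambda d: {**d, "stored": True}),
    ("preprocessing", lambda d: {**d, "denoised": True}),
    ("fusion",        lambda d: {**d, "fused": True}),
    ("enhancement",   lambda d: {**d, "pseudo_color": True}),
    ("tracking",      lambda d: {**d, "track_result": "bbox"}),
]

result = run_pipeline("vis-frame", "ir-frame", stages)
```

On the DSP, the same linear ordering means each core can begin its next frame as soon as it has triggered its successor, which is what enables the pipelined operation described above.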
Preferred implementations of certain concrete technical means of the invention:
1. The image data transmitted between the computer and the communication module takes the form of a matrix φ, as shown in formula (1):

φ = [ I(0,0)          I(0,1)          ...  I(0,Width-1)
      I(1,0)          I(1,1)          ...  I(1,Width-1)
      ...             I(x,y)          ...
      I(Height-1,0)   I(Height-1,1)   ...  I(Height-1,Width-1) ]        (1)

In formula (1), Width and Height are the width and height of the visible or infrared image, and I(x, y) is the pixel value at pixel (x, y).
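As a concrete illustration of formula (1), the transmitted frame is simply a Height x Width row-major matrix of pixel values (a sketch only; NumPy and the sample sizes are our choice, not part of the patent):

```python
import numpy as np

# phi[x, y] holds the pixel value I(x, y); row index x runs over Height,
# column index y over Width, exactly the layout of formula (1).
Height, Width = 4, 6
phi = np.arange(Height * Width, dtype=np.uint8).reshape(Height, Width)

assert phi.shape == (Height, Width)
assert phi[1, 2] == 1 * Width + 2  # row-major: element (x, y) sits at x*Width + y
```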
2. In the image preprocessing module, the same bilateral filtering method is applied to the visible image data and the infrared image data. The specific steps are:
2.1 Choose a window neighborhood of N*N size, and compute the Gaussian spatial standard deviation σ_d and the brightness standard deviation σ_r for the window;
2.2 For two pixels (i, j) and (k, l) at different positions in the visible or infrared image, compute the domain kernel d(i, j, k, l) and the range kernel r(i, j, k, l) from their positional relationship and pixel-value relationship, as shown in formula (2):

d(i, j, k, l) = exp( -((i - k)^2 + (j - l)^2) / (2 σ_d^2) )
r(i, j, k, l) = exp( -||I(i, j) - I(k, l)||^2 / (2 σ_r^2) )        (2)

2.3 Compute the weight coefficient w(i, j, k, l), as shown in formula (3):

w(i, j, k, l) = d(i, j, k, l) * r(i, j, k, l)
             = exp( -((i - k)^2 + (j - l)^2) / (2 σ_d^2) - ||I(i, j) - I(k, l)||^2 / (2 σ_r^2) )        (3)

2.4 Compute the pixel value h(i, j) of the bilaterally filtered visible or infrared image at point (i, j), as shown in formula (4):

h(i, j) = Σ_{k,l} I(k, l) w(i, j, k, l) / Σ_{k,l} w(i, j, k, l)        (4)
Applying bilateral filtering to the visible and infrared images in this way removes noise while preserving image edges well, because the bilateral filter is composed of the two functions d(i, j, k, l) and r(i, j, k, l): d determines filter coefficients from geometric (spatial) distance, while r determines them from pixel-value differences, so position information and pixel information are considered simultaneously.
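Under the definitions of formulas (2)-(4), the bilateral filter can be sketched as follows (a plain NumPy reference implementation, not the patent's DSP code; the window size and σ values are illustrative defaults):

```python
import numpy as np

def bilateral_filter(img, n=5, sigma_d=2.0, sigma_r=25.0):
    """Bilateral filtering over an n x n window, following formulas (2)-(4).
    Border handling uses edge replication, which the patent does not specify."""
    img = img.astype(np.float64)
    half = n // 2
    padded = np.pad(img, half, mode='edge')
    out = np.empty_like(img)
    # Precompute the domain (spatial) kernel d(i, j, k, l) of formula (2).
    ax = np.arange(-half, half + 1)
    dy, dx = np.meshgrid(ax, ax, indexing='ij')
    d = np.exp(-(dx**2 + dy**2) / (2 * sigma_d**2))
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + n, j:j + n]
            # Range kernel r(i, j, k, l) from pixel-value differences.
            r = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            w = d * r                                 # formula (3)
            out[i, j] = (patch * w).sum() / w.sum()   # formula (4)
    return out
```

Because the range kernel collapses for pixels whose values differ sharply from the center, strong edges contribute almost nothing to the average and are therefore preserved, which is the behavior the paragraph above describes.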
3. In the image fusion processing module, the same Laplacian pyramid decomposition is applied to the visible image and the infrared image. The detailed process is:
3.1 Label the bilaterally filtered visible or infrared image as the original image G_0. Down-sample G_0 by dropping every other row and column to obtain the first-layer image G_1; apply the same down-sampling to G_1 to obtain the second-layer image G_2; and so on, obtaining the l-th layer image G_l as shown in formula (5):

G_l(i, j) = Σ_{m=-2..2} Σ_{n=-2..2} ω(m, n) G_{l-1}(2i + m, 2j + n),   (1 ≤ l ≤ N, 0 ≤ i < R_l, 0 ≤ j < C_l)        (5)

The full set of layer images G_l (0 ≤ l ≤ N) of formula (5) forms the Gaussian pyramid. In formula (5), N is the level number of the top layer of the Gaussian pyramid, R_l and C_l are the numbers of rows and columns of layer l, and ω(m, n) is a separable two-dimensional 5*5 window function, with (m, n) the corresponding position in the 5*5 window and m, n taking values in [-2, -1, 0, 1, 2]. The expression for ω(m, n) is shown in formula (6):

ω = (1/256) *
    [ 1  4  6  4  1
      4 16 24 16  4
      6 24 36 24  6
      4 16 24 16  4
      1  4  6  4  1 ]        (6)

3.2 Interpolate each layer image G_l of the Gaussian pyramid to obtain the interpolated image G_l*, computed as shown in formula (7); G_l* has the same size as G_{l-1}, i.e. each layer of the Gaussian pyramid is approximately enlarged by a factor of four:

G_l*(i, j) = 4 Σ_{m=-2..2} Σ_{n=-2..2} ω(m, n) G_l( (i + m)/2, (j + n)/2 ),   (1 ≤ l ≤ N, 0 ≤ i < R_{l-1}, 0 ≤ j < C_{l-1})        (7)

where only the terms with integer (i + m)/2 and (j + n)/2 contribute.
3.3 Let

L_l = G_l - G*_{l+1},   0 ≤ l < N
L_N = G_N,              l = N        (8)

The full set of images L_l (0 ≤ l ≤ N) of formula (8) constitutes the Laplacian pyramid, where N is the level number of the top of the Laplacian pyramid and L_l is the l-th layer image of the Laplacian decomposition.
In this way the N-layer Laplacian pyramid images of the visible and infrared images are obtained; layer for layer, the Laplacian pyramid images of the visible image and of the infrared image have identical sizes.
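Steps 3.1-3.3 can be sketched in NumPy as follows (an illustrative version only: the nearest-neighbor EXPAND and the edge padding are simplifications of formula (7), not the patent's exact interpolation):

```python
import numpy as np

# Generating kernel omega of formula (6): outer product of [1, 4, 6, 4, 1]/16.
w1 = np.array([1., 4., 6., 4., 1.]) / 16.0
OMEGA = np.outer(w1, w1)   # sums to 1; equals (1/256) times the 5x5 table

def reduce_(img):
    """One REDUCE step of formula (5): 5x5 smoothing with OMEGA, then
    drop every other row and column. Borders use edge replication."""
    pad = np.pad(img, 2, mode='edge')
    sm = np.zeros_like(img, dtype=np.float64)
    for m in range(-2, 3):
        for n in range(-2, 3):
            sm += OMEGA[m + 2, n + 2] * pad[2 + m:2 + m + img.shape[0],
                                            2 + n:2 + n + img.shape[1]]
    return sm[::2, ::2]

def laplacian_pyramid(img, levels=3):
    """Formula (8): L_l = G_l - EXPAND(G_{l+1}); the top level keeps G_N."""
    g = [img.astype(np.float64)]
    for _ in range(levels):
        g.append(reduce_(g[-1]))
    lap = []
    for l in range(levels):
        # Nearest-neighbor EXPAND stands in for the interpolation of (7).
        up = np.kron(g[l + 1], np.ones((2, 2)))[:g[l].shape[0], :g[l].shape[1]]
        lap.append(g[l] - up)
    lap.append(g[-1])
    return lap
```

For a constant image every detail layer L_l (l < N) is zero and the top layer carries the constant, which matches the decomposition's intent: the L_l layers hold band-pass detail and G_N holds the residual low-pass content.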
The Laplacian-pyramid-based fusion process is: compare the pixel values at corresponding positions of each pair of corresponding layers of the visible-image and infrared-image Laplacian pyramids, and take the pixel with the larger value as the pixel at that position of the fused (composite) Laplacian pyramid. The fusion rule is shown in formula (9):

L_Fl(i, j) = L_1l(i, j)  if L_1l(i, j) ≥ L_2l(i, j)
L_Fl(i, j) = L_2l(i, j)  otherwise        (9)

In formula (9), L_1l is the l-th layer of the decomposed visible image, L_2l is the l-th layer of the decomposed infrared image, and L_Fl is the l-th layer of the fused pyramid; reconstructing the pyramid yields the fused image L_0F. Compared with the original visible and infrared images, the fused image contains more image information, making the information about target and background more comprehensive and abundant.
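Taken literally from the text, the per-layer rule of formula (9) (keep the larger pixel value at each position; note that many pyramid-fusion implementations compare absolute values instead) reduces to an elementwise maximum:

```python
import numpy as np

def fuse_level(l_vis, l_ir):
    """Fusion rule of formula (9): at each position, keep the pixel with
    the larger value from the visible (l_vis) or infrared (l_ir) layer."""
    return np.where(l_vis >= l_ir, l_vis, l_ir)
```

Applied layer by layer to the two Laplacian pyramids, this produces the composite pyramid from which the fused image is reconstructed.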
4. In the target enhancement processing module, the interval-based color-mapping pseudo-color transform of the fused image proceeds as follows:
4.1 The fused image L_0F is a gray-level image, so the pixel values of L_0F are gray values in the range [0, 255], where gray value 0 corresponds to black and gray value 255 to white. Divide the pixel values of L_0F by 255, compressing the gray values into [0, 1]; denote the compressed pixel value of L_0F at pixel (x, y) by g(x, y);
4.2 Map the compressed pixel values g(x, y) to different colors by interval. Colors are composed of the three primaries R (red), G (green), and B (blue) combined in different proportions, different proportions giving different colors. The concrete mapping is:
When g(i, j) ≤ 0.25, the R, G, B color components are as shown in formula (10):

R(i, j) = 0;  G(i, j) = 0;  B(i, j) = 4 g(i, j)        (10)

When 0.25 < g(i, j) ≤ 0.375, the R, G, B color components are as shown in formula (11):

R(i, j) = 4 (g(i, j) - 0.25);  G(i, j) = 0;  B(i, j) = 1        (11)

When 0.375 < g(i, j) < 0.5, the R, G, B color components are as shown in formula (12):

R(i, j) = 4 (g(i, j) - 0.25);  G(i, j) = 0;  B(i, j) = 1 - 8 (g(i, j) - 0.375)        (12)

When g(i, j) = 0.5, the R, G, B color components are as shown in formula (13):

R(i, j) = 1;  G(i, j) = 0;  B(i, j) = 0        (13)

When 0.5 < g(i, j) ≤ 0.75, the R, G, B color components are as shown in formula (14):

R(i, j) = 1;  G(i, j) = 4 (g(i, j) - 0.5);  B(i, j) = 0        (14)

When 0.75 < g(i, j) ≤ 1, the R, G, B color components are as shown in formula (15):

R(i, j) = 1;  G(i, j) = 1;  B(i, j) = 4 (g(i, j) - 0.75)        (15)

4.3 Multiply the three-channel R, G, B values of each pixel by 255 to obtain the pixel values of the fused image L_0F after the pseudo-color transform.
Applying the pseudo-color transform to the fused image makes the target more prominent in the image and, when necessary, can also make the background more distinct.
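The piecewise mapping of steps 4.1-4.3 can be sketched as follows (an illustrative NumPy version; vectorized boolean masks stand in for the per-pixel branches of formulas (10)-(15)):

```python
import numpy as np

def pseudo_color(gray):
    """Interval-based pseudo-color transform of formulas (10)-(15).
    Input: 8-bit gray image; output: H x W x 3 array with R, G, B in [0, 255]."""
    g = gray.astype(np.float64) / 255.0            # step 4.1: compress to [0, 1]
    R = np.zeros_like(g); G = np.zeros_like(g); B = np.zeros_like(g)
    m = g <= 0.25
    B[m] = 4 * g[m]                                           # formula (10)
    m = (g > 0.25) & (g <= 0.375)
    R[m] = 4 * (g[m] - 0.25); B[m] = 1                        # formula (11)
    m = (g > 0.375) & (g < 0.5)
    R[m] = 4 * (g[m] - 0.25); B[m] = 1 - 8 * (g[m] - 0.375)   # formula (12)
    m = g == 0.5
    R[m] = 1                                                  # formula (13)
    m = (g > 0.5) & (g <= 0.75)
    R[m] = 1; G[m] = 4 * (g[m] - 0.5)                         # formula (14)
    m = g > 0.75
    R[m] = 1; G[m] = 1; B[m] = 4 * (g[m] - 0.75)              # formula (15)
    return (np.stack([R, G, B], axis=-1) * 255).round()       # step 4.3
```

The mapping runs from black through blue, red, orange and yellow to white as g increases, so hot (bright) regions of the fused image stand out as warm colors.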
To further illustrate the beneficial effects of the invention, a concrete experiment was carried out with real images.
1. The coaxial optical platform was built outdoors, with the optical path arranged as shown in Fig. 2. For an arbitrarily chosen outdoor scene, the light from the scene passes through the coaxial optical platform; the visible image is collected on the visible-light CCD and the infrared image on the infrared CCD. The collected visible and infrared images are shown in Fig. 4 (a) and (b) respectively.
2. First, for the obtained visible image Fig. 4(a) and infrared image Fig. 4(b), a neighborhood window of N*N size with N = 5, as shown in Fig. 3, was chosen and bilateral filtering applied, giving the filtered visible image data and infrared image data.
Second, Laplacian pyramid decomposition and image fusion were applied to the filtered visible and infrared images, giving the fused image shown in Fig. 5. The fused image was also evaluated; the evaluation results are given in Table 1.
Then, the pseudo-color transform was applied to the fused image Fig. 5, giving the image shown in Fig. 6.
Finally, target tracking was applied to the pseudo-color image Fig. 6; the target was marked with a white box, giving the tracking result shown in Fig. 7, and the complete tracking image data was sent back to the computer for display.
3. After the experiment was completed, the experimental data were analyzed as follows:
A. After fusing the visible image Fig. 4(a) and the infrared image Fig. 4(b): compared with the visible image Fig. 4(a) collected by the visible-light CCD, the fused image Fig. 5 has a richer background; compared with the infrared image Fig. 4(b) collected by the infrared CCD, the target information in the fused image is more comprehensive.
To evaluate the quality of the fused image further, the present invention uses entropy, spatial frequency, and average gradient to evaluate the images; the data are shown in Table 1.

Table 1: Image fusion statistics

Image             Entropy   Spatial frequency   Average gradient
Visible image     7.2173    10.3021             5.762
Infrared image    6.5218    2.4935              2.0154
Fused image       7.8147    10.9733             6.2683

As the experimental data of Table 1 show, after image fusion the entropy, spatial frequency, and average gradient of the fused image are all somewhat higher than those of both the visible image collected by the visible-light CCD and the infrared image collected by the infrared CCD: the fused image contains more, and more comprehensive, information than either source image before fusion, and the image is also clearer.
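The three quality measures of Table 1 are standard fused-image metrics; the patent does not spell out its exact formulas, so the definitions below are the commonly used ones (an illustrative sketch):

```python
import numpy as np

def entropy(img):
    """Shannon entropy (bits) of the gray-level histogram of an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256) / img.size
    p = hist[hist > 0]
    return float(-(p * np.log2(p)).sum())

def spatial_frequency(img):
    """sqrt(RF^2 + CF^2): RMS of row-wise and column-wise first differences."""
    f = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(f, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(f, axis=0) ** 2))
    return float(np.hypot(rf, cf))

def average_gradient(img):
    """Mean of sqrt((dx^2 + dy^2) / 2) over the interior of the image."""
    f = img.astype(np.float64)
    dx = f[:-1, 1:] - f[:-1, :-1]
    dy = f[1:, :-1] - f[:-1, :-1]
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2)))
```

All three metrics increase with image detail, which is why higher values for the fused image in Table 1 are read as the fusion having preserved more information than either source.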
B. Comparing Fig. 5 with Fig. 6: after the pseudo-color transform, the target and background in the color image Fig. 6 are clearer than in the gray fused image Fig. 5.
C. Comparing Fig. 6 with Fig. 7: in the tracked image Fig. 7, compared with the pseudo-color image Fig. 6, the target is more distinct and prominent, so that the target can be identified well while the background remains clearly visible.

Claims (6)

1. An image fusion and target tracking system based on a multi-core DSP, characterized in that it comprises a coaxial optical platform, a communication module, an image preprocessing module, an image fusion processing module, a target enhancement processing module, a target tracking module, and a computer; the communication module, the image preprocessing module, the image fusion processing module, the target enhancement processing module, and the target tracking module are each implemented in software on one of five processor cores of the DSP;
The coaxial optical platform consists of a half-silvered mirror, a visible-light CCD, and an infrared CCD, all lying in the same plane; the half-silvered mirror separates the incident light of the target scene into visible light and infrared light; the visible-light CCD captures the separated visible image of the target scene; the infrared CCD captures the separated infrared image of the target scene;
The computer transmits the image data of the visible and infrared images captured by the visible-light CCD and the infrared CCD to the communication module, and also displays the target tracking results;
The communication module receives and stores the image data, and triggers the image preprocessing module with a trigger signal;
The image preprocessing module applies bilateral filtering to the visible and infrared images to remove image noise, and triggers the image fusion processing module with a trigger signal;
The image fusion processing module fuses the visible and infrared images, and triggers the target enhancement processing module with a trigger signal;
The target enhancement processing module applies an interval-based color-mapping pseudo-color transform to the fused image to enhance the target in the image, and triggers the target tracking module with a trigger signal;
The target tracking module marks the moving target in the image to obtain the target tracking result, triggers the communication module with a trigger signal, and sends the tracking result to the computer for display.
2. The image fusion and target tracking system based on multi-core DSP as claimed in claim 1, characterized in that incident light from the target scene strikes a half-transmitting, half-reflecting mirror at an angle of 45°; the visible-light component of the incident light passes through the mirror, while the infrared component exits after being reflected by the mirror; the visible-light CCD collects the transmitted visible light to obtain the visible-light image of the target scene, and the infrared CCD collects the reflected infrared light to obtain the infrared image of the target scene;
The computer sends the image data of the visible-light image and the infrared image collected by the visible-light CCD and the infrared CCD to the communication module;
The communication module stores the image data after receiving it; once one frame of visible-light image data and one frame of infrared image data have been received and stored, the communication module sends a trigger signal to the image preprocessing module;
After receiving the trigger signal sent by the communication module, the image preprocessing module reads the image data stored in the communication module, applies bilateral filtering to the visible-light image and the infrared image respectively to remove image noise, saves the bilaterally filtered image data, and then sends a trigger signal to the image fusion processing module;
After receiving the trigger signal sent by the image preprocessing module, the image fusion processing module reads the bilaterally filtered image data stored by the image preprocessing module, first performs image pyramid decomposition on the visible-light image and the infrared image respectively, then performs Laplacian-pyramid-based fusion to obtain the image fusing the visible-light and infrared images, saves the fused image data, and then sends a trigger signal to the target enhancement processing module;
After receiving the trigger signal sent by the image fusion processing module, the target enhancement processing module reads the fused image data stored by the image fusion processing module, applies interval-based pseudo-color mapping to the fused image, obtains and saves the pseudo-color image, thereby enhancing the targets in the fused image, and then sends a trigger signal to the target tracking module;
After receiving the trigger signal sent by the target enhancement processing module, the target tracking module reads the pseudo-color image stored by the target enhancement processing module, marks the moving targets in the pseudo-color image, obtains and saves the target tracking result, and then sends a trigger signal to the communication module;
After receiving the trigger signal sent by the target tracking module, the communication module reads the tracking result saved by the target tracking module and sends the target tracking result back to the computer.
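The trigger-signal chain of claim 2 can be sketched as follows, modeling each DSP core's module as a Python function and each trigger signal as a direct call. All function names and the stand-in per-stage operations (pass-through denoising, per-pixel maximum fusion, normalization, brightest-pixel tracking) are illustrative placeholders, not the patented algorithms.

```python
# Sketch of the claim-2 pipeline: communication -> preprocessing -> fusion
# -> enhancement -> tracking, with the result returned to the computer.

def preprocess(image):
    # Stand-in for bilateral filtering: pass the frame through unchanged.
    return image

def fuse(visible, infrared):
    # Stand-in for Laplacian-pyramid fusion: per-pixel maximum.
    return [[max(v, r) for v, r in zip(vr, ir)]
            for vr, ir in zip(visible, infrared)]

def enhance(fused):
    # Stand-in for interval-based pseudo-color mapping: normalize to [0, 1].
    return [[p / 255.0 for p in row] for row in fused]

def track(pseudo_color):
    # Stand-in for moving-target marking: report the brightest pixel (x, y).
    best = max((p, (x, y))
               for y, row in enumerate(pseudo_color)
               for x, p in enumerate(row))
    return best[1]

def run_pipeline(visible, infrared):
    # Communication module: store one frame of each image, then trigger the
    # remaining stages in order; the return value is sent back to the computer.
    v, r = preprocess(visible), preprocess(infrared)
    fused = fuse(v, r)
    pseudo = enhance(fused)
    return track(pseudo)
```

For example, `run_pipeline([[0, 10], [20, 30]], [[5, 0], [0, 200]])` marks the pixel at (1, 1), where the fused intensity is largest.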
3. The image fusion and target tracking system based on multi-core DSP as claimed in claim 1, characterized in that the image data transmitted between the computer and the communication module takes the form of the matrix φ shown in formula (1),
In formula (1), Width and Height denote the width and height of the visible-light or infrared image, and I(x, y) denotes the pixel value corresponding to pixel (x, y).
4. The image fusion and target tracking system based on multi-core DSP as claimed in claim 1, characterized in that in the image preprocessing module the same bilateral filtering method is applied to the visible-light image data and the infrared image data, with the following specific steps:
4.1 Choose a window neighborhood of size N*N, and compute the Gaussian standard deviation σ_d and the intensity standard deviation σ_r within the window;
4.2 Compute the spatial-domain kernel d(i, j, k, l) and the range kernel r(i, j, k, l), as shown in formula (2);
4.3 Compute the weight coefficient w(i, j, k, l), as shown in formula (3);
4.4 Compute the pixel value h(i, j) at point (i, j) of the bilaterally filtered visible-light or infrared image, as shown in formula (4).
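Formulas (2)-(4) are not reproduced in this text, so the following sketch assumes the standard bilateral-filter kernels: a spatial Gaussian d, a range Gaussian r, their product as the weight w, and a normalized weighted sum for h(i, j). The window size and σ values are illustrative defaults.

```python
import math

# Minimal bilateral filter over a 2-D list image, assuming the standard
# kernel forms: d = exp(-spatial_dist^2 / 2*sigma_d^2),
#               r = exp(-intensity_diff^2 / 2*sigma_r^2),  w = d * r.
def bilateral_filter(image, n=3, sigma_d=1.0, sigma_r=25.0):
    h, w = len(image), len(image[0])
    half = n // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            num, den = 0.0, 0.0
            for k in range(max(0, i - half), min(h, i + half + 1)):
                for l in range(max(0, j - half), min(w, j + half + 1)):
                    # Formula (2) analogues: spatial and range kernels.
                    d = math.exp(-((i - k) ** 2 + (j - l) ** 2)
                                 / (2 * sigma_d ** 2))
                    r = math.exp(-((image[i][j] - image[k][l]) ** 2)
                                 / (2 * sigma_r ** 2))
                    wt = d * r            # formula (3) analogue: w = d * r
                    num += image[k][l] * wt
                    den += wt
            out[i][j] = num / den         # formula (4) analogue: normalize
    return out
```

A constant image passes through unchanged, and sharp intensity edges receive near-zero range weights across the edge, which is what makes the filter edge-preserving.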
5. The image fusion and target tracking system based on multi-core DSP as claimed in claim 1, characterized in that in the image fusion processing module the same Laplacian pyramid decomposition is applied to the visible-light image and the infrared image respectively, as follows:
5.1 Label the bilaterally filtered visible-light or infrared image as the original image G_0. Down-sample G_0 by keeping every other row and column to obtain the first-layer image G_1, apply the same row-and-column down-sampling to G_1 to obtain the second-layer image G_2, and so on to obtain the l-th layer image G_l, as expressed in formula (5);
In formula (5), N is the level index of the top layer of the Gaussian pyramid, R_l and C_l are respectively the number of rows and columns of layer l of the Gaussian pyramid, and ω(m, n) is a separable two-dimensional 5*5 window function, where (m, n) is a position within the 5*5 window and m and n take values in [-2, -1, 0, 1, 2]; the expression for ω(m, n) is shown in formula (6);
5.2 Interpolate each layer G_l of the Gaussian pyramid to obtain the interpolated image G_l*, computed as shown in formula (7);
5.3 Let L_l = G_l - G_{l+1}*, as shown in formula (8). All the L_l (0 ≤ l ≤ N) in formula (8) constitute the Laplacian pyramid, where N is the level index of the top of the Laplacian pyramid and L_l is the l-th layer image of the Laplacian pyramid decomposition.
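Steps 5.1-5.3 can be sketched as below. Since formulas (5)-(8) are not reproduced in this text, the 1-D weights [1, 4, 6, 4, 1]/16 for the separable 5*5 window, the reflected border handling, and the gain of 4 in the interpolation step are all assumptions taken from the usual Burt-Adelson construction.

```python
import numpy as np

# Assumed 1-D half of the separable 5x5 window (formula (6) analogue).
W1 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0

def smooth(img):
    # Separable 5x5 convolution with reflected borders, size-preserving.
    img = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, 2, mode="reflect"), W1, "valid"), 1, img)
    img = np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, 2, mode="reflect"), W1, "valid"), 0, img)
    return img

def reduce_(img):
    # Formula (5) analogue: smooth, then keep every other row and column.
    return smooth(img)[::2, ::2]

def expand(img, shape):
    # Formula (7) analogue: zero-insert, smooth, restore energy (gain 4).
    up = np.zeros(shape)
    up[::2, ::2] = img
    return 4.0 * smooth(up)

def laplacian_pyramid(img, levels):
    g = [np.asarray(img, dtype=float)]
    for _ in range(levels):
        g.append(reduce_(g[-1]))          # Gaussian pyramid G_0 .. G_N
    # Formula (8) analogue: L_l = G_l - expand(G_{l+1}); top layer L_N = G_N.
    lap = [g[l] - expand(g[l + 1], g[l].shape) for l in range(levels)]
    lap.append(g[-1])
    return lap
```

By construction the decomposition is exactly invertible: expanding the top layer and adding each L_l back down the pyramid recovers the original image.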
The Laplacian-pyramid-based fusion process is as follows: compare the pixel values at corresponding positions of the same-sized layer images in the visible-light and infrared Laplacian pyramids, and take the larger pixel value as the pixel at the corresponding position of the fused-image Laplacian pyramid; the fusion rule is shown in formula (9),
In formula (9), L_1l is the decomposed visible-light image, L_2l is the decomposed infrared image, and L_0f is the image obtained after fusing the visible-light and infrared images.
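The per-layer selection rule of formula (9) reduces to an elementwise maximum across the two pyramids. The sketch below uses the literal "larger pixel value" rule stated in the claim; selecting by absolute value is a common variant in the literature but is not what the text says.

```python
import numpy as np

# Formula (9) analogue: at every layer, keep the larger of the visible-light
# and infrared pixel values at each position.
def fuse_pyramids(lap_visible, lap_infrared):
    return [np.maximum(lv, li) for lv, li in zip(lap_visible, lap_infrared)]
```

The fused pyramid is then collapsed (expand the top layer and add each lower layer in turn) to produce the fused image handed to the target enhancement processing module.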
6. The image fusion and target tracking system based on multi-core DSP as claimed in claim 1, characterized in that in the target enhancement processing module the specific steps for obtaining the pseudo-color-mapped image are:
6.1 Divide the pixel values of the fused image L_0f by 255 to compress the gray values of the fused image into the interval [0, 1], and denote the pixel value of L_0f at pixel (x, y) as g(x, y);
6.2 Map the pixel values g(x, y) to different colors by interval, each color being formed by combining the three primaries R, G, B in different proportions; the specific mapping operations are:
When g(i, j) ≤ 0.25, the three color components R, G, B are respectively as shown in formula (10);
When 0.25 < g(i, j) ≤ 0.375, the three color components R, G, B are respectively as shown in formula (11);
When 0.375 < g(i, j) ≤ 0.5, the three color components R, G, B are respectively as shown in formula (12);
When g(i, j) = 0.5, the three color components R, G, B are respectively as shown in formula (13);
When 0.5 < g(i, j) ≤ 0.75, the three color components R, G, B are respectively as shown in formula (14);
When 0.75 < g(i, j) ≤ 1, the three color components R, G, B are respectively as shown in formula (15);
6.3 Multiply the three-channel R, G, B values of each pixel by 255 to obtain the pixel values of the fused image L_0f after pseudo-color mapping.
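Formulas (10)-(15) are not reproduced in this text, so the per-interval R, G, B expressions in the sketch below are hypothetical (a jet-like color ramp); only the overall structure, normalize to [0, 1], map by the interval boundaries named in step 6.2, then scale by 255, follows the claim.

```python
# Interval-based pseudo-color mapping for one pixel, with HYPOTHETICAL
# per-interval color ramps standing in for formulas (10)-(15).
def pseudo_color(gray_255):
    g = gray_255 / 255.0                      # step 6.1: compress to [0, 1]
    if g <= 0.25:                             # formula (10) interval
        r, gr, b = 0.0, 4.0 * g, 1.0
    elif g <= 0.375:                          # formula (11) interval
        r, gr, b = 0.0, 1.0, 1.0 - 8.0 * (g - 0.25)
    elif g <= 0.5:                            # formulas (12)/(13) intervals
        r, gr, b = 8.0 * (g - 0.375), 1.0, 0.0
    elif g <= 0.75:                           # formula (14) interval
        r, gr, b = 1.0, 1.0 - 4.0 * (g - 0.5), 0.0
    else:                                     # formula (15) interval
        r, gr, b = 1.0, 4.0 * (g - 0.75), 4.0 * (g - 0.75)
    # Step 6.3: scale the three channels back to [0, 255].
    return tuple(round(255 * c) for c in (r, gr, b))
```

With this assumed ramp, dark pixels map toward blue and the brightest pixels toward white, so hot infrared targets stand out against the visible-light background.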
CN201410221719.8A 2014-05-23 2014-05-23 Image fusion and target tracking system based on multi-core DSP Pending CN105096285A (en)


Publications (1)

Publication Number Publication Date
CN105096285A true CN105096285A (en) 2015-11-25


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105847703A (en) * 2016-03-28 2016-08-10 联想(北京)有限公司 Image processing method and electronic device
CN106303296A (en) * 2016-08-30 2017-01-04 许昌学院 A kind of image mosaic emerging system
CN106960202A (en) * 2017-04-11 2017-07-18 广西师范大学 A kind of smiling face's recognition methods merged based on visible ray with infrared image
CN107358583A (en) * 2017-06-28 2017-11-17 深圳森阳环保材料科技有限公司 A kind of good monitoring system of monitoring performance
CN108040243A (en) * 2017-12-04 2018-05-15 南京航空航天大学 Multispectral 3-D visual endoscope device and image interfusion method
CN108520529A (en) * 2018-03-30 2018-09-11 上海交通大学 Visible light based on convolutional neural networks and infrared video method for tracking target
CN111696024A (en) * 2020-06-02 2020-09-22 武汉华景康光电科技有限公司 FPGA-based infrared image mirroring method and device
CN112750095A (en) * 2021-01-04 2021-05-04 悉地(苏州)勘察设计顾问有限公司 Infrared and visible light fusion method, traffic monitoring device and storage medium
CN112887513A (en) * 2019-11-13 2021-06-01 杭州海康威视数字技术股份有限公司 Image noise reduction method and camera
CN112907624A (en) * 2021-01-27 2021-06-04 湖北航天技术研究院总体设计所 Target positioning and tracking method and system based on multi-band information fusion
CN116543284A (en) * 2023-07-06 2023-08-04 国科天成科技股份有限公司 Visible light and infrared double-light fusion method and system based on scene class

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101714251A (en) * 2009-12-22 2010-05-26 上海电力学院 Infrared and visual pseudo-color image fusion and enhancement method
CN201927079U (en) * 2011-03-07 2011-08-10 山东电力研究院 Rapid real-time integration processing system for visible image and infrared image
CN102789640A (en) * 2012-07-16 2012-11-21 中国科学院自动化研究所 Method for fusing visible light full-color image and infrared remote sensing image
CN103177455A (en) * 2013-03-20 2013-06-26 南京理工大学 Method for realizing KLT (Karhunen Loeve Transform) moving target tracking algorithm based on multicore DSP (Digital Signal Processor)


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Qian Weixian, Bai Lianfa, Gu Guohua, Zhang Baomin: "The real-time dual band image fusion system with improved gray modulating fusion algorithm", Electronic Imaging and Multimedia Technology IV *
Xu Mengxi, Qian Weixian, Gu Guohua, Ren Jianle, Gong Zhenfei: "Infrared and visible light image fusion and colorization under a coaxial optical system", Laser & Optoelectronics Progress *
Cao Ying, Li Zhiyong, Lu Xiaopeng, Zou Mouyan: "A preprocessing algorithm for point target detection based on adaptive-neighborhood bilateral filtering", Journal of Electronics & Information Technology *
Hu Kai, Qian Weixian, Chen Qian, Gu Guohua, Ren Jianle: "Improvement and implementation of the KLT tracking algorithm based on TMS320C6678", Laser & Optoelectronics Progress *



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20151125
