CN113470055A - Image fusion processing method based on FPGA acceleration - Google Patents

Image fusion processing method based on FPGA acceleration

Info

Publication number
CN113470055A
Authority
CN
China
Prior art keywords
video image
image signal
pixel
fusion processing
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110804750.4A
Other languages
Chinese (zh)
Inventor
王金岑
郭晓丹
路家琪
徐彰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology
Priority to CN202110804750.4A
Publication of CN113470055A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image fusion processing method based on FPGA acceleration, comprising the following steps. Step one, video image signal reception: receive the synthetic vision video image signal generated by the user board and the infrared video image signal input from the night vision system. Step two, video image fusion processing: perform image fusion processing on the synthetic vision video image signal and the infrared video image signal to form an enhanced synthetic vision video image signal. Step three, video image signal display control and output: select between the synthetic vision video image signal and the enhanced synthetic vision video image signal, and output and display the selected signal. The invention overcomes the drawbacks of custom circuits as well as the limited gate count of earlier programmable devices, and greatly improves both the speed and the quality of image fusion processing.

Description

Image fusion processing method based on FPGA acceleration
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an image fusion processing method based on FPGA (field programmable gate array) acceleration.
Background
Image fusion refers to applying image processing and computer techniques to image data of the same target collected through multiple source channels, extracting the useful information of each channel to the greatest extent, and finally synthesizing a high-quality image. This improves the utilization of image information, the accuracy and reliability of computer interpretation, and the spatial and spectral resolution of the original images, which facilitates monitoring. Existing image fusion processing techniques have low processing efficiency and poor processing quality, which seriously degrades the fusion result.
Disclosure of Invention
To address the shortcomings of the prior art, the technical problem solved by the invention is to provide an image fusion processing method based on FPGA acceleration that overcomes the drawbacks of custom circuits as well as the limited gate count of earlier programmable devices, and greatly improves both the speed and the quality of image fusion processing.
In order to achieve the technical purpose, the technical scheme adopted by the invention is as follows:
An image fusion processing method based on FPGA acceleration comprises the following steps:
Step one, video image signal reception: receive the synthetic vision video image signal generated by the user board and the infrared video image signal input from the night vision system;
Step two, video image fusion processing: perform image fusion processing on the synthetic vision video image signal and the infrared video image signal to form an enhanced synthetic vision video image signal;
Step three, video image signal display control and output: select between the synthetic vision video image signal and the enhanced synthetic vision video image signal, and output and display the selected signal.
The video image fusion processing in step two comprises the following steps:
First step: process the input infrared video image signal and the synthetic vision video image signal with two separate analysis sub-modules;
Second step: collect all analysis results from the two analysis sub-modules fed by the infrared and synthetic vision video image signals, compare the results of the two modules, and create a weight for each pixel position shared by the two video streams;
Third step: the two pixels at the same position in the two video streams are combined by the merging module according to the following equation:
pixel_merge = w_visible * pixel_visible + w_infrared * pixel_infrared
where w_visible + w_infrared = 1.0
and 0 <= pixel_merge <= 255
The weights are chosen so that the best visual result from the two input streams is selected for the merged output, as illustrated by the sketch below.
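By way of illustration, the per-pixel merge can be sketched in C as follows, assuming two co-registered 8-bit frames and a per-pixel visible-light weight map produced by the analysis sub-modules; the function and buffer names are assumptions made here, not part of the patent.

#include <stdint.h>
#include <stddef.h>

/* Per-pixel merge of a visible and an infrared frame (illustrative sketch).
 * w_visible[i] is the visible-light weight in [0,1]; the infrared weight is
 * (1 - w_visible[i]), so the two weights always sum to 1.0 as in the method. */
static void merge_frames(const uint8_t *pixel_visible,
                         const uint8_t *pixel_infrared,
                         const float   *w_visible,
                         uint8_t       *pixel_merge,
                         size_t         num_pixels)
{
    for (size_t i = 0; i < num_pixels; i++) {
        float w = w_visible[i];
        float m = w * pixel_visible[i] + (1.0f - w) * pixel_infrared[i];
        /* clamp to the 0..255 range required by the method */
        if (m < 0.0f)   m = 0.0f;
        if (m > 255.0f) m = 255.0f;
        pixel_merge[i] = (uint8_t)(m + 0.5f);
    }
}

On an FPGA the same arithmetic would typically be carried out in fixed point, one pixel per clock cycle in the streaming datapath.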
The processing performed by each analysis sub-module in the first step comprises the following steps:
Step a): receive the RGB signals of the 8-bit input video streams in parallel in real time and store them in the DDR memory;
Step b): the DDR data frame acquisition module captures a frame stored in the DDR memory while the DDR previous-frame acquisition module fetches the previous frame from the DDR memory, so that the motion difference between the previous frame and the current frame can be computed by motion search; pixel-level motion then allows the position of the current video within the map to be determined more accurately (see the buffering sketch after these steps);
Step c): frame data captured by the DDR data frame acquisition module is processed by the temporal filter module, the motion search module and the edge detection module and then written back to the DDR memory; frame data fetched by the DDR previous-frame acquisition module is processed by the motion search module and then written back to the DDR memory.
The motion search module in step b) searches for the motion of each pixel relative to the previous frame through a motion search engine, which finds the motion vector according to the following procedure:
(x, y) = current frame pixel location
(xx, yy) = previous frame pixel location
(aa, bb) = SAD square block offset
SADmin = maxval;
for (xx = x - 5; xx <= x + 5; xx++)
for (yy = y - 5; yy <= y + 5; yy++)
{ Sad = 0;
for (aa = -4; aa <= 4; aa++)
for (bb = -4; bb <= 4; bb++)
Sad = Sad + abs(pixel_currframe(x + aa, y + bb) - pixel_prevframe(xx + aa, yy + bb));
if (Sad < SADmin) { SADmin = Sad; POSmin = (xx, yy); } }
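For reference, the search procedure above can be rendered as runnable C. The sketch assumes 8-bit grayscale frames stored row by row, a plus/minus 5 pixel search window and a 9x9 block as in the pseudocode; the function name, frame layout and the requirement that (x, y) lie away from the image border are assumptions made here for illustration.

#include <stdint.h>
#include <stdlib.h>
#include <limits.h>

/* Search a +/-5 pixel window in the previous frame for the 9x9 block centred
 * on (x, y) in the current frame, using the sum of absolute differences (SAD).
 * Writes the best-matching centre to *best_x/*best_y; the motion vector is
 * then (best_x - x, best_y - y). Assumes (x, y) is far enough from the border
 * that all accesses stay inside the frame. */
static void sad_motion_search(const uint8_t *curr, const uint8_t *prev,
                              int width, int x, int y,
                              int *best_x, int *best_y)
{
    long sad_min = LONG_MAX;

    for (int xx = x - 5; xx <= x + 5; xx++) {
        for (int yy = y - 5; yy <= y + 5; yy++) {
            long sad = 0;
            for (int aa = -4; aa <= 4; aa++)
                for (int bb = -4; bb <= 4; bb++)
                    sad += labs((long)curr[(y + bb) * width + (x + aa)] -
                                (long)prev[(yy + bb) * width + (xx + aa)]);
            if (sad < sad_min) {
                sad_min = sad;
                *best_x = xx;
                *best_y = yy;
            }
        }
    }
}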
The temporal filter module in step c) extracts the relevant key information from the noisy and distorted input. The temporal filter is defined as follows:
pixel_out(x, y) = pixel_temp(x, y) * w(x, y) + pixel_in(x, y) * (1 - w(x, y))
where w(x, y) is the weight of each pixel; w is dynamic and is a function of the output of the motion search engine, and the motion search result is read from the DDR through the DDR read/write interface.
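The temporal filter step can be sketched in C as follows; pixel_temp is the temporally accumulated frame and pixel_in the incoming frame. The mapping from motion magnitude to weight shown here is one plausible choice assumed for illustration, not the patent's exact rule.

#include <stdint.h>
#include <stddef.h>

/* One temporal-filter step:
 *   pixel_out = pixel_temp * w + pixel_in * (1 - w)
 * motion[i] is an assumed per-pixel motion magnitude reported by the motion
 * search engine; larger motion gives a smaller weight w, so moving areas
 * follow the new frame while static areas are smoothed. */
static void temporal_filter(const float   *pixel_temp,
                            const uint8_t *pixel_in,
                            const uint8_t *motion,
                            float         *pixel_out,
                            size_t         num_pixels)
{
    for (size_t i = 0; i < num_pixels; i++) {
        /* assumed mapping: w = 0.9 for a static pixel, falling toward 0
         * as the reported motion grows */
        float w = 0.9f / (1.0f + (float)motion[i]);
        pixel_out[i] = pixel_temp[i] * w + (float)pixel_in[i] * (1.0f - w);
    }
}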
The edge detection module in step c) extracts edge features and performs optimization and selection on them. The RGB color space is processed to generate directional color difference results, using the following conversion formula:
Pixel(i,j) = 2*red(i,j) + 3*green(i,j) + 4*blue(i,j)
The directional color difference is calculated using four 3x3 masks A, B, C and D, shown schematically below:
Mask A:          Mask B:          Mask C:          Mask D:
 0  0  0          0  4  0          4  0  0          0  0 -4
 4  0 -4          0  0  0          0  0  0          0  0  0
 0  0  0          0 -4  0          0  0 -4          4  0  0
The mathematical formulas for the whole mask process are as follows:
u1(i,j) = [2*R(i+1,j) + 3*G(i+1,j) + 4*B(i+1,j)] * A(i+1,j)
v1(i,j) = [2*R(i+1,j+2) + 3*G(i+1,j+2) + 4*B(i+1,j+2)] * A(i+1,j+2)
[first directional difference combining u1 and v1: formula given as an image in the original]
u2(i,j) = [2*R(i,j+1) + 3*G(i,j+1) + 4*B(i,j+1)] * B(i,j+1)
v2(i,j) = [2*R(i+2,j+1) + 3*G(i+2,j+1) + 4*B(i+2,j+1)] * B(i+2,j+1)
[second directional difference combining u2 and v2: formula given as an image in the original]
u3(i,j) = [2*R(i,j) + 3*G(i,j) + 4*B(i,j)] * C(i,j)
v3(i,j) = [2*R(i+2,j+2) + 3*G(i+2,j+2) + 4*B(i+2,j+2)] * C(i+2,j+2)
[third directional difference combining u3 and v3: formula given as an image in the original]
u4(i,j) = [2*R(i+2,j) + 3*G(i+2,j) + 4*B(i+2,j)] * D(i+2,j)
v4(i,j) = [2*R(i,j+2) + 3*G(i,j+2) + 4*B(i,j+2)] * D(i,j+2)
[fourth directional difference combining u4 and v4: formula given as an image in the original]
Through these calculations the module generates a contour map and weights the pixel intensity against the edge features. The edge strength and direction are then stored in the DDR for later reference.
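A C sketch of the directional mask step is given below, assuming 8-bit RGB planes. Because the exact combination formula appears only as an image in the original, the sketch assumes that each mask's response is the absolute value of its two weighted samples added together; the strongest of the four responses gives the edge strength and its index the edge direction.

#include <stdint.h>
#include <stdlib.h>

#define W 640
#define H 480

/* luminance-like conversion used by the edge detection module */
static int lum(const uint8_t *r, const uint8_t *g, const uint8_t *b, int i, int j)
{
    return 2 * r[i * W + j] + 3 * g[i * W + j] + 4 * b[i * W + j];
}

/* For each 3x3 window, apply the four directional masks A..D. Each mask
 * weights one sample by +4 and the opposite sample by -4, so the response is
 * taken here as the absolute difference of the two (assumed combination).
 * The largest response is stored as edge strength, its index as direction.
 * Output buffers are assumed zero-initialised; the last two rows and columns
 * are left untouched. */
static void edge_detect(const uint8_t *r, const uint8_t *g, const uint8_t *b,
                        uint8_t *strength, uint8_t *direction)
{
    for (int i = 0; i + 2 < H; i++) {
        for (int j = 0; j + 2 < W; j++) {
            int e[4];
            e[0] = abs(4 * lum(r, g, b, i + 1, j)     - 4 * lum(r, g, b, i + 1, j + 2)); /* mask A */
            e[1] = abs(4 * lum(r, g, b, i,     j + 1) - 4 * lum(r, g, b, i + 2, j + 1)); /* mask B */
            e[2] = abs(4 * lum(r, g, b, i,     j)     - 4 * lum(r, g, b, i + 2, j + 2)); /* mask C */
            e[3] = abs(4 * lum(r, g, b, i + 2, j)     - 4 * lum(r, g, b, i,     j + 2)); /* mask D */

            int best = 0;
            for (int k = 1; k < 4; k++)
                if (e[k] > e[best]) best = k;

            /* scale into 8 bits; the maximum possible response is 4 * 9 * 255 = 9180 */
            int s = e[best] / 36;
            strength[i * W + j]  = (uint8_t)(s > 255 ? 255 : s);
            direction[i * W + j] = (uint8_t)best;
        }
    }
}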
In addition, a deep learning module is adopted in the video image fusion processing of step two; the deep learning module uses a back-propagation neural network algorithm for pattern matching and classification.
The invention adopts an FPGA-based scheme. Because the FPGA supports DDR memory, multiple frames of video data can be stored. Multiple data frames are important for the later processing stages because the frame phases of the inputs (the visible, infrared and map frame boundaries) do not occur at the same time, so a frame synchronizer must delay the video until all data frames are aligned. Using the visible-light view as the reference video sequence, both the infrared view and the map view must be delayed or advanced by occasionally inserting or deleting frames, so that the edge of each frame appears at the same time as the visible reference frame. The deep learning module uses a back-propagation neural network algorithm for pattern matching and classification and aligns the digital data with the physical visual data captured by the camera; the GPS and vehicle position give relatively accurate orientation information, but in practice unknown factors such as vibration, measurement tolerances and data collection delays can make the overlay inaccurate, so an extension technique is needed to reach accuracy within one pixel. The deep learning module processes the captured video directly and searches it in real time for information related to the data captured in the topographic map, the satellite image and the radar obstacle map; the basic processing extracts edges from the various target objects and, using the known orientation data of each object, further adjusts the orientation of the overlaid image relative to the video image to reach the next level of accuracy. For different objects the deep learning module extracts features in different ways, finally achieving the fusion of the images.
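A simple sketch of the frame-alignment decision described above: with the visible stream as the reference, the synchronizer compares the timestamp of the next infrared (or map) frame with the visible frame boundary and decides whether to repeat a frame, drop a frame, or pass it through. The timestamps, threshold and names are assumptions made here for illustration.

#include <stdint.h>

typedef enum { SYNC_PASS, SYNC_REPEAT_PREVIOUS, SYNC_DROP } sync_action;

/* Decide how to keep a secondary stream (infrared or map) aligned with the
 * visible reference. half_period is half the frame period in the same time
 * units as the timestamps (for example microseconds). */
static sync_action align_to_reference(int64_t visible_ts,
                                      int64_t secondary_ts,
                                      int64_t half_period)
{
    int64_t lag = visible_ts - secondary_ts;

    if (lag > half_period)   /* secondary stream is behind: skip one frame */
        return SYNC_DROP;
    if (lag < -half_period)  /* secondary stream is ahead: repeat the last frame */
        return SYNC_REPEAT_PREVIOUS;
    return SYNC_PASS;        /* close enough: frame edges appear together */
}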
The invention has the advantage that it overcomes the drawbacks of custom circuits as well as the limited gate count of earlier programmable devices, and greatly improves both the speed and the quality of image fusion processing.
Drawings
FIG. 1 is a flow chart of the operation of the present invention.
Detailed Description
The following further describes embodiments of the present invention in conjunction with the attached figures:
An image fusion processing method based on FPGA acceleration comprises the following steps:
Step one, video image signal reception: receive the synthetic vision video image signal generated by the user board and the infrared video image signal input from the night vision system;
Step two, video image fusion processing: perform image fusion processing on the synthetic vision video image signal and the infrared video image signal to form an enhanced synthetic vision video image signal;
Step three, video image signal display control and output: select between the synthetic vision video image signal and the enhanced synthetic vision video image signal, and output and display the selected signal.
In an embodiment, the video image fusion processing in step two comprises the following steps:
First step: process the input infrared video image signal and the synthetic vision video image signal with two separate analysis sub-modules;
Second step: collect all analysis results from the two analysis sub-modules fed by the infrared and synthetic vision video image signals, compare the results of the two modules, and create a weight for each pixel position shared by the two video streams (one plausible weighting rule is sketched after these steps);
Third step: the two pixels at the same position in the two video streams are combined by the merging module according to the following equation:
pixel_merge = w_visible * pixel_visible + w_infrared * pixel_infrared
where w_visible + w_infrared = 1.0
and 0 <= pixel_merge <= 255
The weights are chosen so that the best visual result from the two input streams is selected for the merged output.
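One plausible way to form the per-pixel weights from the two analysis results is to favour the stream with the stronger local edge response, as in the following C sketch. This particular rule is an assumption made here for illustration, not the patent's exact weighting.

#include <stdint.h>
#include <stddef.h>

/* Build the visible-light weight for each pixel from the edge strengths
 * reported by the two analysis sub-modules. Where neither stream shows
 * structure, fall back to an even 0.5/0.5 blend. The infrared weight is
 * implicitly 1 - w_visible, so the weights always sum to 1.0. */
static void build_weights(const uint8_t *edge_visible,
                          const uint8_t *edge_infrared,
                          float         *w_visible,
                          size_t         num_pixels)
{
    for (size_t i = 0; i < num_pixels; i++) {
        int ev = edge_visible[i];
        int ei = edge_infrared[i];
        w_visible[i] = (ev + ei > 0) ? (float)ev / (float)(ev + ei) : 0.5f;
    }
}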
In an embodiment, the processing performed by each analysis sub-module in the first step comprises the following steps:
Step a): receive the RGB signals of the 8-bit input video streams in parallel in real time and store them in the DDR memory;
Step b): the DDR data frame acquisition module captures a frame stored in the DDR memory while the DDR previous-frame acquisition module fetches the previous frame from the DDR memory, so that the motion difference between the previous frame and the current frame can be computed by motion search; pixel-level motion then allows the position of the current video within the map to be determined more accurately;
Step c): frame data captured by the DDR data frame acquisition module is processed by the temporal filter module, the motion search module and the edge detection module and then written back to the DDR memory; frame data fetched by the DDR previous-frame acquisition module is processed by the motion search module and then written back to the DDR memory.
In an embodiment, the motion search module in step b) searches for the motion of each pixel relative to the previous frame through a motion search engine, which finds the motion vector according to the SAD-based search procedure given above and reproduced in claim 4.
In an embodiment, the temporal filter module in step c) extracts the relevant key information from the noisy and distorted input. The temporal filter is defined as follows:
pixel_out(x, y) = pixel_temp(x, y) * w(x, y) + pixel_in(x, y) * (1 - w(x, y))
where w(x, y) is the weight of each pixel; w is dynamic and is a function of the output of the motion search engine, and the motion search result is read from the DDR through the DDR read/write interface.
In the embodiment, the edge detection module in step c) performs edge feature extraction and optimization and selection. The RGB color space is processed to generate directional color difference results, using the following conversion formula:
Pixel(i,j) = 2*red(i,j) + 3*green(i,j) + 4*blue(i,j)
The directional color difference is calculated using four 3x3 masks A, B, C and D, shown schematically below:
Mask A:          Mask B:          Mask C:          Mask D:
 0  0  0          0  4  0          4  0  0          0  0 -4
 4  0 -4          0  0  0          0  0  0          0  0  0
 0  0  0          0 -4  0          0  0 -4          4  0  0
The mathematical formulas for the whole mask process are as follows:
u1(i,j) = [2*R(i+1,j) + 3*G(i+1,j) + 4*B(i+1,j)] * A(i+1,j)
v1(i,j) = [2*R(i+1,j+2) + 3*G(i+1,j+2) + 4*B(i+1,j+2)] * A(i+1,j+2)
[first directional difference combining u1 and v1: formula given as an image in the original]
u2(i,j) = [2*R(i,j+1) + 3*G(i,j+1) + 4*B(i,j+1)] * B(i,j+1)
v2(i,j) = [2*R(i+2,j+1) + 3*G(i+2,j+1) + 4*B(i+2,j+1)] * B(i+2,j+1)
[second directional difference combining u2 and v2: formula given as an image in the original]
u3(i,j) = [2*R(i,j) + 3*G(i,j) + 4*B(i,j)] * C(i,j)
v3(i,j) = [2*R(i+2,j+2) + 3*G(i+2,j+2) + 4*B(i+2,j+2)] * C(i+2,j+2)
[third directional difference combining u3 and v3: formula given as an image in the original]
u4(i,j) = [2*R(i+2,j) + 3*G(i+2,j) + 4*B(i+2,j)] * D(i+2,j)
v4(i,j) = [2*R(i,j+2) + 3*G(i,j+2) + 4*B(i,j+2)] * D(i,j+2)
[fourth directional difference combining u4 and v4: formula given as an image in the original]
Through these calculations the module generates a contour map and weights the pixel intensity against the edge features. The edge strength and direction are then stored in the DDR for later reference.
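A small sketch of how the contour map mentioned above could be produced and the edge results packed for storage: the edge strength is thresholded to mark contour pixels, and strength plus direction are packed into one 16-bit word per pixel before being written to the DDR-backed buffer. The packing format and threshold are assumptions made here for illustration.

#include <stdint.h>
#include <stddef.h>

#define CONTOUR_THRESHOLD 32   /* assumed threshold on the 8-bit edge strength */

/* Mark contour pixels and pack (strength, direction) into 16-bit words,
 * as they might be laid out in the DDR for later reference. */
static void build_contour_and_pack(const uint8_t *strength,
                                   const uint8_t *direction,   /* 0..3 */
                                   uint8_t       *contour,     /* 1 = edge pixel */
                                   uint16_t      *ddr_words,
                                   size_t         num_pixels)
{
    for (size_t i = 0; i < num_pixels; i++) {
        contour[i]   = (strength[i] >= CONTOUR_THRESHOLD) ? 1 : 0;
        ddr_words[i] = (uint16_t)((direction[i] << 8) | strength[i]);
    }
}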
In the embodiment, a deep learning module is adopted in the video image fusion processing of step two, and the deep learning module uses a back-propagation neural network algorithm for pattern matching and classification.
The invention adopts an FPGA-based scheme. Because the FPGA supports DDR memory, multiple frames of video data can be stored. Multiple data frames are important for the later processing stages because the frame phases of the inputs (the visible, infrared and map frame boundaries) do not occur at the same time, so a frame synchronizer must delay the video until all data frames are aligned. Using the visible-light view as the reference video sequence, both the infrared view and the map view must be delayed or advanced by occasionally inserting or deleting frames, so that the edge of each frame appears at the same time as the visible reference frame. The deep learning module uses a back-propagation neural network algorithm for pattern matching and classification and aligns the digital data with the physical visual data captured by the camera; the GPS and vehicle position give relatively accurate orientation information, but in practice unknown factors such as vibration, measurement tolerances and data collection delays can make the overlay inaccurate, so an extension technique is needed to reach accuracy within one pixel. The deep learning module processes the captured video directly and searches it in real time for information related to the data captured in the topographic map, the satellite image and the radar obstacle map; the basic processing extracts edges from the various target objects and, using the known orientation data of each object, further adjusts the orientation of the overlaid image relative to the video image to reach the next level of accuracy. For different objects the deep learning module extracts features in different ways, finally achieving the fusion of the images.
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiment; all technical solutions falling under the idea of the present invention belong to its protection scope. It should be noted that modifications and refinements made by those skilled in the art without departing from the principle of the invention are also regarded as falling within the protection scope of the invention.

Claims (5)

1. An image fusion processing method based on FPGA acceleration, characterized by comprising the following steps:
step one, video image signal reception: receiving the synthetic vision video image signal generated by the user board and the infrared video image signal input from the night vision system;
step two, video image fusion processing: performing image fusion processing on the synthetic vision video image signal and the infrared video image signal to form an enhanced synthetic vision video image signal;
step three, video image signal display control and output: selecting between the synthetic vision video image signal and the enhanced synthetic vision video image signal, and outputting and displaying the selected signal.
2. The image fusion processing method based on FPGA acceleration according to claim 1, characterized in that the video image fusion processing in step two comprises the following steps:
first step: processing the input infrared video image signal and the synthetic vision video image signal with two separate analysis sub-modules;
second step: collecting all analysis results from the two analysis sub-modules fed by the infrared and synthetic vision video image signals, comparing the results of the two modules, and creating a weight for each pixel position shared by the two video streams;
third step: combining, by the merging module, the two pixels at the same position in the two video streams according to the following equation:
pixel_merge = w_visible * pixel_visible + w_infrared * pixel_infrared
where w_visible + w_infrared = 1.0
and 0 <= pixel_merge <= 255
the weights being chosen so that the best visual result from the two input streams is selected for the merged output.
3. The image fusion processing method based on FPGA acceleration according to claim 2, characterized in that the analysis sub-module processing in the first step comprises the following steps:
step a): receiving the RGB signals of the 8-bit input video streams in parallel in real time and storing them in the DDR memory;
step b): capturing, by the DDR data frame acquisition module, a frame stored in the DDR memory while the DDR previous-frame acquisition module fetches the previous frame from the DDR memory, so that the motion difference between the previous frame and the current frame can be computed by motion search, and pixel-level motion allows the position of the current video within the map to be determined more accurately;
step c): processing frame data captured by the DDR data frame acquisition module with the temporal filter module, the motion search module and the edge detection module and writing it back to the DDR memory, and processing frame data fetched by the DDR previous-frame acquisition module with the motion search module and writing it back to the DDR memory.
4. The image fusion processing method based on FPGA acceleration according to claim 3, characterized in that the motion search module in step b) searches for the motion of each pixel relative to the previous frame through a motion search engine, which finds the motion vector according to the following procedure:
(x, y) = current frame pixel location
(xx, yy) = previous frame pixel location
(aa, bb) = SAD square block offset
SADmin = maxval;
for (xx = x - 5; xx <= x + 5; xx++)
for (yy = y - 5; yy <= y + 5; yy++)
{ Sad = 0;
for (aa = -4; aa <= 4; aa++)
for (bb = -4; bb <= 4; bb++)
Sad = Sad + abs(pixel_currframe(x + aa, y + bb) - pixel_prevframe(xx + aa, yy + bb));
if (Sad < SADmin) { SADmin = Sad; POSmin = (xx, yy); } }.
5. The image fusion processing method based on FPGA acceleration according to claim 1, characterized in that a deep learning module is adopted in the video image fusion processing of step two, and the deep learning module uses a back-propagation neural network algorithm for pattern matching and classification.
CN202110804750.4A 2021-07-16 2021-07-16 Image fusion processing method based on FPGA acceleration Pending CN113470055A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110804750.4A CN113470055A (en) 2021-07-16 2021-07-16 Image fusion processing method based on FPGA acceleration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110804750.4A CN113470055A (en) 2021-07-16 2021-07-16 Image fusion processing method based on FPGA acceleration

Publications (1)

Publication Number Publication Date
CN113470055A true CN113470055A (en) 2021-10-01

Family

ID=77880567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110804750.4A Pending CN113470055A (en) 2021-07-16 2021-07-16 Image fusion processing method based on FPGA acceleration

Country Status (1)

Country Link
CN (1) CN113470055A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101379827A (en) * 2006-01-31 2009-03-04 汤姆森许可贸易公司 Methods and apparatus for edge-based spatio-temporal filtering
US9726486B1 (en) * 2011-07-29 2017-08-08 Rockwell Collins, Inc. System and method for merging enhanced vision data with a synthetic vision data
US8976042B1 (en) * 2012-09-24 2015-03-10 Rockwell Collins, Inc. Image combining system, device, and method of multiple vision sources
CN103593494A (en) * 2013-07-19 2014-02-19 北京赛四达科技股份有限公司 System and method for generating visible light and infrared images in real time
US20150045994A1 (en) * 2013-08-08 2015-02-12 Honeywell International Inc. System and method for highlighting an area encompassing an aircraft that is free of hazards
CN205898301U (en) * 2016-04-12 2017-01-18 金陵科技学院 System on chip/SOC thermal infrared imager
CN108270968A (en) * 2017-12-30 2018-07-10 广东金泽润技术有限公司 A kind of infrared and visual image fusion detection system and method
CN109389617A (en) * 2018-08-27 2019-02-26 深圳大学 A kind of motion estimate based on piece heterogeneous system and method for tracing and system
CN109919887A (en) * 2019-02-25 2019-06-21 中国人民解放军陆军工程大学 Unsupervised image fusion method based on deep learning
CN111223191A (en) * 2020-01-02 2020-06-02 中国航空工业集团公司西安航空计算技术研究所 Large-scale scene infrared imaging real-time simulation method for airborne enhanced synthetic vision system
CN112651469A (en) * 2021-01-22 2021-04-13 西安培华学院 Infrared and visible light image fusion method and system

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
AHMED F. FADHIL等: "Fusion of Enhanced and Synthetic Vision System Images for Runway and Horizon Detection", 《SENSORS》, vol. 19, pages 1 - 17 *
RANDALL E. BAILEY等: "Crew and Display Concepts Evaluation for Synthetic / Enhanced Vision Systems", 《ENHANCED AND SYNTHETIC VISION》, pages 1 - 18 *
叶亚洲: "Dynamic three-dimensional complex scene perception and enhanced synthetic vision technology", China Masters' Theses Full-text Database, Information Science and Technology, no. 02, pages 138 - 1047 *
张伟: "New-generation cockpit display technology for civil aircraft", Civil Aircraft Design and Research, pages 4 - 7 *
王佳慧 et al.: "Research on Laplacian pyramid transform image fusion for HVS", Electronics Optics & Control, vol. 26, no. 1, pages 77 - 80 *
高成志 et al.: "Application of head-worn displays on civil aircraft", Henan Science and Technology, no. 10, pages 98 - 101 *

Similar Documents

Publication Publication Date Title
CN112435325B (en) VI-SLAM and depth estimation network-based unmanned aerial vehicle scene density reconstruction method
CN106780620B (en) Table tennis motion trail identification, positioning and tracking system and method
CN105933678B (en) More focal length lens linkage imaging device based on Multiobjective Intelligent tracking
Toet et al. Merging thermal and visual images by a contrast pyramid
CN109636771B (en) Flight target detection method and system based on image processing
CN107767400B (en) Remote sensing image sequence moving target detection method based on hierarchical significance analysis
US20080181487A1 (en) Method and apparatus for automatic registration and visualization of occluded targets using ladar data
CN109460764B (en) Satellite video ship monitoring method combining brightness characteristics and improved interframe difference method
CN110782477A (en) Moving target rapid detection method based on sequence image and computer vision system
CN109376641B (en) Moving vehicle detection method based on unmanned aerial vehicle aerial video
CN112801870B (en) Image splicing method based on grid optimization, splicing system and readable storage medium
CN113160106B (en) Infrared target detection method and device, electronic equipment and storage medium
CN109446978B (en) Method for tracking moving target of airplane based on staring satellite complex scene
CN106257537B (en) A kind of spatial depth extracting method based on field information
CN112907493A (en) Multi-source battlefield image rapid mosaic fusion algorithm under unmanned aerial vehicle swarm cooperative reconnaissance
CN111833384B (en) Method and device for rapidly registering visible light and infrared images
CN113658329A (en) Building object frame model fine three-dimensional modeling method and system
CN106910178B (en) Multi-angle SAR image fusion method based on tone statistical characteristic classification
CN110430400B (en) Ground plane area detection method of binocular movable camera
CN113470055A (en) Image fusion processing method based on FPGA acceleration
CN115830567A (en) Road target fusion sensing method and system under low-light condition
CN115830064B (en) Weak and small target tracking method and device based on infrared pulse signals
CN116777953A (en) Remote sensing image target tracking method based on multi-scale feature aggregation enhancement
CN111583315A (en) Novel visible light image and infrared image registration method and device
CN115482257A (en) Motion estimation method integrating deep learning characteristic optical flow and binocular vision

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination