CN111192229B - Airborne multi-mode video picture enhancement display method and system - Google Patents

Airborne multi-mode video picture enhancement display method and system

Info

Publication number
CN111192229B
CN111192229B (application CN202010003148.6A)
Authority
CN
China
Prior art keywords
video frame
image
color video
mode
color
Prior art date
Legal status
Active
Application number
CN202010003148.6A
Other languages
Chinese (zh)
Other versions
CN111192229A (en)
Inventor
程岳
李亚晖
韩伟
文鹏程
刘作龙
余冠锋
Current Assignee
Xian Aeronautics Computing Technique Research Institute of AVIC
Original Assignee
Xian Aeronautics Computing Technique Research Institute of AVIC
Priority date
Filing date
Publication date
Application filed by Xian Aeronautics Computing Technique Research Institute of AVIC
Priority to CN202010003148.6A
Publication of CN111192229A
Application granted
Publication of CN111192229B

Links

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T3/147 — Transformations for image registration using affine transformations
    • G06T5/90 — Dynamic range modification of images or parts thereof
    • G06T2207/10016 — Video; image sequence
    • G06T2207/10024 — Color image
    • G06T2207/20221 — Image fusion; image merging
    • Y02T10/40 — Engine management systems (climate-change mitigation tagging)

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention belongs to the field of airborne graphics and image processing, and relates to an airborne multi-mode video picture enhancement display method and system. The method matches the real-time multi-mode video against a pre-recorded color video at sub-pixel precision, fuses the two video sources in the image luminance space with weights derived from saliency information, and then transfers the color video's information to the multi-mode video in the hue and saturation spaces, achieving an organic fusion of the two data types. The fused video retains the real-time character of the multi-mode video while adding the rich color and texture information of the color video suited to pilot observation, enhancing the readability of airborne multi-mode video pictures. The invention can improve a pilot's spatial situational awareness of airport runways and obstacles under low-visibility conditions, thereby reducing typical accidents such as controlled flight into terrain and runway incursion during approach and landing and improving flight safety.

Description

Airborne multi-mode video picture enhancement display method and system
Technical Field
The invention belongs to the field of airborne graphics and image processing, and relates to an airborne multi-mode video picture enhancement display method.
Background
A Comprehensive Vision System (CVS) provides the pilot with equivalent visual video picture information of the airport runway, dangerous terrain, and obstacles during the approach phase by fusing multi-mode video with a three-dimensional digital map. It combines the large field of view, high resolution, and true color of a Synthetic Vision System (SVS) with the real-time multi-mode (long-wave infrared, short-wave infrared, millimeter-wave) video pictures of an Enhanced Vision System (EVS). The virtual-real fused real-time visual information of the CVS significantly improves the pilot's scene awareness and enhances flight safety.
In a traditional comprehensive vision system, inherent navigation-parameter and sensor-calibration errors leave the synthetic vision picture mismatched with the enhanced vision picture to some degree, so the multi-mode video picture is generally embedded in the synthetic vision virtual picture in picture-in-picture mode. As a result, in the output picture of the comprehensive vision system, the multi-mode video portion presents essentially monochromatic real-time video sensor information and lacks recognizable true-color and detail information.
To enhance the display content of the multi-mode video picture in the comprehensive vision system, runway and obstacle information is mainly marked with virtually generated wireframes, text, and color blocks. However, because of the same geometric errors, the marking information computed by the comprehensive vision system deviates from the actual position, which confuses the pilot.
Disclosure of Invention
The invention provides an airborne multi-mode video picture enhancement display method, aiming to improve the readability of comprehensive-vision multi-mode video.
The technical scheme of the invention is as follows:
the on-board multi-mode video picture enhancement display method comprises the following steps:
acquiring a clear color video recorded in advance during an approach, and the real-time multi-mode video;
according to the current airborne positioning information and flight attitude information, extracting the corresponding pre-recorded color video frame, and translating and scaling it to the scale and position of the current multi-mode video frame for rough registration;
taking the color video frame as the floating image and the multi-mode video frame as the fixed image, geometrically transforming the floating image through parameter optimization of an error energy function established over the affine transformation between the floating image and the fixed image, so as to achieve optimized registration (a sub-pixel registration relation with the fixed image);
performing HSV color space decomposition on the registered color video frame to obtain the decomposed hue, saturation, and luminance components, and fusing the luminance component of the color video frame with the multi-mode video frame;
and merging the fused luminance component with the hue and saturation components of the color video frame, and outputting the fusion result.
Optionally, the on-board positioning information includes longitude, latitude, and altitude; the flight attitude information includes pitch, roll, and yaw data.
Optionally, the rough registration specifically uses the current aircraft's longitude, latitude, altitude, pitch, roll, and yaw data output by the integrated navigation system to locate and query, in sequence, the position of a pre-recorded color video frame, and then scales and translates the color video frame using the camera intrinsic parameters so that it is roughly registered with the current real-time multi-mode video frame picture.
Optionally, the optimized registration specifically performs iterative optimization with the Normalized Total Gradient (NTG) as the error energy function (optimization objective) of the affine transformation and outputs the accurate geometric transformation relation between the color video frame and the real-time multi-mode video frame; the color video frame is then geometrically transformed with the optimized affine transformation parameters so that sub-pixel-level registration accuracy with the real-time multi-mode video frame is obtained.
Optionally, the image fusion specifically adopts a saliency-based fusion method: a Laplacian operator is applied pixel by pixel to the luminance component image of the color video frame and to the multi-mode video frame to obtain initial saliency images; the initial saliency images are guided-filtered to output smooth saliency images; and, taking the saliency values as weights, the luminance component of the color video frame and the multi-mode video frame are weighted-averaged pixel by pixel to output the fused luminance component.
Correspondingly, the invention also provides an airborne multi-mode video picture enhancement display system, comprising:
a video acquisition module for acquiring the clear color video recorded in advance during an approach and the real-time multi-mode video;
an image registration module for extracting the corresponding pre-recorded color video frame according to the current airborne positioning information and flight attitude information, translating and scaling it to the scale and position of the current multi-mode video frame for rough registration, and then taking the color video frame as the floating image and the multi-mode video frame as the fixed image and geometrically transforming the floating image according to an error energy function established over the affine transformation between them, achieving optimized registration (a sub-pixel registration relation with the fixed image);
a luminance component fusion module for performing HSV color space decomposition on the registered color video frame to obtain the decomposed hue, saturation, and luminance components, and fusing the luminance component of the color video frame with the multi-mode video frame;
and an image merging output module for merging the fused luminance component with the hue and saturation components of the color video frame and outputting the merged result.
Correspondingly, the invention also provides an onboard device which comprises a processor and a program memory, wherein the program stored in the program memory is loaded by the processor to execute the onboard multi-mode video picture enhancement display method.
The invention has the following advantages:
By registering and fusing the real-time multi-mode video picture with the pre-recorded clear color video picture, the invention enhances the color and texture information of the image and outputs an enhanced video picture with true color and texture. It can improve a pilot's spatial situational awareness of airport runways and obstacles under low-visibility conditions (including haze, rain and snow, dust, and night), thereby reducing typical accidents such as controlled flight into terrain and runway incursion during approach and landing and improving flight safety.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
Detailed Description
The invention is described in detail below with reference to the drawings and examples.
To meet the requirements of sub-pixel registration accuracy and a multi-mode fusion output picture with enhanced color texture, the airborne multi-mode video picture enhancement display method provided by this embodiment mainly comprises two parts, accurate registration and weighted fusion, as shown in Fig. 1.
In the precise registration section:
the clear color video frames captured during the pre-recorded approach are first queried using on-board GPS information (longitude, latitude, altitude), flight attitude information (pitch, roll, yaw). The approach to flight from 200 feet to 100 feet is essentially a fixed line flight with the camera external parameters acquired by the GPS and inertial navigation devices locating pre-recorded color video frames consistent with the current multi-modal video frames. Meanwhile, the color video frame can be horizontally scaled to the scale and the position of the multi-mode image through the camera internal parameters corresponding to the color video frame and the camera internal parameters corresponding to the multi-mode video so as to obtain a rough registration effect. At this time, the positional deviation between the multi-mode image and the color image is small, and affine transformation can be approximated.
Secondly, the color video frame is taken as the floating image $f$ and the multi-mode video frame as the fixed image $f_R$, and an error energy function is established for the affine transformation between the floating image and the fixed image. This embodiment uses the Normalized Total Gradient (NTG) as that error energy function, i.e.
$$ \mathrm{NTG}(f, f_R) = \frac{\left\| \nabla (f - f_R) \right\|_1}{\left\| \nabla f \right\|_1 + \left\| \nabla f_R \right\|_1} $$
where $\nabla$ is the gradient operator and $\left\| \cdot \right\|_1$ is the L1 norm.
The error energy function is solved by iterative optimization, and the affine transformation parameters that minimize it give the geometric transformation from the floating image to the fixed image. Finally, a sub-pixel registration relation with the fixed image is obtained by geometrically transforming the floating image.
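As an illustrative sketch only (not part of the claimed method), the NTG energy and the iterative optimization of the six affine parameters can be written with NumPy and SciPy as follows; the function names and the choice of Powell's derivative-free method are assumptions for illustration:

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize

def ntg(f, f_r):
    """Normalized Total Gradient between two single-channel float images."""
    def l1_grad(img):
        gy, gx = np.gradient(img)
        return np.abs(gy).sum() + np.abs(gx).sum()
    return l1_grad(f - f_r) / (l1_grad(f) + l1_grad(f_r) + 1e-12)

def register_affine(floating, fixed):
    """Iteratively minimize the NTG energy over the six affine parameters.
    Coarse registration is assumed done, so the identity is a good start."""
    def energy(p):
        a, b, tx, c, d, ty = p
        warped = affine_transform(floating, [[a, b], [c, d]],
                                  offset=[tx, ty], order=1)
        return ntg(warped, fixed)
    p0 = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0])   # identity transform
    return minimize(energy, p0, method="Powell").x
```

Because the images are already coarsely aligned, the optimizer starts near the energy minimum, which matches the fast-convergence observation in step 3 below.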
In the weighted fusion part:
First, HSV color space decomposition is performed on the registered color video frame to obtain the decomposed hue, saturation, and luminance image components. Because the multi-mode video frame has only a luminance component, the luminance component of the color video frame is fused with the multi-mode video frame. The fusion is saliency based: an initial image saliency value is extracted with the Laplacian operator, the initial saliency map is smoothed with a guided filter, and the result is output as the fusion weights. The luminance components of the color image and the multi-mode image are then weighted-averaged pixel by pixel with the saliency weights, and the luminance fusion result is output. Finally, the fused luminance component is merged with the hue and saturation components of the original color video frame, and the fusion result is output.
Because the color video is a clear video shot in sunny weather, it has rich color and texture and high recognizability. Through this migration of color and texture, the multi-mode video is markedly enhanced, and the output enhanced picture has better readability.
As shown in fig. 1, the specific implementation steps of this embodiment are as follows:
1) Acquire the clear color video recorded in advance during approach and the real-time multi-mode video.
2) Extract the corresponding pre-recorded color video frame according to the current airborne positioning information and flight attitude information, and translate and scale it to the scale and position of the current multi-mode video frame for rough registration. Specifically: position and attitude data such as the aircraft's longitude, latitude, altitude, pitch, roll, and yaw are read from the airborne integrated navigation equipment. First, the longitude, latitude, and altitude data are used to query the navigation data recorded with the color video, a time point with matching longitude, latitude, and altitude is selected, and a window is expanded around that time point; the pitch, roll, and yaw data are then matched against the attitude data of the color video to locate the exact time point. Once the matching time point is located, the corresponding color video frame is captured as the fusion frame to be matched. The camera intrinsics of the color camera and the multi-mode camera, including focal length, principal point, and distortion parameters, are read, and the color frame is scaled and translated so that its focal length and principal point coincide with those of the multi-mode video. The color video frame is now coarsely registered with the multi-mode video frame.
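A minimal sketch of this coarse-registration step, under the assumption of a pinhole model with distortion ignored; the (N, 6) navigation-log layout and the helper names `locate_frame` and `coarse_register` are illustrative, not from the patent:

```python
import numpy as np
from scipy.ndimage import affine_transform

def locate_frame(nav_log, current):
    """Two-stage query: shortlist recorded frames by position (longitude,
    latitude, altitude), then pick the one whose attitude (pitch, roll,
    yaw) best matches.  `nav_log` is an (N, 6) array, one row per recorded
    color video frame; `current` is the 6-vector from the navigation system."""
    pos_err = np.linalg.norm(nav_log[:, :3] - current[:3], axis=1)
    window = np.argsort(pos_err)[:10]     # small window around the best position match
    att_err = np.linalg.norm(nav_log[window, 3:] - current[3:], axis=1)
    return window[np.argmin(att_err)]

def coarse_register(color_frame, f_color, c_color, f_mm, c_mm):
    """Scale by the focal-length ratio and translate so the color frame's
    principal point lands on the multi-mode camera's principal point
    (forward map: out = s * in + t).  Distortion is ignored in this sketch."""
    s = f_mm / f_color
    tx = c_mm[0] - s * c_color[0]         # x = column offset
    ty = c_mm[1] - s * c_color[1]         # y = row offset
    # affine_transform expects the inverse map (output -> input), (row, col) order
    return affine_transform(color_frame, [[1.0 / s, 0.0], [0.0, 1.0 / s]],
                            offset=[-ty / s, -tx / s], order=1)
```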
3) Take the color video frame as the floating image and the multi-mode video frame as the fixed image, and geometrically transform the floating image through parameter optimization of the error energy function established over the affine transformation between them, achieving optimized registration. Specifically: since the coarsely registered color video frame and the multi-mode video frame approximately satisfy an affine transformation relation, the affine parameters between them are taken as the optimization object and the error energy constructed from the normalized total gradient as the objective, and iterative optimization yields accurate affine transformation parameters. Because the multi-mode video frame and the color video frame are already initially registered, the optimization converges quickly and outputs sub-pixel-level registration parameters. Applying the geometric transformation to the color video frame yields the registered color video frame.
4) Perform HSV color space decomposition on the registered color video frame to obtain the decomposed hue, saturation, and luminance components, and fuse the luminance component of the color video frame with the multi-mode video frame. Specifically: hue, saturation, and luminance decomposition is first performed on the geometrically transformed color video frame, and the single-channel color-frame luminance image and the single-channel multi-mode image are extracted for fusion. The Laplacian operator is applied over the color-frame luminance image and the multi-mode image to obtain rough saliency images. With the color-frame luminance image as the guide image, its saliency image is guided-filtered and the smoothed color-frame saliency image is output; with the multi-mode video frame as the guide image, its saliency image is guided-filtered and the smoothed multi-mode saliency image is output. The two smoothed saliency images are then normalized into weights, pixels with larger saliency receiving larger weights. Finally, the color-frame luminance image and the multi-mode video frame are weighted-averaged with the pixel-wise weights to obtain the fused luminance image component.
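The saliency-based luminance fusion of this step can be sketched as below; the minimal gray-scale guided filter follows He et al.'s box-filter formulation, and the radius and epsilon values are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter, laplace

def guided_filter(guide, src, r=8, eps=1e-3):
    """Minimal gray-scale guided filter (He et al.); r is the box radius."""
    size = 2 * r + 1
    mean_i = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    var_i = uniform_filter(guide * guide, size) - mean_i * mean_i
    cov_ip = uniform_filter(guide * src, size) - mean_i * mean_p
    a = cov_ip / (var_i + eps)
    b = mean_p - a * mean_i
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def fuse_luminance(v_color, v_mm):
    """Saliency-weighted fusion of the color frame's luminance (V) channel
    with the single-channel multi-mode frame; both are floats in [0, 1]."""
    s_color = np.abs(laplace(v_color))        # initial saliency, pixel by pixel
    s_mm = np.abs(laplace(v_mm))
    s_color = guided_filter(v_color, s_color) # smooth, guided by the luminance image
    s_mm = guided_filter(v_mm, s_mm)          # smooth, guided by the multi-mode image
    w = np.clip(s_color / (s_color + s_mm + 1e-12), 0.0, 1.0)  # normalized weights
    return w * v_color + (1.0 - w) * v_mm
```

The clip keeps the weights a convex combination even when the guided filter slightly undershoots, so the fused luminance always lies between the two inputs pixel-wise.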
5) Merge the fused luminance component with the hue and saturation components of the color video frame, and output the fused video frame.
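A sketch of this final merge, assuming the hue, saturation, and fused luminance planes are already available as float arrays in [0, 1]; the per-pixel stdlib `colorsys` conversion is chosen only to keep the sketch self-contained:

```python
import colorsys
import numpy as np

def merge_hsv(h, s, v_fused):
    """Recombine the untouched hue and saturation planes of the registered
    color frame with the fused luminance plane, then convert back to RGB
    for display.  All planes are float arrays in [0, 1]."""
    out = np.empty(h.shape + (3,))
    for idx in np.ndindex(h.shape):   # per-pixel conversion; fine for a sketch
        out[idx] = colorsys.hsv_to_rgb(h[idx], s[idx],
                                       float(np.clip(v_fused[idx], 0.0, 1.0)))
    return out
```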

Claims (6)

1. An on-board multi-mode video picture enhancement display method is characterized by comprising the following steps:
acquiring a clear color video recorded in advance during an approach, and the real-time multi-mode video;
according to the current airborne positioning information and flight attitude information, extracting the corresponding pre-recorded color video frame, and translating and scaling it to the scale and position of the current multi-mode video frame for rough registration; the rough registration specifically uses the current aircraft's longitude, latitude, altitude, pitch, roll, and yaw data output by the integrated navigation system to locate and query in sequence the position of the pre-recorded color video frame, and scales and translates the color video frame using the camera intrinsic parameters so that it is roughly registered with the current real-time multi-mode video frame picture;
taking the color video frame as the floating image and the multi-mode video frame as the fixed image, geometrically transforming the floating image through parameter optimization of an error energy function established over the affine transformation between the floating image and the fixed image, so as to achieve optimized registration;
performing HSV color space decomposition on the registered color video frame to obtain the decomposed hue, saturation, and luminance components, and fusing the luminance component of the color video frame with the multi-mode video frame; the image fusion specifically adopts a saliency-based fusion method, in which a Laplacian operator is applied pixel by pixel to the luminance component image of the color video frame and to the multi-mode video frame to obtain initial saliency images, the initial saliency images are guided-filtered to output smooth saliency images, and, taking the saliency values as weights, the luminance component of the color video frame and the multi-mode video frame are weighted-averaged pixel by pixel to output the fused luminance component;
and merging the fused luminance component with the hue and saturation components of the color video frame, and outputting the fusion result.
2. The on-board multi-modal video picture enhancement display method of claim 1, wherein: the on-board positioning information includes longitude, latitude and altitude; the flight attitude information includes pitch, roll, and yaw data.
3. The on-board multi-modal video picture enhancement display method of claim 1, wherein the optimized registration specifically performs iterative optimization with the Normalized Total Gradient (NTG) as the error energy function of the affine transformation and outputs the accurate geometric transformation relation between the color video frame and the real-time multi-mode video frame; the color video frame is geometrically transformed with the optimized affine transformation parameters so that sub-pixel-level registration accuracy with the real-time multi-mode video frame is obtained.
4. An on-board multi-modal video picture enhancement display system, comprising:
the video acquisition module, used for acquiring the clear color video recorded in advance during an approach and the real-time multi-mode video;
the image registration module, used for extracting the corresponding pre-recorded color video frame according to the current airborne positioning information and flight attitude information, translating and scaling it to the scale and position of the current multi-mode video frame for rough registration, and then taking the color video frame as the floating image and the multi-mode video frame as the fixed image and geometrically transforming the floating image according to an error energy function established over the affine transformation between them, so as to achieve optimized registration;
the rough registration specifically uses the current aircraft's longitude, latitude, altitude, pitch, roll, and yaw data output by the integrated navigation system to locate and query in sequence the position of the pre-recorded color video frame, and scales and translates the color video frame using the camera intrinsic parameters so that it is roughly registered with the current real-time multi-mode video frame picture;
the luminance component fusion module, used for performing HSV color space decomposition on the registered color video frame to obtain the decomposed hue, saturation, and luminance components, and fusing the luminance component of the color video frame with the multi-mode video frame; the image fusion specifically adopts a saliency-based fusion method, in which a Laplacian operator is applied pixel by pixel to the luminance component image of the color video frame and to the multi-mode video frame to obtain initial saliency images, the initial saliency images are guided-filtered to output smooth saliency images, and, taking the saliency values as weights, the luminance component of the color video frame and the multi-mode video frame are weighted-averaged pixel by pixel to output the fused luminance component;
and the image merging output module, used for merging the fused luminance component with the hue and saturation components of the color video frame and outputting the merged result.
5. The on-board multi-modality video picture enhancement display system of claim 4, wherein the optimized registration specifically performs iterative optimization with the Normalized Total Gradient (NTG) as the error energy function of the affine transformation and outputs the accurate geometric transformation relation between the color video frame and the real-time multi-mode video frame; the color video frame is geometrically transformed with the optimized affine transformation parameters so that sub-pixel-level registration accuracy with the real-time multi-mode video frame is obtained.
6. An on-board device comprising a processor and a program memory, wherein the program memory stores a program that, when loaded by the processor, performs the on-board multi-modal video picture enhancement display method of claim 1.
CN202010003148.6A 2020-01-02 2020-01-02 Airborne multi-mode video picture enhancement display method and system Active CN111192229B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010003148.6A CN111192229B (en) 2020-01-02 2020-01-02 Airborne multi-mode video picture enhancement display method and system


Publications (2)

Publication Number Publication Date
CN111192229A (en) 2020-05-22
CN111192229B (en) 2023-10-13

Family

ID=70709781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010003148.6A Active CN111192229B (en) 2020-01-02 2020-01-02 Airborne multi-mode video picture enhancement display method and system

Country Status (1)

Country Link
CN (1) CN111192229B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111145362B (en) * 2020-01-02 2023-05-09 中国航空工业集团公司西安航空计算技术研究所 Virtual-real fusion display method and system for airborne comprehensive vision system
CN112419211B (en) * 2020-09-29 2024-02-02 西安应用光学研究所 Night vision system image enhancement method based on synthetic vision

Citations (8)

Publication number Priority date Publication date Assignee Title
CN102231206A (en) * 2011-07-14 2011-11-02 浙江理工大学 Colorized night vision image brightness enhancement method applicable to automotive assisted driving system
CN104469155A (en) * 2014-12-04 2015-03-25 中国航空工业集团公司第六三一研究所 On-board figure and image virtual-real superposition method
US9726486B1 (en) * 2011-07-29 2017-08-08 Rockwell Collins, Inc. System and method for merging enhanced vision data with a synthetic vision data
WO2018076732A1 (en) * 2016-10-31 2018-05-03 广州飒特红外股份有限公司 Method and apparatus for merging infrared image and visible light image
CN109492714A (en) * 2018-12-29 2019-03-19 同方威视技术股份有限公司 Image processing apparatus and its method
CN109509164A (en) * 2018-09-28 2019-03-22 洛阳师范学院 A kind of Multisensor Image Fusion Scheme and system based on GDGF
CN109547710A (en) * 2018-10-10 2019-03-29 中国航空工业集团公司洛阳电光设备研究所 A kind of enhancing what comes into a driver's and Synthetic vision merge implementation method
CN110443776A (en) * 2019-08-07 2019-11-12 中国南方电网有限责任公司超高压输电公司天生桥局 A kind of Registration of Measuring Data fusion method based on unmanned plane gondola


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A prototype of Enhanced Synthetic Vision System using short-wave infrared;Yue Cheng;《2018 IEEE/AIAA 37TH DIGITAL AVIONICS SYSTEMS CONFERENCE (DASC)》;1071-1077 *
Infrared and visible image fusion with the use of multi-scale edge-preserving decomposition and guided image filter;Wei Gan;《Infrared Physics & Technology》;37-51 *
Normalized Total Gradient: A New Measure for Multispectral Image Registration;Shu-Jie Chen;《IEEE Transactions on Image Processing》;2017;1297-1310 *
A helicopter synthetic vision aided navigation technique;Qi Xiaoqian;《Radio Engineering》;2019;499-503 *

Also Published As

Publication number Publication date
CN111192229A (en) 2020-05-22

Similar Documents

Publication Publication Date Title
CN115439424B (en) Intelligent detection method for aerial video images of unmanned aerial vehicle
CN111145362B (en) Virtual-real fusion display method and system for airborne comprehensive vision system
US11748898B2 (en) Methods and system for infrared tracking
CN107194989B (en) Traffic accident scene three-dimensional reconstruction system and method based on unmanned aerial vehicle aerial photography
CN110443898A (en) An AR intelligent terminal target recognition system and method based on deep learning
CN111179168B (en) Vehicle-mounted 360-degree panoramic all-around monitoring system and method
US11380111B2 (en) Image colorization for vehicular camera images
CN112364707B (en) System and method for performing beyond-the-horizon perception on complex road conditions by intelligent vehicle
CN111192229B (en) Airborne multi-mode video picture enhancement display method and system
CN111709994B (en) Autonomous unmanned aerial vehicle visual detection and guidance system and method
CN113066050B (en) Method for resolving course attitude of airdrop cargo bed based on vision
US9726486B1 (en) System and method for merging enhanced vision data with a synthetic vision data
CN114973028A (en) Aerial video image real-time change detection method and system
CN207068060U (en) Traffic accident scene three-dimensional reconstruction system based on unmanned aerial vehicle aerial photography
CN116385504A (en) Inspection and ranging method based on unmanned aerial vehicle acquisition point cloud and image registration
WO2021026855A1 (en) Machine vision-based image processing method and device
Liu et al. Sensor fusion method for horizon detection from an aircraft in low visibility conditions
CN108195359B (en) Method and system for acquiring spatial data
CN109961043A (en) A single-tree height measurement method and system based on unmanned aerial vehicle high-resolution images
CN117727011A (en) Target identification method, device, equipment and storage medium based on image fusion
CN113253619B (en) Ship data information processing method and device
CN111833384B (en) Method and device for rapidly registering visible light and infrared images
Cheng et al. Infrared Image Enhancement by Multi-Modal Sensor Fusion in Enhanced Synthetic Vision System
Zhang et al. A novel farmland boundaries extraction and obstacle detection method based on unmanned aerial vehicle
CN116718165B (en) Combined imaging system based on unmanned aerial vehicle platform and image enhancement fusion method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant