CN111145362B - Virtual-real fusion display method and system for airborne comprehensive vision system - Google Patents

Virtual-real fusion display method and system for airborne comprehensive vision system

Info

Publication number
CN111145362B
CN111145362B
Authority
CN
China
Prior art keywords
image
fusion
virtual
picture
map picture
Prior art date
Legal status
Active
Application number
CN202010001858.5A
Other languages
Chinese (zh)
Other versions
CN111145362A (en)
Inventor
Cheng Yue
Li Yahui
Liu Zuolong
Wen Pengcheng
Yu Guanfeng
Han Wei
Current Assignee
Xian Aeronautics Computing Technique Research Institute of AVIC
Original Assignee
Xian Aeronautics Computing Technique Research Institute of AVIC
Priority date
Filing date
Publication date
Application filed by Xian Aeronautics Computing Technique Research Institute of AVIC
Priority to CN202010001858.5A
Publication of CN111145362A
Application granted
Publication of CN111145362B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention belongs to the field of airborne graphics and image processing, and relates to a virtual-real fusion display method and system for an airborne comprehensive vision system. In the disclosed method, sub-pixel-accuracy matching of the real-time multi-modal video against the synthetic vision map picture precisely indicates the positions of the runway and obstacles; image-graphics fusion enhances the comprehensive vision picture with color and texture; and the three-dimensional obstacle graphics are further geometrically corrected and fused, finally achieving an organically fused display of the multi-modal video and the synthetic vision map picture. The method can effectively improve the pilot's accurate perception of the spatial position and shape of the airfield runway and obstacles under low-visibility conditions, improve situational awareness, reduce typical accidents such as controlled flight into terrain and runway incursion during approach and landing, and improve flight safety.

Description

Virtual-real fusion display method and system for airborne comprehensive vision system
Technical Field
The invention belongs to the field of airborne graphics and image processing, and relates to a virtual-real fusion display method for an airborne comprehensive vision system.
Background
A Comprehensive Vision System (CVS), also known as an Enhanced Synthetic Vision System (ESVS), provides the pilot with equivalent-visual video picture information of the airport runway, hazardous terrain and obstacles during the approach phase by fusing multi-modal video with a three-dimensional digital map. It combines the large field of view, high resolution and true color of a Synthetic Vision System (SVS) with the real-time multi-modal (long-wave infrared, short-wave infrared and millimeter-wave) weather-penetrating imaging of an Enhanced Vision System (EVS). The resulting virtual-real fused, real-time comprehensive vision picture markedly improves the pilot's situational awareness and enhances flight safety.
In a traditional comprehensive vision system, inherent errors in navigation parameters and sensor calibration leave a certain registration error between the synthetic vision picture and the enhanced vision picture. To reduce the perception barrier caused by this virtual-real inconsistency, two solutions exist. One is to display the multi-modal video as a window inside the synthetic vision virtual picture, picture-in-picture style, ignoring the inconsistency at the window edges. The other is to superimpose a simplified wire-frame map graphic on the multi-modal video picture to indicate the runway, obstacles and terrain.
Both approaches inevitably retain some picture inconsistency, which interferes with the pilot's cognition and judgment of the environment. Moreover, because the synthetic vision picture and the enhanced vision picture are not accurately registered and fused, precise indication of key positions such as the runway and obstacles is lacking, as is recognizable real-scene detail such as color and texture.
Disclosure of Invention
The invention provides a virtual-real fusion display method for the comprehensive vision picture, aiming to improve the accuracy of virtual-real fusion and the readability of the picture.
The technical scheme of the invention is as follows:
The virtual-real fusion display method for an airborne comprehensive vision system comprises the following steps:
1) acquiring current airborne positioning information, the corresponding map elevation data and orthophoto, and combining flight attitude information to obtain a coarse-registration map picture consistent with the current frame of the multi-modal video sensor;
2) taking the coarse-registration map picture as a floating image f and the current multi-modal video frame as a reference image f_R, and optimizing an established error energy function of the affine transformation between the floating image and the reference image to obtain a corrected map picture with sub-pixel registration accuracy;
3) performing HSV color-space decomposition on the corrected map picture to obtain chrominance, saturation and luminance components, and performing image fusion of the luminance component with the multi-modal video;
4) merging the fused luminance component with the chrominance and saturation components of the color video frame, and outputting the fusion result;
5) superimposing the established obstacle model on the fused image picture of step 4), and rendering and outputting the virtual-real fused comprehensive vision picture.
Based on this scheme, the invention is further refined as follows:
Optionally, step 1) specifically comprises: querying the synthetic vision three-dimensional digital map database based on the airborne positioning information to obtain map elevation data near the airport and a map orthophoto within a set range of the aircraft's forward viewing direction, and performing perspective projection transformation of the map picture (i.e., the picture rendered from the map elevation data and the map orthophoto) by combining the flight attitude information with the sensor intrinsic parameters (principal point, focal length, etc.) obtained by calibrating the multi-modal video sensor, to obtain a coarse-registration map picture consistent with the current frame of the multi-modal video sensor.
Optionally, in step 1), the coarse-registration map picture is rendered according to the pinhole imaging principle of computer vision.
Optionally, step 2) specifically comprises: adopting the Normalized Total Gradient (NTG) as the error energy function of the affine transformation between the floating image and the reference image; solving the error energy function by iterative optimization to obtain the affine transformation parameters that minimize it, i.e., the geometric transformation relation from the floating image to the reference image; and finally applying this geometric transformation to the floating image to obtain a corrected map picture with sub-pixel registration accuracy.
Optionally, in step 3), the image fusion of the luminance component with the multi-modal video adopts a saliency-based fusion method: the Laplacian operator is applied pixel by pixel to the luminance-component image of the color video frame and to the multi-modal video frame to obtain initial saliency images; the initial saliency images are guided-filtered to output smoothed saliency images; and, taking the saliency values as weights, the luminance component of the color video frame and the multi-modal video frame are weighted-averaged pixel by pixel to output the fused luminance component.
Optionally, in step 5), the obstacle model is obtained by recalculating the obstacle point, line and surface positions and texture coordinates in three-dimensional space according to the geometric transformation relation between the floating image and the reference image (corresponding to the corrected map picture obtained in step 2).
The airborne positioning information includes longitude, latitude and altitude; the flight attitude information includes pitch, roll and yaw data.
Correspondingly, the invention also provides a virtual-real fusion display system for an airborne comprehensive vision system, comprising:
a coarse registration module, used to acquire current airborne positioning information, the corresponding map elevation data and orthophoto, and combine flight attitude information to obtain a coarse-registration map picture consistent with the current frame of the multi-modal video sensor;
a fine registration module, used to take the coarse-registration map picture as a floating image f and the current multi-modal video frame as a reference image f_R, and optimize an established error energy function of the affine transformation between the floating image and the reference image to obtain a corrected map picture with sub-pixel registration accuracy;
a luminance component fusion module, used to perform HSV color-space decomposition on the corrected map picture to obtain chrominance, saturation and luminance components, and perform image fusion of the luminance component with the multi-modal video;
an HSV component merging module, used to merge the fused luminance component with the chrominance and saturation components of the color video frame and output the fusion result;
and a rendering output module, used to superimpose the established obstacle model on the fused image picture, and render and output the virtual-real fused comprehensive vision picture.
Optionally, the obstacle model is obtained by taking the corrected map picture produced by the fine registration module and recalculating the obstacle point, line and surface positions and texture coordinates in three-dimensional space.
Correspondingly, the invention also provides an airborne device comprising a processor and a program memory, wherein the program stored in the program memory, when loaded by the processor, executes the above virtual-real fusion display method of the airborne comprehensive vision system.
The invention has the following advantages:
Through accurate registration and natural fusion of the real-time multi-modal video with the comprehensive vision three-dimensional map picture, the invention outputs an enhanced comprehensive vision display picture with true color and texture that accurately indicates the runway, hazardous terrain and tall obstacles. It can improve the pilot's spatial perception of the airport runway and obstacles under low-visibility conditions (including haze, rain and snow, dust, and night), thereby reducing typical accidents such as controlled flight into terrain and runway incursion during approach and landing, and improving flight safety.
Drawings
Fig. 1 is a schematic flow chart of the virtual-real fusion display method of the airborne comprehensive vision system.
Fig. 2 is a schematic diagram of a geometric correction flow of an obstacle model.
Detailed Description
The invention is further described in detail below with reference to the drawings and examples.
To accurately indicate key positions such as the runway and obstacles in the output picture of the comprehensive vision system while truthfully displaying the scene's color and texture information, this embodiment provides a virtual-real fusion display method for an airborne comprehensive vision system. As shown in Fig. 1, the method is divided into two parts: map-video registration, and image-graphics fusion.
1. Map-video registration:
First, the airborne GPS information (longitude, latitude, altitude) is used to query the synthetic vision three-dimensional digital map database, obtaining map elevation data and orthophotos near the airport and within a certain range of the aircraft's forward direction. Combining extrinsic parameters such as the flight attitude (pitch, roll, yaw) with intrinsic parameters such as the principal point and focal length obtained by calibrating the multi-modal video sensor, a perspective projection transformation is applied to the map picture, and a coarse-registration map picture consistent with the current frame of the multi-modal video sensor is rendered according to the pinhole imaging principle of computer vision.
Second, considering that the map picture and the real-time multi-modal video frame of the comprehensive vision satisfy a two-dimensional planar affine transformation relation, the map picture is taken as the floating image f and the multi-modal video frame as the reference image f_R, and an error energy function of the affine transformation between them is established. This embodiment uses the Normalized Total Gradient (NTG) as the energy function of the affine transformation τ between the floating and reference images:

NTG(f∘τ, f_R) = ‖∇(f∘τ) − ∇f_R‖₁ / (‖∇(f∘τ)‖₁ + ‖∇f_R‖₁)

where ∇ is the gradient operator and ‖·‖₁ is the L1 norm.
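The patent does not write the affine model out explicitly; for reference, the standard six-parameter planar transformation that the energy above is minimized over is

x′ = a₁₁x + a₁₂y + t_x,  y′ = a₂₁x + a₂₂y + t_y,

where (x, y) are floating-image coordinates, (x′, y′) reference-image coordinates, and (a₁₁, a₁₂, a₂₁, a₂₂, t_x, t_y) the six parameters to be solved.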
The error energy function is then solved by iterative optimization, yielding the affine transformation parameters that minimize it, i.e., the geometric transformation from the floating image to the reference image.
Finally, this geometric transformation is applied to the floating image, producing a corrected map picture registered to the reference image at sub-pixel accuracy.
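A minimal Python sketch of this fine-registration step follows (OpenCV, NumPy, SciPy). The Powell optimizer and the parameterization around the identity transform are illustrative assumptions; the patent only states that an iterative method is used:

import cv2
import numpy as np
from scipy.optimize import minimize

def ntg(f, f_r):
    """Normalized Total Gradient between two single-channel float images."""
    gy_d, gx_d = np.gradient(f - f_r)
    gy_f, gx_f = np.gradient(f)
    gy_r, gx_r = np.gradient(f_r)
    num = np.abs(gx_d).sum() + np.abs(gy_d).sum()
    den = (np.abs(gx_f).sum() + np.abs(gy_f).sum()
           + np.abs(gx_r).sum() + np.abs(gy_r).sum())
    return num / max(den, 1e-12)

def refine_affine(f_float, f_ref):
    """Search the six affine parameters minimizing NTG; returns a 2x3 matrix."""
    f_float = f_float.astype(np.float32)
    f_ref = f_ref.astype(np.float32)
    h, w = f_ref.shape

    def cost(p):
        a = np.float32([[1 + p[0], p[1], p[2]],
                        [p[3], 1 + p[4], p[5]]])
        warped = cv2.warpAffine(f_float, a, (w, h))
        return ntg(warped, f_ref)

    # The images are already coarsely registered, so starting from the
    # identity transform lets the iteration converge to a sub-pixel residual.
    res = minimize(cost, np.zeros(6), method="Powell")
    p = res.x
    return np.float32([[1 + p[0], p[1], p[2]], [p[3], 1 + p[4], p[5]]])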
2. Image-graphics fusion:
First, HSV color-space decomposition is applied to the registered map picture, yielding chrominance, saturation and luminance image components. Because a multi-modal video frame has only a luminance component, it is the luminance component of the map picture that is image-fused with the multi-modal video. The fusion adopts a saliency-based method: initial saliency values are extracted with the Laplacian operator, the initial saliency images are smoothed with a guided filter, and the results serve as fusion weights. The map-picture luminance component and the multi-modal image are then weighted and averaged pixel by pixel with these saliency weights, and the fused luminance component is output. Finally, the fused luminance component is merged with the chrominance and saturation components of the original color video frame, and a video frame weighted-fused on image saliency is output.
Further, according to the affine transformation relation between the precisely registered map picture and the multi-modal video plane, the obstacle point, line and surface positions and texture coordinates are recalculated in three-dimensional space and superimposed on the fused image picture. Finally, the fused multi-modal video frame and the corrected obstacle model are rendered together by a graphics engine such as OpenGL, and the virtual-real fused comprehensive vision picture is output.
Because the multi-modal video is accurately registered with the map texture of the comprehensive vision, the planar runway image and the three-dimensional obstacle model are accurately indicated. Meanwhile, since the map picture is a clear image captured in fine weather, it carries rich color and texture and is highly recognizable. Through this migration of color and texture, the virtual-real fused comprehensive vision picture gains better readability.
An example application is given below:
in the map video registration part, the longitude and latitude of an airplane at the current video frame time point are read through an onboard integrated navigation device, a three-dimensional digital map database is queried, three-dimensional digital map elevation Data (DEM) and an orthographic image (DOM) within the same view angle of the forward-looking multi-mode video camera in the forward-looking direction of the airplane at the current position are retrieved, the current longitude, latitude, altitude information and pitching, rolling and yawing information are converted into a geocentric earth fixed (ECEF) coordinate system, and the synthetic view map picture under the virtual video camera is calculated by combining the relative rotation translation parameters between the multi-mode video sensor and the machine body coordinate system and the internal parameters such as the principal point, focal length and the like of the multi-mode video sensor.
Because of errors in the navigation parameters and calibration data, the map picture generated by the virtual camera has a certain registration error with respect to the real-time video frame. The map picture and the real-time video frame are approximately considered to satisfy an affine transformation relation; the affine parameters relating them are taken as the optimization variables, and the error energy constructed from the normalized total gradient is iteratively optimized to obtain accurate affine transformation parameters. Because the map picture and the multi-modal video frame are already coarsely registered, the iterative optimization converges quickly and outputs sub-pixel-level registration parameters.
By applying the perspective transformation model and the accurate registration model, the position of any pixel of the multi-modal video frame can be located within the map picture. To prevent data-loss artifacts such as blank "holes", the corrected map picture is generated by backward-mapping interpolation: the geometric transformation is applied to each point position on the multi-modal video, the corresponding pixel position in the map picture is queried, and bilinear interpolation with the neighboring pixels yields the map-picture pixel value. After this backward mapping, the precisely registered synthetic vision virtual map picture is obtained.
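In OpenCV terms, this backward mapping is what cv2.warpAffine performs with the WARP_INVERSE_MAP flag; a minimal sketch, in which the direction convention of the matrix is our assumption:

import cv2

def correct_map_picture(map_img, affine_2x3, out_size):
    """Warp the map picture onto the video-frame grid by backward mapping.
    Here affine_2x3 is assumed to map output (video-frame) coordinates back
    into map-picture coordinates, matching WARP_INVERSE_MAP semantics; each
    output pixel is bilinearly interpolated at its source position, so no
    holes can appear. out_size is (width, height)."""
    return cv2.warpAffine(map_img, affine_2x3, out_size,
                          flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)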
In the image-graphics fusion part, HSV chrominance, saturation and luminance decomposition is first applied to the registered synthetic vision virtual map picture, and the single-channel map-picture luminance image and the single-channel multi-modal image are extracted for fusion. The map-picture luminance image and the multi-modal image are traversed, and coarse saliency images are obtained with the Laplacian operator. With the map-picture luminance image as the guidance image, its corresponding saliency image is guided-filtered, and the smoothed map-picture luminance saliency image is output; with the multi-modal video frame as the guidance image, its saliency image is guided-filtered, and the smoothed multi-modal video frame saliency image is output. Normalized weights are then computed from the two smoothed saliency images, with more salient pixels receiving higher weights. The map-picture luminance image and the multi-modal video frame image are weighted and averaged with these per-pixel weights to obtain the fused luminance image. The fused luminance image is merged with the chrominance and saturation images of the map picture, and the fused multi-modal video frame is output.
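A condensed Python sketch of this fusion chain follows. cv2.ximgproc.guidedFilter requires the opencv-contrib-python package, and the radius and eps values are illustrative assumptions, not values given in the patent:

import cv2
import numpy as np

def fuse_luminance(map_bgr, multimodal_gray):
    """Saliency-weighted fusion of the map picture's HSV luminance with a
    single-channel multi-modal frame, following the embodiment above."""
    h, s, v = cv2.split(cv2.cvtColor(map_bgr, cv2.COLOR_BGR2HSV))
    v32 = v.astype(np.float32)
    ir32 = multimodal_gray.astype(np.float32)

    # Coarse saliency: magnitude of the Laplacian response, pixel by pixel.
    sal_v = np.abs(cv2.Laplacian(v32, cv2.CV_32F))
    sal_ir = np.abs(cv2.Laplacian(ir32, cv2.CV_32F))

    # Guided filtering, each saliency map guided by its own source image.
    sal_v = cv2.ximgproc.guidedFilter(v32, sal_v, 8, 100.0)
    sal_ir = cv2.ximgproc.guidedFilter(ir32, sal_ir, 8, 100.0)

    # Normalized weights: the more salient pixel receives the higher weight.
    w = sal_v / (sal_v + sal_ir + 1e-6)
    fused_v = np.clip(w * v32 + (1.0 - w) * ir32, 0, 255).astype(np.uint8)

    # Merge the fused luminance with the map picture's chroma and saturation.
    return cv2.cvtColor(cv2.merge([h, s, fused_v]), cv2.COLOR_HSV2BGR)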
After the image fusion is completed, geometric correction fusion is performed on the three-dimensional obstacle graphics. As shown in Fig. 2, the coordinate values of each vertex of the virtual obstacle's three-dimensional graphic are first obtained and projected using the camera's intrinsic and extrinsic parameters; the accurate registration affine transformation computed in the previous step is then applied after the projection, so that the obstacle graphic's pixels are displayed at the correct positions. According to these pixel coordinates, the corresponding longitude and latitude are queried in the map picture. The longitude and latitude of each vertex of the three-dimensional obstacle graphic are corrected to the queried values while the height value is kept unchanged, and the corrected three-dimensional obstacle model is output. The image-fused multi-modal video and the corrected obstacle model are then sent to the OpenGL state machine for unified graphics rendering, and the final virtual-real image-graphics fusion picture of the comprehensive vision system is output.
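A schematic Python sketch of this correction loop follows; pixel_to_lonlat and llh_to_world stand for the map picture's georeference lookup and the geodetic-to-world conversion, which the embodiment assumes but does not name:

import numpy as np

def correct_obstacle_vertices(verts_llh, k, r, t, affine_2x3,
                              pixel_to_lonlat, llh_to_world):
    """Re-anchor obstacle vertices so they render at the registered pixels."""
    corrected = []
    for lon, lat, height in verts_llh:
        # 1) Project the vertex with the calibrated camera (pinhole model).
        xw = llh_to_world(lon, lat, height)        # e.g. geodetic -> ECEF
        cam = r.T @ (xw - t)                       # world -> camera frame
        u, v, w = k @ cam
        px = np.array([u / w, v / w, 1.0])
        # 2) Apply the sub-pixel registration affine to the projected pixel.
        px_reg = affine_2x3 @ px
        # 3) Query the longitude/latitude under the corrected pixel from the
        #    map picture's georeference; keep the original height value.
        lon2, lat2 = pixel_to_lonlat(px_reg[0], px_reg[1])
        corrected.append((lon2, lat2, height))
    return corrected

The corrected vertices can then be handed, together with the fused video frame, to the OpenGL rendering pass described above.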

Claims (10)

1. A virtual-real fusion display method for an airborne comprehensive vision system, characterized by comprising the following steps:
1) acquiring current airborne positioning information, the corresponding map elevation data and orthophoto, and combining flight attitude information to obtain a coarse-registration map picture consistent with the current frame of the multi-modal video sensor;
2) taking the coarse-registration map picture as a floating image f and the current multi-modal video frame as a reference image f_R, and optimizing an established error energy function of the affine transformation between the floating image and the reference image to obtain a corrected map picture with sub-pixel registration accuracy;
3) performing HSV color-space decomposition on the corrected map picture to obtain chrominance, saturation and luminance components, and performing image fusion of the luminance component with the multi-modal video;
4) merging the fused luminance component with the chrominance and saturation components of the color video frame, and outputting the fusion result;
5) superimposing the established obstacle model on the fused image picture of step 4), and rendering and outputting the virtual-real fused comprehensive vision picture.
2. The virtual-real fusion display method for an airborne comprehensive vision system according to claim 1, wherein step 1) specifically comprises: querying the synthetic vision three-dimensional digital map database based on the airborne positioning information to obtain map elevation data near the airport and a map orthophoto within a set range of the aircraft's forward viewing direction, and performing perspective projection transformation of the map picture by combining the flight attitude information with the sensor intrinsic parameters obtained by calibrating the multi-modal video sensor, to obtain a coarse-registration map picture consistent with the current frame of the multi-modal video sensor.
3. The virtual-real fusion display method for an airborne comprehensive vision system according to claim 1, wherein in step 1) the coarse-registration map picture is rendered according to the pinhole imaging principle of computer vision.
4. The virtual-real fusion display method for an airborne comprehensive vision system according to claim 1, wherein step 2) specifically comprises: adopting the Normalized Total Gradient (NTG) as the error energy function of the affine transformation between the floating image and the reference image; solving the error energy function by iterative optimization to obtain the affine transformation parameters that minimize it, i.e., the geometric transformation relation from the floating image to the reference image; and finally applying this geometric transformation to the floating image to obtain a corrected map picture with sub-pixel registration accuracy.
5. The virtual-real fusion display method for an airborne comprehensive vision system according to claim 1, wherein in step 3) the image fusion of the luminance component with the multi-modal video adopts a saliency-based fusion method: the Laplacian operator is applied pixel by pixel to the luminance-component image of the color video frame and to the multi-modal video frame to obtain initial saliency images; the initial saliency images are guided-filtered to output smoothed saliency images; and, taking the saliency values as weights, the luminance component of the color video frame and the multi-modal video frame are weighted-averaged pixel by pixel to output the fused luminance component.
6. The virtual-real fusion display method for an airborne comprehensive vision system according to claim 1, wherein the obstacle model in step 5) is obtained by recalculating the obstacle point, line and surface positions and texture coordinates in three-dimensional space according to the geometric transformation relation between the floating image and the reference image.
7. The virtual-real fusion display method for an airborne comprehensive vision system according to claim 1, wherein the airborne positioning information includes longitude, latitude and altitude, and the flight attitude information includes pitch, roll and yaw data.
8. A virtual-real fusion display system for an airborne comprehensive vision system, characterized by comprising:
a coarse registration module, used to acquire current airborne positioning information, the corresponding map elevation data and orthophoto, and combine flight attitude information to obtain a coarse-registration map picture consistent with the current frame of the multi-modal video sensor;
a fine registration module, used to take the coarse-registration map picture as a floating image f and the current multi-modal video frame as a reference image f_R, and optimize an established error energy function of the affine transformation between the floating image and the reference image to obtain a corrected map picture with sub-pixel registration accuracy;
a luminance component fusion module, used to perform HSV color-space decomposition on the corrected map picture to obtain chrominance, saturation and luminance components, and perform image fusion of the luminance component with the multi-modal video;
an HSV component merging module, used to merge the fused luminance component with the chrominance and saturation components of the color video frame and output the fusion result;
and a rendering output module, used to superimpose the established obstacle model on the fused image picture, and render and output the virtual-real fused comprehensive vision picture.
9. The virtual-real fusion display system for an airborne comprehensive vision system according to claim 8, wherein the obstacle model is obtained by taking the corrected map picture produced by the fine registration module and recalculating the obstacle point, line and surface positions and texture coordinates in three-dimensional space.
10. An airborne device comprising a processor and a program memory, wherein the program stored in the program memory, when loaded by the processor, executes the virtual-real fusion display method for an airborne comprehensive vision system according to claim 1.
CN202010001858.5A 2020-01-02 2020-01-02 Virtual-real fusion display method and system for airborne comprehensive vision system Active CN111145362B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010001858.5A CN111145362B (en) 2020-01-02 2020-01-02 Virtual-real fusion display method and system for airborne comprehensive vision system

Publications (2)

Publication Number Publication Date
CN111145362A CN111145362A (en) 2020-05-12
CN111145362B (en) 2023-05-09

Family

ID=70523266

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010001858.5A Active CN111145362B (en) 2020-01-02 2020-01-02 Virtual-real fusion display method and system for airborne comprehensive vision system

Country Status (1)

Country Link
CN (1) CN111145362B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112173141B (en) * 2020-09-25 2023-04-25 中国直升机设计研究所 Helicopter synthesized view display method
CN112381935A (en) * 2020-09-29 2021-02-19 西安应用光学研究所 Synthetic vision generation and multi-element fusion device
CN112419211B (en) * 2020-09-29 2024-02-02 西安应用光学研究所 Night vision system image enhancement method based on synthetic vision
CN113703059B (en) * 2021-09-02 2023-11-17 中船海洋探测技术研究院有限公司 Remote magnetic detection method for water ferromagnetic target clusters
CN114820739B (en) * 2022-07-01 2022-10-11 浙江工商大学 Multispectral camera-oriented image rapid registration method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111192229B (en) * 2020-01-02 2023-10-13 中国航空工业集团公司西安航空计算技术研究所 Airborne multi-mode video picture enhancement display method and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226830A (en) * 2013-04-25 2013-07-31 北京大学 Automatic matching correction method of video texture projection in three-dimensional virtual-real fusion environment
CN103455982A (en) * 2013-08-31 2013-12-18 四川川大智胜软件股份有限公司 Airport scene monitoring vision enhancing method based on virtual-real fusion
CN104469155A (en) * 2014-12-04 2015-03-25 中国航空工业集团公司第六三一研究所 On-board figure and image virtual-real superposition method
CN105139451A (en) * 2015-08-10 2015-12-09 中国商用飞机有限责任公司北京民用飞机技术研究中心 HUD (head-up display) based synthetic vision guiding display system
CN109544696A (en) * 2018-12-04 2019-03-29 中国航空工业集团公司西安航空计算技术研究所 A kind of airborne enhancing Synthetic vision actual situation Image Precision Registration of view-based access control model inertia combination

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Yue Cheng et al. "A prototype of Enhanced Synthetic Vision System using short-wave infrared." 2018 IEEE/AIAA 37th Digital Avionics Systems Conference (DASC), 2018, full text. *
Liu Changjiang; Zhang Yi; Yang Hongyu. "Horizon detection in aerial images under low visibility based on virtual-real fusion." Journal of Sichuan University (Engineering Science Edition), 2012, No. 4, full text. *
Zhang Qianxin; Zhang Yupeng. "Flight vision system based on augmented reality technology." Avionics Technology, 2016, No. 1, full text. *

Also Published As

Publication number Publication date
CN111145362A (en) 2020-05-12

Similar Documents

Publication Publication Date Title
CN111145362B (en) Virtual-real fusion display method and system for airborne comprehensive vision system
CN107194989B (en) Traffic accident scene three-dimensional reconstruction system and method based on unmanned aerial vehicle aircraft aerial photography
CN111448591B (en) System and method for locating a vehicle in poor lighting conditions
US7148861B2 (en) Systems and methods for providing enhanced vision imaging with decreased latency
CN107316325B (en) Airborne laser point cloud and image registration fusion method based on image registration
Chiabrando et al. UAV and RPV systems for photogrammetric surveys in archaelogical areas: two tests in the Piedmont region (Italy)
CN115439424B (en) Intelligent detection method for aerial video images of unmanned aerial vehicle
US8818076B2 (en) System and method for cost-effective, high-fidelity 3D-modeling of large-scale urban environments
US10291898B2 (en) Method and apparatus for updating navigation map
US9679362B2 (en) System and method for generating textured map object images
CN107527328B (en) Unmanned aerial vehicle image geometric processing method considering precision and speed
CN112330582A (en) Unmanned aerial vehicle image and satellite remote sensing image fusion algorithm
CN111192229B (en) Airborne multi-mode video picture enhancement display method and system
US11380111B2 (en) Image colorization for vehicular camera images
CN114998545A (en) Three-dimensional modeling shadow recognition system based on deep learning
CN111145260B (en) Vehicle-mounted-based double-target setting method
CN110103829B (en) Display method and device of vehicle-mounted display screen, vehicle-mounted display screen and vehicle
CN113240813B (en) Three-dimensional point cloud information determining method and device
Stilla et al. Texture mapping of 3d building models with oblique direct geo-referenced airborne IR image sequences
CN109840920A (en) It takes photo by plane object space information method for registering and aircraft spatial information display methods
CN113421325B (en) Three-dimensional reconstruction method for vehicle based on multi-sensor fusion
Cheng et al. Infrared Image Enhancement by Multi-Modal Sensor Fusion in Enhanced Synthetic Vision System
Cheng et al. A prototype of Enhanced Synthetic Vision System using short-wave infrared
US20240085186A1 (en) A method, software product, and system for determining a position and orientation in a 3d reconstruction of the earth's surface
CN113570720B (en) Unmanned plane video oil pipeline real-time display method and system based on gis technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant