CN111145362A - Virtual-real fusion display method and system for airborne comprehensive vision system - Google Patents


Info

Publication number
CN111145362A
Authority
CN
China
Prior art keywords
image
fusion
virtual
airborne
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010001858.5A
Other languages
Chinese (zh)
Other versions
CN111145362B (en)
Inventor
程岳
李亚晖
刘作龙
文鹏程
余冠锋
韩伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Aeronautics Computing Technique Research Institute of AVIC
Original Assignee
Xian Aeronautics Computing Technique Research Institute of AVIC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Aeronautics Computing Technique Research Institute of AVIC filed Critical Xian Aeronautics Computing Technique Research Institute of AVIC
Priority to CN202010001858.5A priority Critical patent/CN111145362B/en
Publication of CN111145362A publication Critical patent/CN111145362A/en
Application granted granted Critical
Publication of CN111145362B publication Critical patent/CN111145362B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention belongs to the field of airborne graphics and image processing, and relates to a virtual-real fusion display method and system for an airborne comprehensive vision system. In the method, the real-time multi-modal video is registered to the synthetic-vision map picture with sub-pixel accuracy, so that the exact positions of the runway and obstacles are indicated; the comprehensive vision picture is enriched with color and texture through image-graphics fusion; and the three-dimensional obstacle graphics are further geometrically corrected and fused, finally realizing an organically fused display of the multi-modal video and the synthetic-vision map picture. The invention can effectively improve the pilot's perception of the spatial positions and shapes of the airport runway and obstacles under low-visibility conditions, improve situational awareness, reduce typical accidents such as controlled flight into terrain and runway incursion during approach and landing, and improve aircraft safety.

Description

Virtual-real fusion display method and system for airborne comprehensive vision system
Technical Field
The invention belongs to the field of airborne graphics and image processing, and relates to a virtual-real fusion display method for an airborne comprehensive vision system.
Background
A combined vision system (CVS), also called an enhanced synthetic vision system (ESVS), integrates multi-modal video with a three-dimensional digital map and provides the pilot with an equivalent-visual video picture of the airport runway, dangerous terrain and obstacles during the approach phase of flight. It combines the large field of view, high resolution and true color of a synthetic vision system (SVS) with the ability of an enhanced vision system (EVS) to image in real time through meteorological obscurants using multi-modal (long-wave infrared, short-wave infrared and millimeter-wave) sensors. The virtual-real fused real-time comprehensive vision picture significantly improves the pilot's situational awareness and enhances flight safety.
In a traditional comprehensive vision system, inherent errors in navigation parameters, sensor calibration and the like leave a certain registration error between the synthetic vision picture and the enhanced vision picture. To reduce the perceptual interference caused by this virtual-real inconsistency, two solutions are currently used. One displays the multi-modal video picture in a window inside the synthetic vision virtual picture (picture-in-picture) and ignores the inconsistency at the window edges. The other superimposes a simplified wireframe map graphic on the multi-modal video frame to indicate the runway, obstacles and terrain.
Both approaches inevitably leave some picture inconsistency, which interferes with the pilot's perception and judgment of the environment. Moreover, because the synthetic vision picture and the enhanced vision picture are not accurately registered and fused, accurate indication of key positions such as runways and obstacles, and recognizable detail such as real-scene color and texture, are lacking.
Disclosure of Invention
The invention provides a virtual-real fusion display method for the comprehensive vision, aiming to improve the accuracy of virtual-real fusion and the readability of the picture.
The technical scheme of the invention is as follows:
the virtual-real fusion display method of the airborne comprehensive vision system comprises the following steps:
1) acquiring the current airborne positioning information and the corresponding map elevation data and orthoimage, and combining flight attitude information to obtain a coarse-registration map picture consistent with the current frame of the multi-modal video sensor;
2) taking the coarse-registration map picture as a floating image f and the current multi-modal video frame as a reference image f_R, and optimizing an established error energy function on the affine transformation between the floating image and the reference image to obtain a corrected map picture with sub-pixel registration accuracy;
3) performing HSV color space decomposition on the corrected map picture to obtain the decomposed chrominance, saturation and luminance components, and performing image fusion on the luminance component and the multi-modal video;
4) merging the fused luminance component with the chrominance and saturation components of the color video frame, and outputting the fusion result;
5) superimposing the established obstacle model on the image picture fused in step 4), and rendering and outputting a virtual-real fused comprehensive vision picture.
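For illustration, the five steps can be strung together as the following minimal Python sketch; the helper names render_map_view, register_affine_ntg, fuse_luminance_saliency and render_obstacles are hypothetical placeholders for the operations detailed below, not the invention's disclosed implementation.

```python
import cv2
import numpy as np

def cvs_fusion_frame(nav, attitude, video_frame, map_db, obstacle_db):
    """One display frame of the virtual-real fusion pipeline (steps 1-5).
    `video_frame` is the single-channel multi-modal frame (uint8)."""
    # 1) coarse registration: render a map picture for the current pose
    coarse_map = render_map_view(nav, attitude, map_db)              # hypothetical
    # 2) fine registration: NTG-minimizing affine, map picture -> video frame
    gray = cv2.cvtColor(coarse_map, cv2.COLOR_BGR2GRAY)
    A = register_affine_ntg(gray.astype(np.float32),
                            video_frame.astype(np.float32))          # hypothetical
    corrected_map = cv2.warpAffine(coarse_map, A, video_frame.shape[1::-1])
    # 3) HSV decomposition; fuse the luminance plane with the multi-modal video
    h, s, v = cv2.split(cv2.cvtColor(corrected_map, cv2.COLOR_BGR2HSV))
    v_fused = fuse_luminance_saliency(v, video_frame)                # hypothetical
    # 4) merge the fused luminance with the map picture's chrominance/saturation
    fused = cv2.cvtColor(cv2.merge([h, s, v_fused]), cv2.COLOR_HSV2BGR)
    # 5) superimpose the geometry-corrected obstacle model and render
    return render_obstacles(fused, obstacle_db, A)                   # hypothetical
```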
Based on the above scheme, the invention is further refined as follows:
optionally, step 1) is specifically: a three-dimensional digital map database of a synthetic view is inquired based on airborne positioning information, map elevation data near an airport and a map orthographic image of a set range of the forward-looking direction of an airplane are obtained, perspective projection transformation is carried out on a map picture (namely the picture rendered based on the map elevation data and the map orthographic image) by combining flight attitude information and sensor internal parameters (a principal point, a focal length and the like) obtained by calibration of a multi-mode video sensor, and a rough registration map picture consistent with a current frame of the multi-mode video sensor is obtained.
Optionally, in step 1), the coarse-registration map picture is drawn according to the pinhole imaging principle of computer vision.
Optionally, step 2) is specifically: adopting the normalized total gradient (NTG) as the error energy function of the affine transformation between the floating image and the reference image, and solving it by an iterative method; the affine transformation parameters that minimize the error energy function give the geometric transformation between the floating image and the reference image. Finally, the floating image is geometrically transformed to obtain a corrected map picture with sub-pixel registration accuracy.
Optionally, in step 3), the luminance component is image-fused with the multi-modal video by a saliency-based fusion method: a Laplacian operator is applied pixel by pixel to the luminance component image of the color video frame and to the multi-modal video frame to obtain initial saliency images; the initial saliency images are then guided-filtered to output smoothed saliency images; finally, taking the saliency values as weights, the luminance component of the color video frame and the multi-modal video frame are weighted-averaged pixel by pixel, and the fused luminance component is output.
Optionally, the obstacle model in step 5) is obtained by recalculating the positions of the obstacle points, lines and faces and the texture coordinates in three-dimensional space according to the geometric transformation relationship between the floating image and the reference image (i.e., the transformation used to obtain the corrected map picture in step 2).
The airborne positioning information comprises longitude, latitude and altitude; the flight attitude information includes pitch, roll, and yaw data.
Correspondingly, the invention also provides a virtual-real fusion display system of the airborne comprehensive vision system, which comprises:
a coarse registration module for acquiring the current airborne positioning information and the corresponding map elevation data and orthoimage, and combining flight attitude information to obtain a coarse-registration map picture consistent with the current frame of the multi-modal video sensor;
a fine registration module for taking the coarse-registration map picture as a floating image f and the current multi-modal video frame as a reference image f_R, and optimizing an established error energy function on the affine transformation between the floating image and the reference image to obtain a corrected map picture with sub-pixel registration accuracy;
a luminance component fusion module for performing HSV color space decomposition on the corrected map picture to obtain the decomposed chrominance, saturation and luminance components, and image-fusing the luminance component with the multi-modal video;
an HSV component merging module for merging the fused luminance component with the chrominance and saturation components of the color video frame and outputting the fusion result;
and a rendering output module for superimposing the established obstacle model on the fused image picture, and rendering and outputting a virtual-real fused comprehensive vision picture.
Optionally, the obstacle model is obtained by recalculating the positions of the obstacle points, lines and faces and the texture coordinates in three-dimensional space from the corrected map picture obtained by the fine registration module.
Correspondingly, the invention also provides an airborne device comprising a processor and a program memory, wherein the program stored in the program memory, when loaded by the processor, performs the above virtual-real fusion display method of the airborne comprehensive vision system.
The invention has the following advantages:
Through accurate registration and natural fusion of the real-time multi-modal video with the three-dimensional map picture of the comprehensive vision and the obstacle model, the invention outputs an enhanced comprehensive vision display picture with realistic color and texture that accurately indicates the runway, dangerous terrain and tall obstacles. It can improve the pilot's spatial scene perception of the airport runway and obstacles under low-visibility conditions (including haze, rain, snow, dust and night), thereby reducing typical accidents such as controlled flight into terrain and runway incursion during approach and landing, and improving aircraft safety.
Drawings
Fig. 1 is a schematic flow chart of the virtual-real fusion display method of the airborne comprehensive vision system according to the invention.
Fig. 2 is a schematic diagram of a geometric correction process of the obstacle model.
Detailed Description
The invention is further described in detail below with reference to the figures and examples.
In order to accurately indicate key positions such as the runway and obstacles in the output picture of the comprehensive vision system, while truthfully displaying the color and texture information of the scene, the virtual-real fusion display method of the airborne comprehensive vision system provided by this embodiment is divided into two main parts, map-video registration and image-graphics fusion, as shown in Fig. 1.
First, the map-video registration part:
firstly, a synthetic view three-dimensional digital map database is inquired by utilizing airborne GPS information (longitude, latitude and height), and map elevation data and an orthoimage in a certain range near an airport and in the forward-looking direction of an airplane are obtained. And (3) carrying out perspective projection transformation on the map picture by combining external parameters such as flight attitude information (pitching, rolling and yawing) and the like and internal parameters such as principal points, focal lengths and the like obtained by calibration of the multi-mode video sensor, and drawing a coarse registration map picture consistent with the current frame of the multi-mode video sensor according to a pinhole imaging principle in computer vision.
Next, the map picture and the real-time multi-modal video frame of the comprehensive vision are considered to satisfy a two-dimensional planar affine transformation. Taking the map picture as the floating image f and the multi-modal video frame as the reference image f_R, an error energy function is established on the affine transformation between the two. In this embodiment, the normalized total gradient (NTG) is used as this energy function:

\[ \mathrm{NTG}(f, f_R) = \frac{\lVert \nabla (f - f_R) \rVert_1}{\lVert \nabla f \rVert_1 + \lVert \nabla f_R \rVert_1} \]

where \(\nabla\) is the gradient operator and \(\lVert \cdot \rVert_1\) is the L1 norm.
The error energy function is then minimized by an iterative method; the affine transformation parameters at the minimum define the geometric transformation from the floating image to the reference image.
Finally, this geometric transformation is applied to the floating image, yielding a corrected map picture registered to the reference image with sub-pixel accuracy.
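A minimal sketch of this registration step follows, assuming single-channel float32 images of equal size; SciPy's derivative-free Powell optimizer stands in for whichever iterative solver an implementation actually uses, and the six-parameter perturbation of the identity affine is an illustrative parameterization:

```python
import cv2
import numpy as np
from scipy.optimize import minimize

def total_gradient_l1(img):
    """L1 norm of the image gradient field."""
    gy, gx = np.gradient(img)
    return np.abs(gx).sum() + np.abs(gy).sum()

def ntg(f, f_r):
    """Normalized total gradient between floating image f and reference f_r."""
    return total_gradient_l1(f - f_r) / (
        total_gradient_l1(f) + total_gradient_l1(f_r) + 1e-12)

def register_affine_ntg(floating, reference):
    """Find the 2x3 affine warp of `floating` that minimizes NTG to `reference`."""
    h, w = reference.shape

    def cost(p):
        a = np.array([[1 + p[0], p[1], p[2]],
                      [p[3], 1 + p[4], p[5]]])
        warped = cv2.warpAffine(floating, a, (w, h), flags=cv2.INTER_LINEAR)
        return ntg(warped, reference)

    res = minimize(cost, np.zeros(6), method="Powell")  # derivative-free solver
    p = res.x
    return np.array([[1 + p[0], p[1], p[2]],
                     [p[3], 1 + p[4], p[5]]])
```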
Second, the image-graphics fusion part:
firstly, HSV color space decomposition is carried out on a registered map picture to obtain decomposed chromaticity, saturation and brightness image components. Since the multimodal video frame has only a luminance component, the luminance component of the map picture is image-fused with the multimodal video. In the process of image fusion, a fusion method based on the significance is adopted. And extracting an initial value of the image significance by using a Laplacian operator, and outputting a fusion weight after performing smooth optimization on the initial significance image by using a guide filter. And further carrying out pixel-by-pixel weighted average on the map picture brightness component and the multi-modal image by utilizing saliency map weighting and outputting a brightness component fusion result. And finally, combining the fused brightness component and the original color video frame chrominance and saturation component, and outputting the video frame weighted and fused based on the image saliency.
Furthermore, according to the affine transformation relation between the accurately registered map picture and the multi-modal video plane, the positions of the obstacle points, lines and faces and the texture coordinates are recalculated in three-dimensional space and superimposed on the fused image picture. Finally, the fused multi-modal video frame and the corrected obstacle model are rendered together by a graphics engine such as OpenGL, and the virtual-real fused comprehensive vision picture is output.
Because the multi-modal video and the map texture of the comprehensive vision are accurately registered, the planar runway picture and the three-dimensional obstacle model are indicated precisely. At the same time, since the map picture is a clear image captured in fine weather, it carries rich, highly recognizable color and texture. Through this transfer of color and texture, the virtual-real fused comprehensive vision picture gains readability.
An application example is given below:
in the map video registration part, the longitude and latitude of the airplane at the current video frame time point are read through an airborne integrated navigation device, a three-dimensional digital map database is inquired, three-dimensional digital map elevation Data (DEM) and an orthographic image (DOM) within the same field angle of a forward-looking multi-mode camera and 10 kilometers in the forward-looking direction of the airplane at the current position are retrieved, the current longitude, latitude and altitude information and pitching, rolling and yawing information are converted into an earth-centered earth-fixed (ECEF) coordinate system, and a synthetic visual map picture under a virtual camera is calculated by combining relative rotation translation parameters between a multi-mode video sensor and a body coordinate system and internal parameters such as a principal point, a focal length and the like of the multi-mode video sensor.
Owing to errors in the navigation parameters and calibration data, the map picture generated by the virtual camera has a certain registration error with respect to the real-time imaging video frame. Treating the map picture and the real-time video frame as approximately related by an affine transformation, the affine parameters between them are taken as the optimization variables, the normalized total gradient is taken as the error energy built from those parameters, and iterative optimization yields accurate affine transformation parameters. Because the map picture and the multi-modal video frame are already coarsely registered, the iterative optimization converges quickly and outputs accurate sub-pixel registration parameters.
With the perspective transformation model and the fine registration model, the pixel position of any pixel of the multi-modal video frame within the map picture can be located. To avoid missing data ("holes"), the corrected map picture is generated by inverse-mapping interpolation: the geometric transformation is applied to each pixel position on the multi-modal video, the corresponding position in the map picture is queried, and bilinear interpolation over the neighboring pixels yields the map-picture pixel value. The precisely registered synthetic-vision virtual map picture is thus obtained by inverse mapping.
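A sketch of this inverse-mapping step: for every pixel of the video frame, the registration affine gives the sampling position in the map picture, and cv2.remap performs the bilinear interpolation. Here A is assumed to map video-frame pixel coordinates to map-picture coordinates (the inverse of the forward warp):

```python
import cv2
import numpy as np

def backward_warp(map_picture, A, out_shape):
    """Generate the corrected map picture by sampling `map_picture`
    at affine-transformed coordinates (bilinear, no holes)."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    src_x = A[0, 0] * xs + A[0, 1] * ys + A[0, 2]
    src_y = A[1, 0] * xs + A[1, 1] * ys + A[1, 2]
    return cv2.remap(map_picture, src_x, src_y, cv2.INTER_LINEAR)
```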
In the image-graphics fusion part, HSV chrominance, saturation and luminance decomposition is applied to the registered synthetic-vision virtual map picture, and the single-channel map-picture luminance image is extracted for fusion with the single-channel multi-modal image. Both images are traversed with a Laplacian operator to obtain rough saliency images. Using the map-picture luminance image as the guide image, its saliency image is guided-filtered to output a smoothed map-picture luminance saliency image; likewise, using the multi-modal video frame as the guide image, its saliency image is guided-filtered to output a smoothed multi-modal saliency image. The two smoothed saliency images are then normalized into weights, pixels with larger values receiving larger weights. The map-picture luminance image and the multi-modal video frame image are weighted-averaged pixel by pixel with these weights to obtain the fused luminance image. Finally, the fused luminance image is merged with the map-picture chrominance and saturation images, and the fused multi-modal video frame is output.
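A sketch of this weighting scheme; cv2.ximgproc.guidedFilter comes from the opencv-contrib package, and the radius and eps values are illustrative placeholders rather than the patent's (unspecified) settings:

```python
import cv2
import numpy as np

def fuse_luminance(map_v, mm, radius=8, eps=1e-2):
    """Saliency-weighted fusion of the map-picture luminance and the
    multi-modal frame: Laplacian magnitude -> guided filtering -> weights."""
    v = map_v.astype(np.float32)
    m = mm.astype(np.float32)
    sal_v = np.abs(cv2.Laplacian(v, cv2.CV_32F))  # rough saliency images
    sal_m = np.abs(cv2.Laplacian(m, cv2.CV_32F))
    # smooth each saliency image, guided by its own source image
    w_v = cv2.ximgproc.guidedFilter(v, sal_v, radius, eps)
    w_m = cv2.ximgproc.guidedFilter(m, sal_m, radius, eps)
    w_v, w_m = np.maximum(w_v, 0), np.maximum(w_m, 0)
    total = w_v + w_m + 1e-6                      # per-pixel weight normalization
    fused = (w_v * v + w_m * m) / total
    return np.clip(fused, 0, 255).astype(np.uint8)
```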
After the image fusion is completed, the three-dimensional obstacle graphics undergo geometric correction and fusion. As shown in Fig. 2, the coordinates of each vertex of the virtual obstacle's three-dimensional graphic are taken, projected with the camera's internal and external parameters, and then transformed by the fine-registration affine transformation computed in the previous step, yielding the correct display pixel position of the obstacle graphic. The longitude and latitude corresponding to that pixel position in the map picture are then queried; the longitude and latitude of every vertex of the obstacle graphic are corrected accordingly while the height values are kept unchanged, and the corrected three-dimensional obstacle model is output. The image-fused multi-modal video and the corrected obstacle model are then sent together to the OpenGL state machine for unified graphics rendering, and the final virtual-real image-graphics fusion picture of the comprehensive vision system is output.
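A sketch of this correction loop under the same assumptions as the earlier snippets; geodetic_to_ecef and project_pinhole are the functions sketched above, while pixel_to_lonlat is a hypothetical lookup returning the longitude and latitude georeferenced to a pixel of the map picture:

```python
import numpy as np

def correct_obstacle_vertices(verts_llh, K, R, t, A, pixel_to_lonlat):
    """Geometric correction of obstacle vertices (Fig. 2): project each vertex,
    apply the fine-registration affine, then re-query lon/lat at that pixel."""
    corrected = []
    for lon, lat, height in verts_llh:
        X = geodetic_to_ecef(lat, lon, height)           # sketched earlier
        u, v = project_pinhole(K, R, t, X[None, :])[0]   # pinhole projection
        u2 = A[0, 0] * u + A[0, 1] * v + A[0, 2]         # registration affine
        v2 = A[1, 0] * u + A[1, 1] * v + A[1, 2]
        new_lon, new_lat = pixel_to_lonlat(u2, v2)       # hypothetical lookup
        corrected.append((new_lon, new_lat, height))     # height unchanged
    return np.array(corrected)
```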

Claims (10)

1. A virtual-real fusion display method of an airborne comprehensive vision system is characterized by comprising the following steps:
1) acquiring the current airborne positioning information and the corresponding map elevation data and orthoimage, and combining flight attitude information to obtain a coarse-registration map picture consistent with the current frame of the multi-modal video sensor;
2) taking the coarse-registration map picture as a floating image f and the current multi-modal video frame as a reference image f_R, and optimizing an established error energy function on the affine transformation between the floating image and the reference image to obtain a corrected map picture with sub-pixel registration accuracy;
3) performing HSV color space decomposition on the corrected map picture to obtain the decomposed chrominance, saturation and luminance components, and performing image fusion on the luminance component and the multi-modal video;
4) merging the fused luminance component with the chrominance and saturation components of the color video frame, and outputting the fusion result;
5) superimposing the established obstacle model on the image picture fused in step 4), and rendering and outputting a virtual-real fused comprehensive vision picture.
2. The virtual-real fusion display method of the airborne comprehensive vision system according to claim 1, wherein step 1) is specifically: querying the synthetic-vision three-dimensional digital map database based on the airborne positioning information to obtain the map elevation data near the airport and the map orthoimage within a set range in the aircraft's forward-looking direction, and, combining the flight attitude information with the sensor internal parameters obtained by calibrating the multi-modal video sensor, applying a perspective projection transformation to the map picture to obtain a coarse-registration map picture consistent with the current frame of the multi-modal video sensor.
3. The virtual-real fusion display method of the airborne comprehensive vision system according to claim 1, wherein in step 1) the coarse-registration map picture is drawn according to the pinhole imaging principle of computer vision.
4. The virtual-real fusion display method of the airborne comprehensive vision system according to claim 1, wherein step 2) is specifically: adopting the normalized total gradient (NTG) as the error energy function of the affine transformation between the floating image and the reference image, and solving it by an iterative method, the affine transformation parameters that minimize the error energy function giving the geometric transformation between the floating image and the reference image; and finally geometrically transforming the floating image to obtain a corrected map picture with sub-pixel registration accuracy.
5. The virtual-real fusion display method of the airborne comprehensive vision system according to claim 1, wherein in step 3) the luminance component is image-fused with the multi-modal video by a saliency-based fusion method: applying a Laplacian operator pixel by pixel to the luminance component image of the color video frame and to the multi-modal video frame to obtain initial saliency images; guided-filtering the initial saliency images to output smoothed saliency images; and, taking the saliency values as weights, weighted-averaging the luminance component of the color video frame and the multi-modal video frame pixel by pixel and outputting the fused luminance component.
6. The virtual-real fusion display method of the airborne comprehensive vision system according to claim 1, wherein the obstacle model in step 5) is obtained by recalculating the positions of the obstacle points, lines and faces and the texture coordinates in three-dimensional space according to the geometric transformation relationship between the floating image and the reference image.
7. The virtual-real fusion display method of the airborne comprehensive vision system according to claim 1, wherein the airborne positioning information comprises longitude, latitude and altitude, and the flight attitude information comprises pitch, roll and yaw data.
8. A virtual-real fusion display system of an airborne comprehensive vision system, characterized by comprising:
a coarse registration module for acquiring the current airborne positioning information and the corresponding map elevation data and orthoimage, and combining flight attitude information to obtain a coarse-registration map picture consistent with the current frame of the multi-modal video sensor;
a fine registration module for taking the coarse-registration map picture as a floating image f and the current multi-modal video frame as a reference image f_R, and optimizing an established error energy function on the affine transformation between the floating image and the reference image to obtain a corrected map picture with sub-pixel registration accuracy;
a luminance component fusion module for performing HSV color space decomposition on the corrected map picture to obtain the decomposed chrominance, saturation and luminance components, and image-fusing the luminance component with the multi-modal video;
an HSV component merging module for merging the fused luminance component with the chrominance and saturation components of the color video frame and outputting the fusion result;
and a rendering output module for superimposing the established obstacle model on the fused image picture, and rendering and outputting a virtual-real fused comprehensive vision picture.
9. The virtual-real fusion display system of the airborne comprehensive vision system according to claim 8, wherein the obstacle model is obtained by recalculating the positions of the obstacle points, lines and faces and the texture coordinates in three-dimensional space from the corrected map picture obtained by the fine registration module.
10. An airborne device comprising a processor and a program memory, wherein the program stored in the program memory, when loaded by the processor, performs the virtual-real fusion display method of the airborne comprehensive vision system according to claim 1.
CN202010001858.5A 2020-01-02 2020-01-02 Virtual-real fusion display method and system for airborne comprehensive vision system Active CN111145362B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010001858.5A CN111145362B (en) 2020-01-02 2020-01-02 Virtual-real fusion display method and system for airborne comprehensive vision system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010001858.5A CN111145362B (en) 2020-01-02 2020-01-02 Virtual-real fusion display method and system for airborne comprehensive vision system

Publications (2)

Publication Number Publication Date
CN111145362A (en) 2020-05-12
CN111145362B CN111145362B (en) 2023-05-09

Family

ID=70523266

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010001858.5A Active CN111145362B (en) 2020-01-02 2020-01-02 Virtual-real fusion display method and system for airborne comprehensive vision system

Country Status (1)

Country Link
CN (1) CN111145362B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226830A (en) * 2013-04-25 2013-07-31 北京大学 Automatic matching correction method of video texture projection in three-dimensional virtual-real fusion environment
CN103455982A (en) * 2013-08-31 2013-12-18 四川川大智胜软件股份有限公司 Airport scene monitoring vision enhancing method based on virtual-real fusion
CN104469155A (en) * 2014-12-04 2015-03-25 中国航空工业集团公司第六三一研究所 On-board figure and image virtual-real superposition method
CN105139451A (en) * 2015-08-10 2015-12-09 中国商用飞机有限责任公司北京民用飞机技术研究中心 HUD (head-up display) based synthetic vision guiding display system
CN109544696A (en) * 2018-12-04 2019-03-29 中国航空工业集团公司西安航空计算技术研究所 A kind of airborne enhancing Synthetic vision actual situation Image Precision Registration of view-based access control model inertia combination
CN111192229A (en) * 2020-01-02 2020-05-22 中国航空工业集团公司西安航空计算技术研究所 Airborne multi-mode video image enhancement display method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YUE CHENG et al.: "A prototype of Enhanced Synthetic Vision System using short-wave infrared" *
刘长江; 张轶; 杨红雨: "Horizon detection in aerial images under low visibility based on virtual-real fusion" *
张仟新; 张钰鹏: "A flight vision system based on augmented reality technology" *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112173141A (en) * 2020-09-25 2021-01-05 中国直升机设计研究所 Helicopter synthetic view display method
CN112173141B (en) * 2020-09-25 2023-04-25 中国直升机设计研究所 Helicopter synthesized view display method
CN112381935A (en) * 2020-09-29 2021-02-19 西安应用光学研究所 Synthetic vision generation and multi-element fusion device
CN112419211A (en) * 2020-09-29 2021-02-26 西安应用光学研究所 Night vision system image enhancement method based on synthetic vision
CN112419211B (en) * 2020-09-29 2024-02-02 西安应用光学研究所 Night vision system image enhancement method based on synthetic vision
CN113703059A (en) * 2021-09-02 2021-11-26 中船海洋探测技术研究院有限公司 Remote magnetic detection method for water ferromagnetic target cluster
CN113703059B (en) * 2021-09-02 2023-11-17 中船海洋探测技术研究院有限公司 Remote magnetic detection method for water ferromagnetic target clusters
CN114820739A (en) * 2022-07-01 2022-07-29 浙江工商大学 Multispectral camera-oriented image rapid registration method and device
CN114820739B (en) * 2022-07-01 2022-10-11 浙江工商大学 Multispectral camera-oriented image rapid registration method and device

Also Published As

Publication number Publication date
CN111145362B (en) 2023-05-09

Similar Documents

Publication Publication Date Title
CN111145362B (en) Virtual-real fusion display method and system for airborne comprehensive vision system
CN110148169B (en) Vehicle target three-dimensional information acquisition method based on PTZ (pan/tilt/zoom) pan-tilt camera
CN107316325B (en) Airborne laser point cloud and image registration fusion method based on image registration
CN115439424B (en) Intelligent detection method for aerial video images of unmanned aerial vehicle
CN107194989B (en) Traffic accident scene three-dimensional reconstruction system and method based on unmanned aerial vehicle aircraft aerial photography
EP2057585B1 (en) Mosaic oblique images and methods of making and using same
CN110443898A (en) A kind of AR intelligent terminal target identification system and method based on deep learning
CN111448591A (en) System and method for locating a vehicle in poor lighting conditions
CN107527328B (en) Unmanned aerial vehicle image geometric processing method considering precision and speed
CN110926474A (en) Satellite/vision/laser combined urban canyon environment UAV positioning and navigation method
EP1453010A2 (en) Systems and methods for providing enhanced vision imaging with decreased latency
CN109255808B (en) Building texture extraction method and device based on oblique images
JP2003519421A (en) Method for processing passive volume image of arbitrary aspect
EP2686827A1 (en) 3d streets
CN113222820B (en) Pose information-assisted aerial remote sensing image stitching method
CN113240813B (en) Three-dimensional point cloud information determining method and device
CN114339185A (en) Image colorization for vehicle camera images
CN112465849B (en) Registration method for laser point cloud and sequence image of unmanned aerial vehicle
CN112330582A (en) Unmanned aerial vehicle image and satellite remote sensing image fusion algorithm
CN113296133B (en) Device and method for realizing position calibration based on binocular vision measurement and high-precision positioning fusion technology
CN111192229B (en) Airborne multi-mode video picture enhancement display method and system
CN207068060U (en) The scene of a traffic accident three-dimensional reconstruction system taken photo by plane based on unmanned plane aircraft
KR20130034528A (en) Position measuring method for street facility
CN116385504A (en) Inspection and ranging method based on unmanned aerial vehicle acquisition point cloud and image registration
CN117611438B (en) Monocular image-based reconstruction method from 2D lane line to 3D lane line

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant