CN114764850A - Virtual-real fusion simulation system of semi-physical simulation cabin based on VST technology - Google Patents

Virtual-real fusion simulation system of semi-physical simulation cabin based on VST technology

Info

Publication number
CN114764850A
Authority
CN
China
Prior art keywords
image
vst
virtual
simulation
semi
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210311046.XA
Other languages
Chinese (zh)
Inventor
王志乐
严小天
周秀芝
孙忠云
郭秋华
崔益鹏
刘鲁峰
王惠青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Junyi Technology Information Co ltd
Qingdao Campus of Naval Aviation University of PLA
Original Assignee
Nanjing Junyi Technology Information Co ltd
Qingdao Campus of Naval Aviation University of PLA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Junyi Technology Information Co ltd, Qingdao Campus of Naval Aviation University of PLA filed Critical Nanjing Junyi Technology Information Co ltd
Priority to CN202210311046.XA priority Critical patent/CN114764850A/en
Publication of CN114764850A publication Critical patent/CN114764850A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/251 Fusion techniques of input or preprocessed data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Computer Hardware Design (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a virtual-real fusion simulation system of a semi-physical simulation cabin based on VST technology, which comprises: an aircraft simulation system, consisting of an avionics system and a flight control subsystem; the avionics system is used for generating avionics data and sending them to a VST vision system; the flight control subsystem is used for generating flight control data and sending them to the VST vision system; a camera, used for acquiring video stream images of the semi-physical simulation cabin and the on-site green-screen scene and sending them to the VST vision system; the VST vision system, which receives the data sent by the aircraft simulation system and the camera and fuses the semi-physical simulation cockpit image with the virtual scene image to generate a virtual-real fused image; and a head-mounted display device, which receives the virtual-real fused image and displays it. By fusing the video stream images of the semi-physical simulation cabin and the on-site green-screen scene with the virtual scene image, the simulation system generates a virtual-real fused image that avoids problems of the existing optical see-through technology, such as virtual objects appearing semi-transparent.

Description

Virtual-real fusion simulation system of semi-physical simulation cabin based on VST technology
Technical Field
The invention belongs to the field of simulator visual simulation, and particularly relates to a virtual-real fusion simulation system of a semi-physical simulation cabin based on VST technology.
Background
Existing semi-physical virtual-real fusion simulation systems adopt optical see-through technology. Because the optical lens group in front of an optical see-through head-mounted display lets light from the real and virtual environments pass through simultaneously, a computer-generated virtual object cannot completely occlude objects in the real environment. The registered virtual object therefore appears semi-transparent, which destroys the realism of the fused real and virtual scenes and corrupts the depth cues that occlusion provides to the viewer.
Optical systems typically exhibit distortion whose magnitude is proportional to the square of the radial distance of the image point from the optical axis. To develop an augmented reality system with a wide field of view, the distortion of the optical system must be considered; a video see-through head-mounted display can correct this distortion by means of image processing.
With an optical see-through head-mounted display, the user sees the real-world scene in real time, while the virtual scene is overlaid onto it only after a series of delays. This causes dynamic registration errors in the system, and the augmented information appears to 'wander' within the real scene. A video see-through head-mounted display, by contrast, can match the system time consumed by capturing the real scene with the system delay of rendering the virtual scene, reducing dynamic registration error and improving registration accuracy.
Illumination intensity in nature varies over a wide range, and the intensity range of a display cannot match it. The drawback of an optical see-through head-mounted display is that the light intensities of the real scene and the virtual scene cannot be matched optimally. A video see-through head-mounted display does not have this problem, since every scene, whether real or virtual, is rendered on the head-mounted display itself.
Disclosure of Invention
The invention provides a virtual-real fusion simulation system of a semi-physical simulation cabin based on VST (Video See-Through) technology, aimed at the technical problems of semi-physical simulation systems that adopt optical see-through technology, such as inflexible scene composition, distortion, and the difficulty of matching light intensity, and able to solve these problems.
To achieve the purpose of the invention, the following technical scheme is adopted:
a virtual-real fusion simulation system of a semi-physical simulation cabin based on VST technology comprises:
the aircraft simulation system comprises an avionics system and a flight control subsystem;
the avionics system is used for generating avionics data and sending the avionics data to the VST vision system;
the flight control subsystem is used for generating flight control data and sending the flight control data to the VST vision system;
the camera is used for acquiring video stream images of the semi-physical simulation cabin and the on-site green-screen environment and sending the video stream images to the VST vision system;
the VST vision system receives the data sent by the aircraft simulation system and the camera, processes the video stream images of the semi-physical simulation cockpit and the on-site green-screen environment sent by the camera to obtain a semi-physical simulation cockpit image, and fuses the semi-physical simulation cockpit image with a virtual scene image to generate a virtual-real fused image;
and the head-mounted display device receives the virtual-real fused image and displays it.
In some embodiments of the present invention, the avionics system sends the visual data to be displayed on the instrument panel to the VST vision system through shared memory, and the VST vision system loads the visual data at the position corresponding to the instrument panel in the virtual scene image;
the visual data comprise avionics display images driven by the simulation data output by the avionics system, including the frames displayed in real time by the MFD and the HUD.
In some embodiments of the present invention, the avionics subsystem and the flight control subsystem send control command data to the VST vision system through UDP network communication; the VST vision system receives and executes the control commands and, for commands that can change a visual state, updates the corresponding position in the virtual cockpit image.
In some embodiments of the present invention, the VST vision system further comprises a UDP network communication unit configured to send the operation data generated by the execution of the control command to the aircraft simulation system;
and the aircraft simulation system drives an instrument panel of the semi-physical simulation cockpit to display and output according to the operation data.
In some embodiments of the invention, the control instructions capable of changing the visual state comprise illuminating or turning off an indicator light, or changing a display color of an indicator light.
In some embodiments of the present invention, the VST vision system receives the video stream images of the semi-physical simulation cabin and the on-site green-screen scene sent by the camera and performs the following processing:
reading the image and converting its format into HSV data;
and sequentially performing segmentation, morphological transformation, image erosion and Gaussian blur on the HSV-format image to obtain the semi-physical simulation cockpit image.
In some embodiments of the present invention, before the VST vision system processes the video stream images of the semi-physical cabin and the on-site green-screen scene, a matting step is performed to obtain an image containing only the physical cabin.
In some embodiments of the present invention, after the VST vision system fuses the preprocessed real-cabin image with the virtual scene image, it further converts the fused image into a UTexture2D resource file, creates a dynamic material, and loads the UTexture2D resource file to generate a panoramic image.
In some embodiments of the present invention, the camera is a binocular camera.
In some embodiments of the invention, the camera is fixed at the front end of the head-mounted display device.
Compared with the prior art, the invention has the following advantages and positive effects:
the virtual-real fusion simulation system for the semi-physical simulation cabin is based on a video perspective technology, video stream images of the semi-physical aircraft cabin and a field green screen scene are collected through a camera, the video stream images of the physical cabin are fused with virtual scene images to generate virtual-real fused images, light rays in the natural world are prevented from being directly transmitted, and the problems that a virtual object presents a semitransparent state and the like in the existing optical perspective technology can be solved.
Video see-through technology also avoids the problem that the wide variation range of natural illumination intensity cannot be matched by the intensity range of a display: real and virtual scenes alike are converted into digital images, fused, and then displayed on the head-mounted display.
Other features and advantages of the present invention will become more apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic block diagram of an embodiment of a virtual-real fusion simulation system of a semi-physical simulation cabin based on VST technology;
fig. 2 is a block diagram illustrating an image processing principle of an embodiment of a virtual-real fusion simulation system for a semi-physical simulation cabin based on VST technology according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that in the description of the present invention, the terms of direction or positional relationship indicated by the terms "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, which are merely for convenience of description, and do not indicate or imply that the device or element must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, be fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; either directly or indirectly through intervening media, either internally or in any other relationship. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
Example one
The principle of a VST (Video See-Through) vision system is as follows: two miniature CCD cameras mounted on the user's head-mounted display capture images of the external real environment; a computer superimposes the information or image signals to be added onto the camera video signals through computational processing; the computer-generated virtual scene is fused with the real scene by a video signal fusion device; and the result is finally presented to the user through a display system similar to an immersive head-mounted display.
This embodiment provides a virtual-real fusion simulation system of a semi-physical simulation cabin based on VST (Video See-Through) technology, which applies VST technology to the virtual-real fusion of the simulation cabin, fusing the real cockpit with a virtual scene for flight training. The real cockpit is still captured by image acquisition; the real cockpit image is processed, fused with the virtual scene image, and finally displayed through the head-mounted display device.
As shown in fig. 1, the virtual-real fusion simulation system of the semi-physical simulation cabin based on VST technology of this embodiment comprises an aircraft simulation system, a camera, and a VST vision system.
The aircraft simulation system comprises an avionics system and a flight control subsystem.
The avionics system is used for generating avionics data and sending them to the VST vision system. The flight control subsystem is used for generating flight control data and sending them to the VST vision system. The camera is used for collecting video stream images of the semi-physical simulation cabin and the on-site green-screen scene and sending them to the VST vision system. The VST vision system receives the data sent by the aircraft simulation system and the camera, processes the video stream images of the semi-physical simulation cockpit and the on-site green-screen environment to obtain a semi-physical simulation cockpit image, and fuses it with the virtual scene image to generate a virtual-real fused image. The head-mounted display device receives the virtual-real fused image and displays it.
This virtual-real fusion simulation system for the semi-physical simulation cabin is based on video see-through technology: the camera collects video stream images of the semi-physical simulation cabin and the on-site green-screen scene, and the image of the semi-physical simulation cabin is fused with the virtual scene image to generate a virtual-real fused image. Since natural light is not transmitted directly to the eye, the problems of the existing optical see-through technology, such as virtual objects appearing semi-transparent, are avoided. Video see-through also avoids the mismatch between the wide variation range of natural illumination intensity and the limited intensity range of a display: real and virtual scenes alike are converted into digital images, fused, and then displayed on the head-mounted display.
Because the camera collects video stream images of both the semi-physical simulation cabin and the on-site green-screen scene, and to keep the background information around the physical cabin out of the simulated image, the VST vision system performs a matting step before processing the video stream of the semi-physical simulation cabin, obtaining an image that contains only the semi-physical simulation cabin.
To make matting of the video stream images of the semi-physical simulation cockpit convenient, the scheme also includes site design: according to the overall requirements of the aircraft simulation training system and the technical characteristics of VST (video see-through), and to simplify the construction of the virtual scene, the ground, ceiling, and surrounding wall surfaces of the real site used by the aircraft simulation training system are painted green, completing the construction of the green-screen environment. With the background of the collected images set to green, the video stream image of the semi-physical simulation cabin can be separated from the background image simply and quickly.
Because the background color of the instrument panel of the semi-physical simulation cabin is close to the green of the constructed green screen, the instrument panel in the cabin image is easily matted out by mistake. In this scheme, therefore, the visual data to be displayed on the instrument panel is sent separately to the VST vision system by the avionics subsystem; the VST vision system generates the corresponding instrument image directly from the visual data and loads it at the position corresponding to the instrument panel in the virtual scene image, so that the fused image is a complete simulated cockpit image. This solves the technical problem of instrument images being matted out of the real cabin image.
The simulation system of this embodiment employs a video see-through head-mounted display, which can match the system time consumed by capturing the real scene with the system delay of rendering the virtual scene, reducing dynamic registration errors and improving registration accuracy.
In some embodiments of the present invention, the avionics system sends the visual data to be displayed on the instrument panel to the VST vision system through shared memory, and the VST vision system loads the visual data for display at the position corresponding to the instrument panel in the virtual scene image.
The visual data here are the avionics display images driven by the simulation data output by the avionics system (such as navigation, radar, weather, and atmospheric data), including the real-time MFD and HUD frames; these image data are loaded in real time at the instrument panel position in the VST virtual cockpit image to form avionics displays that update in real time.
Since both airborne and simulated avionics display screens are developed with OpenGL, their code cannot be used directly in the VST visual environment, so the data can only be sent via the network or via shared memory. Sending avionics visual data over the network would require either parsing the simulation driving data and rendering the display images in real time, which would force VST scene developers to master specialized knowledge of the avionics simulation data (and airborne avionics simulation data generally cannot be handed over to them for confidentiality reasons), or transmitting the real-time avionics display images directly over the network; but these display image data are large and must be refreshed at millisecond rates, so network transmission efficiency cannot meet the real-time requirement.
The shared-memory approach completely separates the avionics driving data from the VST scene developers, ensuring the security of the driving data, reducing the developers' workload, and providing real-time performance far beyond that of network transmission.
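To make the hand-off concrete, the following is a minimal sketch of such a shared-memory image channel, assuming a Windows host (the patent does not name the platform); the mapping name, frame size, and function names are illustrative assumptions, and a production system would add synchronization (for example a named mutex or a sequence counter) around the copies.

    // Minimal sketch, not the patent's implementation: the avionics process
    // publishes the latest rendered MFD/HUD frame into a named file mapping,
    // and the VST vision process maps the same name and reads it.
    #include <windows.h>
    #include <cstdint>
    #include <cstring>

    constexpr const wchar_t* kMapName    = L"Local\\AvionicsDisplayFrame"; // assumed name
    constexpr DWORD          kFrameBytes = 1024 * 768 * 4;                 // assumed BGRA frame

    // Writer side (avionics system): overwrite the mapping with the newest frame.
    void PublishFrame(const uint8_t* frame)
    {
        static HANDLE hMap = CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr,
                                                PAGE_READWRITE, 0, kFrameBytes, kMapName);
        static void* view = MapViewOfFile(hMap, FILE_MAP_WRITE, 0, 0, kFrameBytes);
        std::memcpy(view, frame, kFrameBytes);   // unsynchronized for brevity
    }

    // Reader side (VST vision system): fetch the current instrument image.
    void FetchFrame(uint8_t* out)
    {
        static HANDLE hMap = OpenFileMappingW(FILE_MAP_READ, FALSE, kMapName);
        static void* view = MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, kFrameBytes);
        std::memcpy(out, view, kFrameBytes);
    }

Because both processes touch the same physical pages, the reader always sees the most recently written frame without any per-frame network cost, which is what gives shared memory its latency advantage over network transmission.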
In some embodiments of the present invention, the avionics subsystem and the flight control subsystem send control command data to the VST vision system through UDP network communication; the VST vision system receives and executes the control commands and, for commands that can change a visual state, updates the corresponding position in the virtual cockpit image.
Because the control command data are integer data, a command is generally within a few dozen bytes, so the data volume is extremely small and UDP network communication can guarantee real-time delivery. If a network fault occurs, restoring network communication promptly restores the UDP command channel, keeping the VST visual state consistent with the control command state.
The control command data include the manipulation data transmitted by the physical cockpit. The avionics system computes on the manipulation data; the resulting HUD and MFD data are the visual data displayed on the instrument panel and are transmitted to the virtual cockpit of the VST vision system through shared memory.
The HUD is a head-up display system: an optical instrument display, centered on the pilot, that supports blind operation and head-up viewing.
The MFD refers to a multi-function display.
UDP (User Datagram Protocol) is a connectionless transport protocol in the Internet protocol suite.
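As an illustration of how small such a command datagram is, here is a minimal sketch of the sender side in C++ using Winsock, again assuming a Windows host; the port, host address, and two-integer command layout are assumptions, since the patent only states that commands are small integer data.

    #include <winsock2.h>
    #include <cstdint>
    #pragma comment(lib, "ws2_32.lib")

    // Assumed wire format: two 32-bit integers, a command ID and a parameter.
    struct ControlCommand { int32_t id; int32_t param; };

    // Send one control command to the VST vision system (fire-and-forget).
    void SendCommand(const char* vstHost, unsigned short port, ControlCommand cmd)
    {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);
        SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

        sockaddr_in dst = {};
        dst.sin_family = AF_INET;
        dst.sin_port = htons(port);
        dst.sin_addr.s_addr = inet_addr(vstHost);

        // A dropped datagram is simply superseded by the next command,
        // which matches the recovery behaviour argued for above.
        sendto(s, reinterpret_cast<const char*>(&cmd), sizeof(cmd), 0,
               reinterpret_cast<sockaddr*>(&dst), sizeof(dst));
        closesocket(s);
        WSACleanup();
    }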
In some embodiments of the invention, the VST vision system further sends the operation data generated by executing the control commands to the aircraft simulation system through UDP network communication.
The aircraft simulation system drives the instrument panel in the semi-physical simulation cockpit to display output according to the operation data, ensuring consistency between the semi-physical simulation cockpit and the virtual aircraft cockpit.
In addition to the instrument panel for displaying flight data, indicator lights are provided in the aircraft cabin to indicate different operating states. In some embodiments of the invention, the control commands capable of changing the visual state include lighting or turning off an indicator light, or changing the display color of an indicator light. The VST vision system receives and executes an indicator-light command, generates the new state at the corresponding position in the virtual cockpit image, and displays it.
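A possible shape for such a command handler is sketched below; the command codes and the two helper functions are hypothetical, invented for illustration, since the patent does not specify the instruction encoding.

    // Hypothetical helpers: in the real system these would update indicator
    // meshes/materials in the virtual cockpit; they are not from the patent.
    void SetIndicatorLight(const char* name, bool on);
    void SetIndicatorColor(const char* name, const char* color);

    // Illustrative command codes; the patent only says commands are small
    // integer data, so these values are invented for the sketch.
    enum class LampCmd : int { GearDownOn = 101, GearDownOff = 102, MasterWarnRed = 201 };

    void ApplyLampCommand(int code)
    {
        switch (static_cast<LampCmd>(code))
        {
        case LampCmd::GearDownOn:    SetIndicatorLight("GEAR_DOWN", true);    break;
        case LampCmd::GearDownOff:   SetIndicatorLight("GEAR_DOWN", false);   break;
        case LampCmd::MasterWarnRed: SetIndicatorColor("MASTER_WARN", "red"); break;
        default: break;  // unknown codes are ignored; the visual state is unchanged
        }
    }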
Optical systems typically exhibit distortion whose magnitude is proportional to the square of the radial distance of the image point from the optical axis. To develop an augmented reality system with a wide field of view, the distortion of the optical system must be considered.
In some embodiments of the present invention, as shown in fig. 2, the VST vision system receives the video stream images of the semi-physical simulation cabin and the on-site green-screen scene sent by the camera and performs the following processing:
reading the image and converting its format into HSV data;
and sequentially performing segmentation, morphological transformation, image erosion and Gaussian blur on the HSV-format image to obtain the semi-physical simulation cockpit image.
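This processing chain maps directly onto standard OpenCV calls; the following is a minimal sketch under the assumption that the green-screen segmentation uses hue thresholding, with illustrative threshold and kernel values that the patent does not specify.

    #include <opencv2/opencv.hpp>

    // Returns a single-channel matte: 255 = cockpit pixel, 0 = green background.
    cv::Mat ExtractCockpitMask(const cv::Mat& frameBGR)
    {
        cv::Mat hsv, greenMask, cockpitMask;

        // Format conversion: BGR capture -> HSV.
        cv::cvtColor(frameBGR, hsv, cv::COLOR_BGR2HSV);

        // Segmentation: threshold the green-screen hue range.
        // These bounds are illustrative, not values from the patent.
        cv::inRange(hsv, cv::Scalar(35, 43, 46), cv::Scalar(77, 255, 255), greenMask);
        cv::bitwise_not(greenMask, cockpitMask);   // cockpit = everything not green

        // Morphological transformation (opening) to remove speckle noise,
        // then erosion to tighten the matte edge.
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
        cv::morphologyEx(cockpitMask, cockpitMask, cv::MORPH_OPEN, kernel);
        cv::erode(cockpitMask, cockpitMask, kernel);

        // Gaussian blur softens the boundary before compositing.
        cv::GaussianBlur(cockpitMask, cockpitMask, cv::Size(5, 5), 0);
        return cockpitMask;
    }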
The video see-through head-mounted display can correct this distortion of the optical system by image-processing methods.
In some embodiments of the invention, the video see-through function of the VST vision system is achieved by fusing images of the real world and the virtual world with a matting algorithm. The basic implementation principle in Unreal Engine 4 is shown in fig. 2: after the VST vision system fuses the preprocessed real-cabin image with the virtual scene image, it converts the fused image into a UTexture2D resource file, creates a dynamic material, and loads the UTexture2D resource file to generate a panoramic image.
In some embodiments of the present invention, the camera is a binocular camera.
The main function of a virtual reality head-mounted display is to seal the user's vision and hearing from the outside world and guide the user to feel present in a virtual environment. Its display principle is that the left and right screens display the images for the left and right eyes respectively; after the eyes acquire this differing information, the brain produces a stereoscopic impression.
The human eye easily judges the distance to an object, but with one eye closed this ability is greatly reduced. A binocular camera imitates the human eyes on bionic principles: two calibrated cameras with synchronized exposure acquire image information of the real scene, and the depth of each pixel of the acquired 2D images is then computed, obtaining a depth image of the real space.
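As a sketch of this principle, the fragment below recovers per-pixel depth from a calibrated, rectified stereo pair with OpenCV block matching; the matcher parameters are illustrative, and pixels with non-positive disparity have no valid depth.

    #include <opencv2/opencv.hpp>

    // leftGray/rightGray: rectified 8-bit grayscale images from the calibrated
    // binocular camera; focalPx in pixels, baselineM in metres.
    cv::Mat ComputeDepth(const cv::Mat& leftGray, const cv::Mat& rightGray,
                         double focalPx, double baselineM)
    {
        // Block matcher with 64 disparity levels and a 15x15 window (illustrative).
        cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 15);
        cv::Mat disp16;
        bm->compute(leftGray, rightGray, disp16);      // CV_16S, 4 fractional bits

        cv::Mat disparity, depth;
        disp16.convertTo(disparity, CV_32F, 1.0 / 16.0);

        // Triangulation: depth = focal * baseline / disparity.
        // Pixels with disparity <= 0 carry no valid depth.
        depth = focalPx * baselineM / disparity;
        return depth;
    }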
In some embodiments of the invention, the camera is fixed at the front end of the head-mounted display device. After the user puts on the head-mounted display device, the camera faces forward and collects images of the space in front, replacing the function of the human eyes and enhancing the user's sense of immersion.
In this embodiment, the design architecture of the software part of the simulator visual simulation platform based on VST technology and its video see-through function are implemented in Unreal Engine 4.
The basic implementation flow of the VST technique in Unreal Engine 4 is as follows:
(1) Create an empty C++ project based on the UE4 engine and create a Pawn blueprint class representing the player character.
(2) Enter the Pawn blueprint class and add a Camera component under its root component; this captures the scene in the virtual world for the user.
(3) Add a ChildActor component at an appropriate distance along the positive X axis of the Camera component of the Pawn blueprint class (i.e., directly in front of the Camera component); it is used to load the 3D UMG component.
(4) Create a camera instance in the UE4 engine based on OpenCV's VideoCapture class, and judge whether the binocular camera has started successfully with the isOpened function of VideoCapture.
(5) Read each frame of image data captured by the binocular camera in the UE4 engine with the read function of VideoCapture.
(6) Convert each captured frame into HSV-format image data with the cvtColor function in the UE4 engine.
(7) Perform image segmentation on the HSV-format image data with the inRange function in the UE4 engine; perform morphological transformation of the image data with the morphologyEx function; perform Gaussian blur of the image data with the GaussianBlur function.
(8) In the UE4 engine, fuse the image data with the Replace_and_band function.
(9) In the UE4 engine, convert the image data fused by the Replace_and_band function into UTexture2D-format image data, which is used to create a dynamic texture whose main function is to receive image data that changes in real time.
(10) In the UE4 engine, create a Widget blueprint class, open it, and add an Image component in the Canvas Panel component of its Designer interface.
(11) Create a dynamic material class in the UE4 engine; once a dynamic material instance is created, attach it to the Image component of the Widget blueprint class.
(12) In the UE4 engine, create an Actor blueprint class, open it, add a Widget component under its root node, and load the previously created Widget blueprint class into the relevant attributes of the Widget component, completing the production of the 3D UMG.
(13) Load the Actor blueprint class created in step (12) into the relevant attributes of the ChildActor component of the Pawn blueprint class, so that the 3D UMG is presented directly in front of the Camera component. When the program runs, the camera captures the green-screen-matted image presented on the 3D UMG in real time and transmits it to the VR head-mounted display, so the experiencer sees a VST panoramic view.
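The fragment below sketches steps (4), (5), (6) and (9) of this flow in UE4 C++ with OpenCV; the class and member names are assumptions (the UCLASS()/GENERATED_BODY() boilerplate is omitted for brevity), and it is an illustration of the flow rather than the patent's own code.

    #include "GameFramework/Actor.h"
    #include "Engine/Texture2D.h"
    #include <opencv2/opencv.hpp>

    // Illustrative Actor that owns the capture loop.
    class AVstCaptureActor : public AActor
    {
    public:
        void InitCapture();
        virtual void Tick(float DeltaSeconds) override;
    private:
        cv::VideoCapture Capture;
        UTexture2D* CameraTexture = nullptr;
        int32 Width = 1280, Height = 720;   // assumed camera resolution
    };

    void AVstCaptureActor::InitCapture()
    {
        Capture.open(0);                    // step (4): open camera device 0
        if (!Capture.isOpened()) return;    // start-up check via isOpened
        CameraTexture = UTexture2D::CreateTransient(Width, Height, PF_B8G8R8A8);
        CameraTexture->UpdateResource();
    }

    void AVstCaptureActor::Tick(float DeltaSeconds)
    {
        Super::Tick(DeltaSeconds);
        cv::Mat frame;
        if (!CameraTexture || !Capture.read(frame)) return;   // step (5): read a frame

        cv::resize(frame, frame, cv::Size(Width, Height));    // fit the texture
        cv::cvtColor(frame, frame, cv::COLOR_BGR2BGRA);       // match PF_B8G8R8A8

        // Step (9): copy the pixels into the top mip so the dynamic material
        // sampled by the 3D UMG Image component shows the new frame.
        FTexture2DMipMap& Mip = CameraTexture->PlatformData->Mips[0];
        void* Dest = Mip.BulkData.Lock(LOCK_READ_WRITE);
        FMemory::Memcpy(Dest, frame.data, frame.total() * frame.elemSize());
        Mip.BulkData.Unlock();
        CameraTexture->UpdateResource();
    }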
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (10)

1. A virtual-real fusion simulation system of a semi-physical simulation cabin based on VST technology is characterized by comprising:
the aircraft simulation system comprises an avionics system and a flight control subsystem;
the avionics system is used for generating avionics data and sending the avionics data to the VST vision system;
the flight control subsystem is used for generating flight control data and sending the flight control data to the VST vision system;
the camera is used for acquiring video stream images of a semi-physical simulation cabin and an on-site green-screen environment and sending the video stream images to the VST vision system;
the VST vision system receives the data sent by the aircraft simulation system and the camera, processes the video stream images of the semi-physical simulation cockpit and the on-site green-screen environment sent by the camera to obtain a semi-physical simulation cockpit image, and fuses the semi-physical simulation cockpit image with a virtual scene image to generate a virtual-real fused image;
and the head-mounted display device receives the virtual-real fused image and displays and outputs the virtual-real fused image.
2. The simulation system of claim 1, wherein the avionics system sends visual data to be displayed by an instrument panel to the VST vision system in a shared memory manner, and the VST vision system loads the visual data to a position corresponding to the instrument panel in a virtual scene image;
the visual data includes avionic display images driven by the simulation data output by the avionics system, including frames displayed in real time by the MFD and the HUD.
3. The simulation system of claim 1, wherein the avionics subsystem and the flight control subsystem send control command data to the VST vision system via UDP network communication, and the VST vision system receives and executes the control commands, changing, for control commands that can change a visual state, the corresponding position in the virtual cockpit image.
4. The simulation system of claim 3, wherein the VST vision system further comprises means for sending operational data generated by executing the control instructions to the aircraft simulation system via UDP network communication;
and the aircraft simulation system drives an instrument panel in the semi-physical simulation cockpit to display and output according to the operation data.
5. The simulation system of claim 3, wherein the control instructions capable of changing the visual state comprise illuminating or turning off an indicator light, or changing a display color of an indicator light.
6. The simulation system of claim 1, wherein the VST vision system receives video stream images of semi-physical cabin and live green screen scenes from the camera and performs the following processes:
reading an image, carrying out format conversion, and converting the image into data in an HSV format;
and sequentially carrying out segmentation, morphological transformation, image erosion and Gaussian blur processing on the HSV-format image to obtain a semi-physical simulation cockpit image.
7. The simulation system of claim 6, wherein the VST vision system further comprises a step of matting to obtain an image containing only the semi-physical cabin before processing the video stream images of the semi-physical cabin and the live green-screen scene.
8. The simulation system of claim 6, wherein the VST vision system, after fusing the preprocessed real-cabin image with the virtual scene image, further converts the fused image into a UTexture2D resource file, creates a dynamic material, and loads the UTexture2D resource file to generate a panoramic image.
9. The simulation system of any of claims 1-8, wherein the camera is a binocular camera.
10. The simulation system of any of claims 1-8, wherein the camera is fixed to a front end of the head mounted display device.
CN202210311046.XA 2022-03-28 2022-03-28 Virtual-real fusion simulation system of semi-physical simulation cabin based on VST technology Pending CN114764850A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210311046.XA CN114764850A (en) 2022-03-28 2022-03-28 Virtual-real fusion simulation system of semi-physical simulation cabin based on VST technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210311046.XA CN114764850A (en) 2022-03-28 2022-03-28 Virtual-real fusion simulation system of semi-physical simulation cabin based on VST technology

Publications (1)

Publication Number Publication Date
CN114764850A 2022-07-19

Family

ID=82365119

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210311046.XA Pending CN114764850A (en) 2022-03-28 2022-03-28 Virtual-real fusion simulation system of semi-physical simulation cabin based on VST technology

Country Status (1)

Country Link
CN (1) CN114764850A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117082225A (en) * 2023-10-12 2023-11-17 山东海量信息技术研究院 Virtual delay video generation method, device, equipment and storage medium
CN117082225B (en) * 2023-10-12 2024-02-09 山东海量信息技术研究院 Virtual delay video generation method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
WO2017221461A1 (en) System, etc., for creating mixed reality environment
WO2023093217A1 (en) Data labeling method and apparatus, and computer device, storage medium and program
JP7201869B1 (en) Generate new frames with rendered and unrendered content from the previous eye
CN107154197A (en) Immersion flight simulator
WO2020219744A1 (en) Perimeter estimation from posed monocular video
CN112416125A (en) VR head-mounted all-in-one machine
CN111275015A (en) Unmanned aerial vehicle-based power line inspection electric tower detection and identification method and system
CN107134194A (en) Immersion vehicle simulator
CN110062916A (en) For simulating the visual simulation system of the operation of moveable platform
CN113035010A (en) Virtual and real scene combined visual system and flight simulation device
CN112446939A (en) Three-dimensional model dynamic rendering method and device, electronic equipment and storage medium
US11579449B2 (en) Systems and methods for providing mixed-reality experiences under low light conditions
CN113031462A (en) Port machine inspection route planning system and method for unmanned aerial vehicle
CN114764850A (en) Virtual-real fusion simulation system of semi-physical simulation cabin based on VST technology
CN111476907A (en) Positioning and three-dimensional scene reconstruction device and method based on virtual reality technology
CN114283243A (en) Data processing method and device, computer equipment and storage medium
CN113132708A (en) Method and apparatus for acquiring three-dimensional scene image using fisheye camera, device and medium
CN117278734A (en) Rocket launching immersive viewing system
CN116205980A (en) Method and device for positioning and tracking virtual reality in mobile space
CN115496884A (en) Virtual and real cabin fusion method based on SRWorks video perspective technology
CN114463520A (en) Method and device for realizing Virtual Reality (VR) roaming
CN112985361A (en) Phase-control-free live-action three-dimensional modeling and surveying method and system based on unmanned aerial vehicle
Walko et al. Integration and use of an augmented reality display in a maritime helicopter simulator
CN116030228B (en) Method and device for displaying mr virtual picture based on web
Walko et al. Integration and use of an AR display in a maritime helicopter simulator

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination