CN116527975A - MR fusion display method, fusion system and civil aircraft cockpit fusion system - Google Patents

MR fusion display method, fusion system and civil aircraft cockpit fusion system

Info

Publication number
CN116527975A
CN116527975A (application CN202310331199.5A)
Authority
CN
China
Prior art keywords: virtual, camera, entity, scene, image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310331199.5A
Other languages
Chinese (zh)
Inventor
吕毅
薛阳
王大伟
武玉芬
许澍虹
杨志刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Commercial Aircraft Corp of China Ltd
Beijing Aeronautic Science and Technology Research Institute of COMAC
Original Assignee
Commercial Aircraft Corp of China Ltd
Beijing Aeronautic Science and Technology Research Institute of COMAC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Commercial Aircraft Corp of China Ltd and Beijing Aeronautic Science and Technology Research Institute of COMAC
Priority to CN202310331199.5A
Publication of CN116527975A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41: Structure of client; Structure of client peripherals
    • H04N 21/422: Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N 21/4223: Cameras
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44016: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to the technical field of digital simulation, and in particular to an MR fusion display method, a fusion system and a civil aircraft cockpit fusion system. In the MR fusion display method provided by the embodiments of the application, a virtual camera identical to the physical camera is constructed according to preset target parameters of the physical camera; the physical camera and the virtual camera synchronously capture the physical scene and the virtual scene; a virtual target contour region is extracted, and the corresponding physical region image is extracted from the physical scene captured by the physical camera according to the virtual target contour region; the extracted physical region image is then fused into the virtual target contour region in the virtual scene to complete virtual-real fusion. The method can rapidly extract the pixel information of a specified object in the real physical scene and display it fused with the three-dimensional rendered image of the virtual scene, thereby improving the fidelity of the mixed reality simulation. The fusion system and the civil aircraft cockpit fusion system provided by the application are both based on this MR fusion display method and therefore share the same technical effects.

Description

MR fusion display method, fusion system and civil aircraft cockpit fusion system
Technical Field
The application relates to the technical field of digital simulation, in particular to an MR fusion display method, a fusion system and a civil aircraft cockpit fusion system.
Background
Mixed reality (MR) is a new visual environment generated by combining the real world and the virtual world, in which physical and digital objects coexist and interact in real time. The technique combines the advantages of the virtual and the real: compared with purely virtual simulation, key physical scene elements can be fused into the simulation environment to increase its realism; compared with purely physical simulation, virtual scenes can replace a large number of physical objects to reduce cost and increase flexibility.
With the development of computer software and hardware, MR technology has gradually attracted attention in fields such as aviation, aerospace, high-speed rail, shipbuilding and automobiles, and application research and promotion have begun in product design evaluation, manufacturing and assembly of large complex components, operation training, maintenance support and the like. The core element of mixed reality is the fused display of the virtual world and the real world, specifically, adding pictures of the real scene into the virtual scene and breaking the boundary between the virtual and the real.
In the related art, a common approach to virtual-real fusion is to mount a camera on the display unit, close to the positions of the two eyes, to capture the real physical scene from the viewpoint of the human eyes; image recognition is then used to extract the target content from the captured picture, and finally this content is superimposed onto the rendered scene to achieve fusion between the virtual and the real.
However, this approach places few constraints on the scene object itself and may capture either dynamic or static scene objects, so its difficulty lies in accurately identifying the target object and accurately extracting the pixels it occupies in the image. The mainstream recognition and extraction methods are currently based on artificial intelligence and require training on a large number of samples for each individual target; even so, failed recognition and erroneous extraction are difficult to avoid, and the results are poor in practical applications. In addition, because plain image processing lacks depth information, it is difficult to handle occlusion between scene objects during fusion.
Disclosure of Invention
The application provides an MR fusion display method, a fusion system and a civil aircraft cockpit fusion system, which can effectively solve the above or other potential technical problems.
A first aspect of the present application provides an MR fusion display method, including:
calibrating the physical camera to obtain preset target parameters of the physical camera;
constructing a virtual camera identical to the physical camera according to the preset target parameters of the physical camera;
performing pose matching between the physical camera and the virtual camera so that the optical-centre positions and shooting directions of the two cameras remain consistent in real time during motion;
selecting a target object to be matched in the physical scene and marking it in the virtual scene;
synchronously shooting the physical scene with the physical camera and the virtual scene with the virtual camera;
extracting a virtual target contour region according to the three-dimensional information of the model captured in the virtual scene;
extracting the corresponding physical region image from the physical scene captured by the physical camera according to the virtual target contour region;
and fusing the extracted physical region image into the virtual target contour region in the virtual scene to complete virtual-real fusion.
In the MR fusion display method provided by the embodiments of the application, a virtual camera identical to the physical camera is constructed according to the preset target parameters of the physical camera; pose matching is performed between the physical camera and the virtual camera; a target object to be matched is selected in the physical scene and marked in the virtual scene; the physical camera and the virtual camera synchronously capture the physical scene and the virtual scene; a virtual target contour region is extracted according to the three-dimensional information of the model captured in the virtual scene; the corresponding physical region image is extracted from the physical scene captured by the physical camera according to the virtual target contour region; and the extracted physical region image is fused into the virtual target contour region in the virtual scene to complete virtual-real fusion. With this arrangement, the pixel information of a specified object in the real physical scene can be extracted rapidly and displayed fused with the three-dimensional rendered image of the virtual scene, improving the fidelity of the mixed reality simulation.
In an optional embodiment according to the first aspect, calibrating the physical camera to obtain its preset target parameters specifically includes:
acquiring the intrinsic parameter matrix, extrinsic parameter matrix and distortion correction parameters of the physical camera by means of a checkerboard calibration method.
In an optional embodiment according to the first aspect, performing pose matching between the physical camera and the virtual camera so that the optical-centre positions and shooting directions of the two cameras remain consistent in real time during motion specifically includes:
placing a physical cube of known size in the physical scene and tracking the physical cube with the physical camera; meanwhile, creating a virtual cube model of the same size in the virtual scene and completing coordinate matching with the physical cube;
and fixing the physical camera in front of the physical cube and taking a photograph of it while controlling the virtual camera to photograph the virtual cube model, comparing the captured virtual image with the real photograph, and adjusting the position and orientation of the virtual camera until the captured virtual image coincides with the real photograph.
In an optional embodiment according to the first aspect, after the position and orientation of the virtual camera have been adjusted so that the virtual image captured by the virtual camera coincides with the real photograph,
the physical cube is moved, the physical image and the virtual image are captured again after the movement, the captured virtual image is compared with the physical image, and the position and orientation of the virtual camera are adjusted again until the captured virtual image coincides with the physical image;
and the move-and-shoot step is repeated until the captured virtual image and the physical image remain matched in real time, completing the spatial pose matching of the physical camera and the virtual camera.
In an optional embodiment according to the first aspect, after the spatial pose matching of the physical camera and the virtual camera is completed, the currently adjusted position and orientation are recorded as a deviation so that the match is maintained at all times during subsequent real-time tracking of the camera.
In an optional embodiment according to the first aspect, the physical camera and the virtual camera synchronously capture the physical scene and the virtual scene, and extracting a virtual target contour region according to the three-dimensional information of the model captured in the virtual scene specifically includes:
tracking the three-dimensional model of the target object in the physical scene in real time with the virtual camera, and extracting the contour and pixel region of the target object in the image captured by the virtual camera using the projection matrix of the virtual camera; if the physical target object is occluded by other objects in the physical scene, the occluding object is extracted synchronously and the visible region of the target object is retained.
In an optional embodiment according to the first aspect, extracting the corresponding physical region image from the physical scene captured by the physical camera according to the virtual target contour region,
and fusing the extracted physical region image into the virtual target contour region in the virtual scene to complete virtual-real fusion, specifically includes:
retrieving, from the physical image captured by the physical camera, the visible region of the target object and the occluding object captured by the virtual camera, extracting the pixels of the visible region of the target object and of the occluding object from the physical image, feeding the extracted pixels back into the picture captured by the virtual camera, and replacing the pixels of the corresponding region in the virtual camera to complete virtual-real fusion.
The second aspect of the application further provides a fusion system based on the above MR fusion display method. The fusion system comprises a physical camera, a processing unit, a motion capture system and a display unit. The physical camera is used to capture images of the physical scene; the processing unit is used to construct the virtual camera and perform the computations; the motion capture system is used to track the position information of the physical camera and to achieve position matching between the virtual camera and the physical camera; and the display unit is used to display images of the virtual scene. The physical camera, the display unit and the motion capture system are all connected to the processing unit so as to accomplish the target tasks of the physical camera, the virtual camera and the motion capture system.
Because the fusion system provided by the embodiments of the application is based on the above MR fusion display method, it achieves the technical effect of rapidly extracting the pixel information of a specified object in the real physical scene and displaying it fused with the three-dimensional rendered image of the virtual scene, improving the fidelity of the mixed reality simulation.
The third aspect of the application further provides a civil aircraft cockpit fusion system based on the above fusion system.
Because the civil aircraft cockpit fusion system provided by the embodiments of the application is based on the above fusion system, it can rapidly extract the pixel information of a specified object in the real physical scene and display it fused with the three-dimensional rendered image of the virtual scene, improving the fidelity of the mixed reality simulation.
Additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and other objects, features and advantages of embodiments of the present application will become more readily apparent from the following detailed description, which proceeds with reference to the accompanying drawings. Embodiments of the present application will now be described, by way of example and not limitation, in the figures of the accompanying drawings, in which:
fig. 1 is a schematic flow chart of an implementation of an MR fusion display method according to an embodiment of the present application;
fig. 2 is a schematic diagram of virtual-real fusion display of the MR fusion display method according to the embodiment of the application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present application and are not to be construed as limiting the present application.
It should be understood that the following examples do not limit the order of execution of the steps in the methods claimed herein. The various steps of the methods of the present application can be performed in any order possible and in a cyclic manner without contradiction.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Mixed reality (MR) is a new visual environment generated by combining the real world and the virtual world, in which physical and digital objects coexist and interact in real time. The technique combines the advantages of the virtual and the real: compared with purely virtual simulation, key physical scene elements can be fused into the simulation environment to increase its realism; compared with purely physical simulation, virtual scenes can replace a large number of physical objects to reduce cost and increase flexibility.
With the development of computer software and hardware, MR technology has gradually attracted attention in fields such as aviation, aerospace, high-speed rail, shipbuilding and automobiles, and application research and promotion have begun in product design evaluation, manufacturing and assembly of large complex components, operation training, maintenance support and the like. The core element of mixed reality is the fused display of the virtual world and the real world, specifically, adding pictures of the real scene into the virtual scene and breaking the boundary between the virtual and the real.
In the related art, a common approach to virtual-real fusion is to mount a camera on the display unit, close to the positions of the two eyes, to capture the real physical scene from the viewpoint of the human eyes; image recognition is then used to extract the target content from the captured picture, and finally this content is superimposed onto the rendered scene to achieve fusion between the virtual and the real.
However, this approach places few constraints on the scene object itself and may capture either dynamic or static scene objects, so its difficulty lies in accurately identifying the target object and accurately extracting the pixels it occupies in the image. The mainstream recognition and extraction methods are currently based on artificial intelligence and require training on a large number of samples for each individual target; even so, failed recognition and erroneous extraction are difficult to avoid, and the results are poor in practical applications. In addition, because plain image processing lacks depth information, it is difficult to handle occlusion between scene objects during fusion.
In view of this, the MR fusion display method provided in the embodiments of the present application constructs a virtual camera identical to the physical camera according to the preset target parameters of the physical camera; performs pose matching between the physical camera and the virtual camera; selects a target object to be matched in the physical scene and marks it in the virtual scene; synchronously captures the physical scene and the virtual scene with the physical camera and the virtual camera; extracts a virtual target contour region according to the three-dimensional information of the model captured in the virtual scene; extracts the corresponding physical region image from the physical scene captured by the physical camera according to the virtual target contour region; and fuses the extracted physical region image into the virtual target contour region in the virtual scene to complete virtual-real fusion. With this arrangement, the pixel information of a specified object in the real physical scene can be extracted rapidly and displayed fused with the three-dimensional rendered image of the virtual scene, improving the fidelity of the mixed reality simulation.
Referring to fig. 1 and fig. 2, an MR fusion display method provided in an embodiment of the present application includes:
calibrating the physical camera to obtain preset target parameters of the physical camera;
constructing a virtual camera identical to the physical camera according to the preset target parameters of the physical camera;
performing pose matching between the physical camera and the virtual camera so that the optical-centre positions and shooting directions of the two cameras remain consistent in real time during motion;
selecting a target object to be matched in the physical scene and marking it in the virtual scene;
synchronously shooting the physical scene with the physical camera and the virtual scene with the virtual camera;
extracting a virtual target contour region according to the three-dimensional information of the model captured in the virtual scene;
extracting the corresponding physical region image from the physical scene captured by the physical camera according to the virtual target contour region;
and fusing the extracted physical region image into the virtual target contour region in the virtual scene to complete virtual-real fusion.
It should be noted that, in the MR fusion display method provided in the embodiments of the present application, a virtual camera identical to the physical camera is constructed according to the preset target parameters of the physical camera; pose matching is performed between the physical camera and the virtual camera; a target object to be matched is selected in the physical scene and marked in the virtual scene; the physical camera and the virtual camera synchronously capture the physical scene and the virtual scene; a virtual target contour region is extracted according to the three-dimensional information of the model captured in the virtual scene; the corresponding physical region image is extracted from the physical scene captured by the physical camera according to the virtual target contour region; and the extracted physical region image is fused into the virtual target contour region in the virtual scene to complete virtual-real fusion. With this arrangement, the pixel information of a specified object in the real physical scene can be extracted rapidly and displayed fused with the three-dimensional rendered image of the virtual scene, improving the fidelity of the mixed reality simulation.
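By way of illustration of the step of constructing a virtual camera identical to the physical camera (this sketch is not part of the application), the Python fragment below converts a calibrated intrinsic matrix of the kind obtained in the calibration embodiment that follows into generic rendering-camera parameters; the dictionary keys and the 640x480 example values are assumptions rather than any particular engine's API.

```python
# Sketch: derive rendering-camera parameters from a calibrated intrinsic matrix K.
import math
import numpy as np

def virtual_camera_params(K: np.ndarray, width: int, height: int) -> dict:
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return {
        # vertical field of view implied by the focal length in pixels
        "vertical_fov_deg": math.degrees(2.0 * math.atan(height / (2.0 * fy))),
        "aspect_ratio": (width * fy) / (height * fx),
        # principal-point offset, normalised so a renderer can shift its frustum
        "lens_shift_x": (cx - width / 2.0) / width,
        "lens_shift_y": (cy - height / 2.0) / height,
    }

# Example with illustrative values for a 640x480 calibration (not measured data).
K = np.array([[800.0, 0.0, 322.5], [0.0, 800.0, 241.0], [0.0, 0.0, 1.0]])
print(virtual_camera_params(K, 640, 480))
```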
In an optional exemplary embodiment, calibrating the physical camera to obtain its preset target parameters specifically includes:
acquiring the intrinsic parameter matrix, extrinsic parameter matrix and distortion correction parameters of the physical camera by means of a checkerboard calibration method.
It should be noted that, in this embodiment, calibrating the physical camera to obtain its preset target parameters specifically means acquiring the intrinsic parameter matrix, extrinsic parameter matrix and distortion correction parameters of the physical camera with a checkerboard calibration method. It will be appreciated that the method is not limited to checkerboard calibration; other calibration methods may be used according to the user's actual needs. It will further be appreciated that the preset target parameters are not limited to the intrinsic parameter matrix, extrinsic parameter matrix and distortion correction parameters of the physical camera, and may be any other parameters required by the user.
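As an illustrative sketch of the checkerboard calibration step, assuming OpenCV is used (the application does not name a library) and that photographs of a printed checkerboard taken by the physical camera are available; the pattern size, square size and image path are assumptions.

```python
# Checkerboard calibration sketch: recover K (intrinsics), distortion coefficients
# and per-view extrinsics from checkerboard photographs.
import glob
import cv2
import numpy as np

PATTERN = (9, 6)        # inner corners of the printed checkerboard (assumed)
SQUARE_SIZE = 0.025     # checkerboard square edge length in metres (assumed)

# 3D coordinates of the checkerboard corners in the board's own frame
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):   # photographs from the physical camera
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# K: intrinsic matrix, dist: distortion correction parameters,
# rvecs/tvecs: extrinsics of the board relative to the camera for each view
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
```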
In an optional exemplary embodiment, performing pose matching between the physical camera and the virtual camera so that the optical-centre positions and shooting directions of the two cameras remain consistent in real time during motion specifically includes:
placing a physical cube of known size in the physical scene and tracking the physical cube with the physical camera; meanwhile, creating a virtual cube model of the same size in the virtual scene and completing coordinate matching with the physical cube;
and fixing the physical camera in front of the physical cube and taking a photograph of it while controlling the virtual camera to photograph the virtual cube model, comparing the captured virtual image with the real photograph, and adjusting the position and orientation of the virtual camera until the captured virtual image coincides with the real photograph.
It should be noted that, in this embodiment, to achieve pose matching of the physical camera and the virtual camera, a physical cube of known size is first placed in the physical scene and tracked with the physical camera; at the same time, a virtual cube model of the same size is created in the virtual scene and coordinate matching with the physical cube is completed, i.e. a model identical to the physical cube is created in the virtual scene. The physical camera is then fixed in front of the physical cube to photograph it while the virtual camera is controlled to photograph the virtual cube model; the captured virtual image is compared with the real photograph, and the position and orientation of the virtual camera are adjusted until the two coincide, thereby achieving pose matching of the physical camera and the virtual camera.
It can be understood that, to improve the pose matching accuracy of the physical camera and the virtual camera, the physical cube and the virtual cube may be moved several times as required by the user, with photographs taken after each move for comparison and adjustment, which effectively improves the pose matching accuracy.
In particular, in an optional exemplary embodiment, after the position and orientation of the virtual camera have been adjusted so that the virtual image captured by the virtual camera coincides with the real photograph,
the physical cube is moved, the physical image and the virtual image are captured again after the movement, the captured virtual image is compared with the physical image, and the position and orientation of the virtual camera are adjusted again until the captured virtual image coincides with the physical image;
and the move-and-shoot step is repeated until the captured virtual image and the physical image remain matched in real time, completing the spatial pose matching of the physical camera and the virtual camera.
Specifically, to ensure the pose matching accuracy of the physical camera and the virtual camera, after the position and orientation of the virtual camera have been adjusted so that the captured virtual image coincides with the real photograph, the physical cube is moved, the physical image and the virtual image are captured again, the captured virtual image is compared with the physical image, and the position and orientation of the virtual camera are adjusted again until the two coincide; this move-and-shoot step is repeated until the captured virtual image and the physical image remain matched in real time, completing the spatial pose matching of the physical camera and the virtual camera. In other words, the physical cube and the virtual cube are moved several times so that the physical images captured by the physical camera and the virtual images captured by the virtual camera stay matched across the moves, which improves the accuracy of the spatial pose matching.
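As a hedged illustration of the comparison step only (the application describes a visual comparison; the silhouette-overlap metric below is an added assumption, not the patented procedure), one could segment the cube in the real and virtual shots and measure their overlap to judge when the two images coincide; the colour-range bounds and file names are placeholders.

```python
# Sketch: quantify how well the real and virtual cube photographs coincide.
import cv2
import numpy as np

def cube_silhouette(img_bgr, lower, upper):
    """Binary mask of the cube, assuming it can be isolated by an HSV colour range."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, np.array(lower), np.array(upper)) > 0

def silhouette_iou(real_img, virtual_img, lower, upper):
    """Intersection-over-union of the cube silhouettes in the two images."""
    real_mask = cube_silhouette(real_img, lower, upper)
    virt_mask = cube_silhouette(virtual_img, lower, upper)
    inter = np.logical_and(real_mask, virt_mask).sum()
    union = np.logical_or(real_mask, virt_mask).sum()
    return inter / union if union else 0.0

# Usage: after each adjustment of the virtual camera, re-render and re-check.
# iou = silhouette_iou(cv2.imread("real.png"), cv2.imread("virtual.png"),
#                      (20, 80, 80), (35, 255, 255))   # colour bounds are assumptions
# The poses can be considered matched when iou approaches 1 for every cube position.
```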
In an optional exemplary embodiment, after the spatial pose matching of the physical camera and the virtual camera is completed, the currently adjusted position and orientation are recorded as a deviation so that the match is maintained at all times during subsequent real-time tracking of the camera.
In this embodiment, after the spatial pose matching of the physical camera and the virtual camera is completed, the currently adjusted position and orientation are recorded as a deviation so that the match is maintained at all times while the camera is subsequently tracked in real time; that is, the adjusted pose matching data can be used directly in later sessions and simply recalled the next time the system runs, without repeated debugging.
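One way to realise this recorded deviation is as a fixed rigid-transform offset composed with every tracked pose. The sketch below is a minimal illustration under that assumption: poses are treated as 4x4 matrices, and the numeric values are placeholders rather than real calibration data.

```python
# Sketch: store the adjusted pose as a fixed offset and re-apply it every frame.
import numpy as np

def rigid(R, t):
    """Build a 4x4 rigid transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Poses captured once, at the moment the cube-based matching succeeded
# (identity rotations and these translations are placeholders).
tracked_at_match = rigid(np.eye(3), np.array([0.10, 0.00, 1.50]))   # from motion capture
virtual_at_match = rigid(np.eye(3), np.array([0.12, 0.01, 1.48]))   # manually adjusted

# The recorded "deviation": a constant transform from tracked pose to virtual pose.
OFFSET = np.linalg.inv(tracked_at_match) @ virtual_at_match

def virtual_camera_pose(tracked_pose: np.ndarray) -> np.ndarray:
    """Re-apply the stored deviation each frame so the cameras stay matched."""
    return tracked_pose @ OFFSET
```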
In an optional exemplary embodiment, the physical camera and the virtual camera synchronously capture the physical scene and the virtual scene, and extracting a virtual target contour region according to the three-dimensional information of the model captured in the virtual scene specifically includes:
tracking the three-dimensional model of the target object in the physical scene in real time with the virtual camera, and extracting the contour and pixel region of the target object in the image captured by the virtual camera using the projection matrix of the virtual camera; if the physical target object is occluded by other objects in the physical scene, the occluding object is extracted synchronously and the visible region of the target object is retained.
It should be noted that, in this embodiment, the virtual camera tracks the three-dimensional model of the target object in the physical scene in real time, and the projection matrix of the virtual camera is used to extract the contour and pixel region of the target object in the image captured by the virtual camera; if the physical target object is occluded by other objects in the physical scene, the occluding object can be extracted synchronously and the visible region of the target object retained. In this way the physical scene is reconstructed synchronously in the virtual scene, so that the information presented in the virtual scene is more realistic and stays synchronised with the physical scene.
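The following sketch illustrates, under stated assumptions, how the target's pixel region can be obtained from the virtual camera's projection and a known triangle mesh: the model triangles are projected with the intrinsics and pose and rasterised into a binary mask. It is an illustration only; a real render engine would obtain the same region (and the visible area under occlusion) from its depth and stencil buffers. The mesh, intrinsic values and pose are assumptions.

```python
# Sketch: rasterise the projected model triangles into a binary target mask.
import cv2
import numpy as np

def target_mask(vertices, faces, K, R, t, image_size):
    """Project a mesh with intrinsics K and pose (R, t) and fill its triangles (no z-buffer)."""
    h, w = image_size
    cam_pts = (R @ vertices.T + t.reshape(3, 1)).T            # model frame -> camera frame
    proj = (K @ cam_pts.T).T
    px = (proj[:, :2] / proj[:, 2:3]).astype(np.int32)        # perspective divide
    mask = np.zeros((h, w), np.uint8)
    for f in faces:                                            # fill each triangle
        cv2.fillConvexPoly(mask, px[f], 255)
    return mask

# Example: a unit cube placed 2 m in front of the camera (all values illustrative).
verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                 np.float64) - 0.5
faces = np.array([[0, 1, 3], [0, 3, 2], [4, 5, 7], [4, 7, 6], [0, 1, 5], [0, 5, 4],
                  [2, 3, 7], [2, 7, 6], [0, 2, 6], [0, 6, 4], [1, 3, 7], [1, 7, 5]])
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], np.float64)
mask = target_mask(verts, faces, K, np.eye(3), np.array([0.0, 0.0, 2.0]), (480, 640))
```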
In an optional exemplary embodiment, extracting the corresponding physical region image from the physical scene captured by the physical camera according to the virtual target contour region,
and fusing the extracted physical region image into the virtual target contour region in the virtual scene to complete virtual-real fusion, specifically includes:
retrieving, from the physical image captured by the physical camera, the visible region of the target object and the occluding object captured by the virtual camera, extracting the pixels of the visible region of the target object and of the occluding object from the physical image, feeding the extracted pixels back into the picture captured by the virtual camera, and replacing the pixels of the corresponding region in the virtual camera to complete virtual-real fusion.
In particular, in this embodiment, the visible region of the target object and the occluding object captured by the virtual camera are retrieved from the physical image captured by the physical camera, the pixels of the visible region of the target object and of the occluding object are extracted from the physical image, the extracted pixels are fed back into the picture captured by the virtual camera, and the pixels of the corresponding region of the virtual camera picture are replaced to complete virtual-real fusion. In this way, the pixel information of the target object in the real physical scene can be extracted rapidly and displayed fused with the three-dimensional rendered image of the virtual scene, improving the fidelity of the mixed reality simulation.
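A minimal sketch of this pixel-replacement step, assuming the virtual render, the physical frame and the target mask are already aligned (same resolution and matched camera poses, as established above); the file names in the usage comment are placeholders.

```python
# Sketch: copy the physical camera's pixels inside the target mask into the virtual render.
import cv2
import numpy as np

def fuse(virtual_frame: np.ndarray, physical_frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Replace masked pixels of the virtual render with the physical camera image."""
    fused = virtual_frame.copy()
    fused[mask > 0] = physical_frame[mask > 0]
    return fused

# Usage (paths are placeholders):
# virtual_frame = cv2.imread("virtual_render.png")
# physical_frame = cv2.imread("physical_shot.png")
# mask = cv2.imread("target_mask.png", cv2.IMREAD_GRAYSCALE)   # e.g. from the projection step
# cv2.imwrite("fused.png", fuse(virtual_frame, physical_frame, mask))
```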
The application also provides a fusion system based on the above MR fusion display method, comprising a physical camera, a processing unit, a motion capture system and a display unit. The physical camera is used to capture images of the physical scene; the processing unit is used to construct the virtual camera and perform the computations; the motion capture system is used to track the position information of the physical camera and to achieve position matching between the virtual camera and the physical camera; and the display unit is used to display images of the virtual scene. The physical camera, the display unit and the motion capture system are all connected to the processing unit so as to accomplish the target tasks of the physical camera, the virtual camera and the motion capture system.
It should be noted that the processing unit is used to construct the virtual camera and to perform the computations, i.e. the virtual camera and the virtual images it captures are built by the processing unit. The display unit displays images of the virtual scene, presenting the virtual scene constructed by the processing unit.
In a specific embodiment, the processing unit may be a computer; during operation of the fusion system, the related computations, image processing, picture generation and fusion, the display on the display unit and the calculations of the motion capture system are all handled by the computer.
For example, the display unit may be a VR head-mounted display or a monitor.
Because the fusion system provided by the embodiments of the application is based on the above MR fusion display method, it achieves the technical effect of rapidly extracting the pixel information of a specified object in the real physical scene and displaying it fused with the three-dimensional rendered image of the virtual scene, improving the fidelity of the mixed reality simulation.
The application also provides a civil aircraft cockpit fusion system based on the above fusion system; that is to say, the fusion system provided by the application can be applied to a civil aircraft cockpit.
It can be appreciated that the applicable scenario of the fusion system provided by the application is not limited to the civil aircraft cockpit; it may also be used in other suitable scenarios.
Because the civil aircraft cockpit fusion system provided by the embodiments of the application is based on the above fusion system, it can rapidly extract the pixel information of a specified object in the real physical scene and display it fused with the three-dimensional rendered image of the virtual scene, improving the fidelity of the mixed reality simulation.
For example, in the civil aircraft cockpit fusion system, a virtual simulation environment is first constructed according to the simulation requirements, and some key components (such as the cockpit instrument panel and control stick) are selected to build a partial physical simulation platform. After construction, the two are matched to a common coordinate system using a positioning system. In addition, to capture the real scene, a pair of physical cameras consistent with the eye positions is installed at the eye positions of the display unit; the physical cameras capture the physical scene, and the processing unit constructs the virtual camera and performs the computations. Finally, the projection region of the target object is extracted from the real-scene image captured by the physical cameras, and the pixels of this region are fused into the three-dimensional rendered picture constructed by the processing unit and displayed.
Specifically, before simulation, the pair of physical cameras on the display unit must be calibrated to obtain the camera parameters. After calibration, a digital copy of each camera is generated in the virtual scene according to the acquired parameters; since coordinate-system registration has already been completed in the virtual scene, position matching of the physical and virtual cameras can be achieved by tracking the degree-of-freedom information of the physical camera with the motion capture system. Because the coordinate systems are registered, the virtual and real scenes correspond to one another. In theory, therefore, when the physical camera photographs an object A in the scene, the virtual camera simultaneously photographs the three-dimensional model A1 of that object in the virtual scene; and since the virtual and physical cameras agree in parameters and spatial pose, A and A1 project to images with the same contour in the physical and virtual cameras. In the virtual scene, the projection matrix of the camera and the three-dimensional shape of A1 are known, so the projection region of A1 in the virtual camera can easily and accurately be determined; by recording this region and extracting the image content of the same region from the photograph taken by the physical camera, the imaging result of A in the physical camera is obtained. Finally, this extracted result is overlaid onto the three-dimensional rendering result, achieving the fused display of virtual and real images.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solution of the present application and not to limit it. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents, and such modifications and substitutions do not cause the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.
In addition, the specific features described in the above embodiments may be combined in any suitable manner without contradiction. In order to avoid unnecessary repetition, the various possible combinations are not described further.

Claims (9)

1. An MR fusion display method, comprising:
calibrating a physical camera to obtain preset target parameters of the physical camera;
constructing a virtual camera identical to the physical camera according to the preset target parameters of the physical camera;
performing pose matching between the physical camera and the virtual camera so that the optical-centre positions and shooting directions of the two cameras remain consistent in real time during motion;
selecting a target object to be matched in the physical scene and marking it in the virtual scene;
synchronously shooting the physical scene with the physical camera and the virtual scene with the virtual camera;
extracting a virtual target contour region according to the three-dimensional information of the model captured in the virtual scene;
extracting the corresponding physical region image from the physical scene captured by the physical camera according to the virtual target contour region;
and fusing the extracted physical region image into the virtual target contour region in the virtual scene to complete virtual-real fusion.
2. The MR fusion display method according to claim 1, wherein calibrating the physical camera to obtain the preset target parameters of the physical camera specifically comprises:
acquiring the intrinsic parameter matrix, extrinsic parameter matrix and distortion correction parameters of the physical camera by means of a checkerboard calibration method.
3. The MR fusion display method according to claim 2, wherein performing pose matching between the physical camera and the virtual camera so that the optical-centre positions and shooting directions of the two cameras remain consistent in real time during motion specifically comprises:
placing a physical cube of known size in the physical scene and tracking the physical cube with the physical camera; meanwhile, creating a virtual cube model of the same size in the virtual scene and completing coordinate matching with the physical cube;
and fixing the physical camera in front of the physical cube and taking a photograph of it while controlling the virtual camera to photograph the virtual cube model, comparing the captured virtual image with the real photograph, and adjusting the position and orientation of the virtual camera until the captured virtual image coincides with the real photograph.
4. The MR fusion display method according to claim 3, wherein, after the position and orientation of the virtual camera have been adjusted so that the virtual image captured by the virtual camera coincides with the real photograph,
the physical cube is moved, the physical image and the virtual image are captured again after the movement, the captured virtual image is compared with the physical image, and the position and orientation of the virtual camera are adjusted again until the captured virtual image coincides with the physical image;
and the move-and-shoot step is repeated until the captured virtual image and the physical image remain matched in real time, completing the spatial pose matching of the physical camera and the virtual camera.
5. The MR fusion display method according to claim 4, wherein, after the spatial pose matching of the physical camera and the virtual camera is completed, the currently adjusted position and orientation are recorded as a deviation so that the match is maintained at all times during subsequent real-time tracking of the camera.
6. The MR fusion display method according to claim 5, wherein the physical camera and the virtual camera synchronously capture the physical scene and the virtual scene, and extracting a virtual target contour region according to the three-dimensional information of the model captured in the virtual scene specifically comprises:
tracking the three-dimensional model of the target object in the physical scene in real time with the virtual camera, and extracting the contour and pixel region of the target object in the image captured by the virtual camera using the projection matrix of the virtual camera; and, if the physical target object is occluded by other objects in the physical scene, synchronously extracting the occluding object and retaining the visible region of the target object.
7. The MR fusion display method according to claim 6, wherein extracting the corresponding physical region image from the physical scene captured by the physical camera according to the virtual target contour region,
and fusing the extracted physical region image into the virtual target contour region in the virtual scene to complete virtual-real fusion, specifically comprises:
retrieving, from the physical image captured by the physical camera, the visible region of the target object and the occluding object captured by the virtual camera, extracting the pixels of the visible region of the target object and of the occluding object from the physical image, feeding the extracted pixels back into the picture captured by the virtual camera, and replacing the pixels of the corresponding region in the virtual camera to complete virtual-real fusion.
8. A fusion system, characterised in that it is based on the MR fusion display method according to any one of claims 1 to 7;
the fusion system comprises a physical camera, a processing unit, a motion capture system and a display unit; the physical camera is used to capture images of the physical scene, the processing unit is used to construct the virtual camera and perform the computations, and the motion capture system is used to track the position information of the physical camera and to achieve position matching between the virtual camera and the physical camera; the display unit is used to display images of the virtual scene, and the physical camera, the display unit and the motion capture system are all connected to the processing unit so as to accomplish the target tasks of the physical camera, the virtual camera and the motion capture system.
9. A civil aircraft cockpit fusion system, characterized in that it is based on the fusion system of claim 8.
CN202310331199.5A 2023-03-30 2023-03-30 MR fusion display method, fusion system and civil aircraft cockpit fusion system Pending CN116527975A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310331199.5A CN116527975A (en) 2023-03-30 2023-03-30 MR fusion display method, fusion system and civil aircraft cockpit fusion system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310331199.5A CN116527975A (en) 2023-03-30 2023-03-30 MR fusion display method, fusion system and civil aircraft cockpit fusion system

Publications (1)

Publication Number Publication Date
CN116527975A true CN116527975A (en) 2023-08-01

Family

ID=87394944

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310331199.5A Pending CN116527975A (en) 2023-03-30 2023-03-30 MR fusion display method, fusion system and civil aircraft cockpit fusion system

Country Status (1)

Country Link
CN (1) CN116527975A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117078975A (en) * 2023-10-10 2023-11-17 四川易利数字城市科技有限公司 AR space-time scene pattern matching method based on evolutionary algorithm
CN117078975B (en) * 2023-10-10 2024-01-02 四川易利数字城市科技有限公司 AR space-time scene pattern matching method based on evolutionary algorithm
CN118521495A (en) * 2024-07-22 2024-08-20 杭州慧建智联科技有限公司 Training application virtual-real fusion method based on MR equipment

Similar Documents

Publication Publication Date Title
CN116527975A (en) MR fusion display method, fusion system and civil aircraft cockpit fusion system
KR102125293B1 (en) Generation device, generation method, and storage medium
US20190164346A1 (en) Method and apparatus for providing realistic 2d/3d ar experience service based on video image
WO2021174389A1 (en) Video processing method and apparatus
CN106066701B (en) A kind of AR and VR data processing equipment and method
JP6921686B2 (en) Generator, generation method, and program
CN109246414B (en) Projection type augmented reality image generation method and system
Saito et al. Appearance-based virtual view generation from multicamera videos captured in the 3-d room
CN109448105B (en) Three-dimensional human body skeleton generation method and system based on multi-depth image sensor
CN111080776B (en) Human body action three-dimensional data acquisition and reproduction processing method and system
CN106296789B (en) It is a kind of to be virtually implanted the method and terminal that object shuttles in outdoor scene
CN108280873A (en) Model space position capture and hot spot automatically generate processing system
US11847735B2 (en) Information processing apparatus, information processing method, and recording medium
JP2023511670A (en) A method and system for augmenting depth data from a depth sensor, such as by using data from a multi-view camera system
KR20180123302A (en) Method and Apparatus for Visualizing a Ball Trajectory
JP2023172882A (en) Three-dimensional representation method and representation apparatus
CN113178017A (en) AR data display method and device, electronic equipment and storage medium
CN114913308A (en) Camera tracking method, device, equipment and storage medium
US20240054739A1 (en) Information processing apparatus, information processing method, and storage medium
JP2021157237A (en) Free viewpoint video generation method, device and program
CN111161143A (en) Optical positioning technology-assisted operation visual field panoramic stitching method
CN118451468A (en) Method for registering face mark
Mori et al. An overview of augmented visualization: observing the real world as desired
KR20230017745A (en) Image processing apparatus, image processing method, and storage medium
CN109360270B (en) 3D face pose alignment method and device based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination