WO2019140945A1 - A mixed reality method applied to a flight simulator - Google Patents

A mixed reality method applied to a flight simulator

Info

Publication number
WO2019140945A1
Authority
WO
WIPO (PCT)
Prior art keywords
cabin
outside
range
mixed reality
flight simulator
Prior art date
Application number
PCT/CN2018/107555
Other languages
English (en)
French (fr)
Inventor
来国军
王晓卫
纪双星
杨而蒙
王召峰
刘宝珠
Original Assignee
中国人民解放军陆军航空兵学院
Priority date
Filing date
Publication date
Application filed by 中国人民解放军陆军航空兵学院
Publication of WO2019140945A1 publication Critical patent/WO2019140945A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00Simulators for teaching or training purposes
    • G09B9/02Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B9/08Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of aircraft, e.g. Link trainer
    • G09B9/30Simulation of view from aircraft
    • G09B9/301Simulation of view from aircraft by computer-processed or -generated image
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00Simulators for teaching or training purposes
    • G09B9/02Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B9/08Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of aircraft, e.g. Link trainer
    • G09B9/30Simulation of view from aircraft
    • G09B9/307Simulation of view from aircraft by helmet-mounted projector or display

Definitions

  • The present invention relates to the field of virtual reality, and more particularly to a mixed reality method applied to a flight simulator.
  • As mixed reality technology continues to develop, it is gradually being applied in various simulation fields.
  • Current flight simulators based on mixed reality technology have several shortcomings: when the simulator is in different attitudes, moving or vibrating, the images captured by the cameras or sensing devices shake and positioning is inaccurate; human body parts are materialized in the virtual scene through optical or inertial sensing devices, but the various items of equipment cannot be materialized; because human skin and clothing in the virtual scene lack realism, the user cannot feel truly immersed; and the field of view is limited by the head-mounted display or the camera's shooting angle.
  • To overcome these problems, the present invention provides a mixed reality method for use in a flight simulator.
  • Step S1: adaptively spatially map all devices and personnel, including the user, within the range depth of the simulated cabin using two cameras arranged at the position of the human eyes; perform three-dimensional reconstruction within the binocular visible range, fine three-dimensional reconstruction within the central range of binocular focus, and coarse three-dimensional reconstruction outside the binocular focus range.
  • Step S2: collect real-time information of the feature points outside the simulated cabin using cameras fixed at set positions and angles around the simulation cabin, together with the relative spatial position information of the human body obtained from the VR helmet and the locator; compensate and correct the virtual scene and match the depth information in the virtual scene with the real distance information, so as to obtain the relative positional relationship between the virtual scene and the VR helmet.
  • Step S3: render the virtual picture in layers by tracking the pupils and head movement, rendering outward from the pupil position as the centre point, with the rendering resolution decreasing from the inside out.
  • In step S1, when all devices and personnel within the range depth of the simulated cabin are adaptively spatially mapped by the two cameras arranged at the eye position: the spatial region captured by the two cameras is segmented, using in turn three voxel sizes from small to large, so that the same spatial region is divided into three data-independent voxel sets.
  • When three-dimensional reconstruction is performed within the binocular visible range, fine reconstruction within the centre of binocular focus and coarse reconstruction outside the focus range: the state of each voxel is determined and the set of non-empty voxels is obtained; the binocular focus range is tracked in real time; the voxel set of the smallest size is selected for the region within the centre of binocular focus, a voxel set of a larger size is selected for the part outside the focus centre but within the visible region, and the voxel set of the largest size is selected for the rest of the binocular visible range, so that the reconstruction effect goes from fine to coarse from the inside of the focus range outward; pixelated texture mapping is then performed to obtain the three-dimensional texture effect of the virtual scene.
  • In step S2, the cameras fixed at set positions and angles around the simulation cabin are calibrated by the least-squares method, and the initial positions of the feature points on the static cabin surface are recorded; the initial pose of the simulated cabin is obtained through the stereo imaging principle and coordinate transformation, and an initial coordinate system is established.
  • While the cabin moves, the dynamically changing feature points are captured in real time by the fixed cameras; the feature points are extracted with the SURF algorithm and matched with the KLT algorithm.
  • The world coordinates of the feature points during cabin motion are obtained from the fixed cameras, and their change relative to the initial coordinates at that moment is calculated to obtain the positional relationship of the simulated cabin; the relative spatial position information of the human body is obtained from the VR helmet and the locator, the virtual picture is dynamically corrected by an inverse transformation according to the positions of the cabin and the human body in the initial coordinate system, and the sense of space and distance of the internal virtual scene is matched with the external real scene.
  • The feature points are circular black-and-white markers with a white inner ring and a black outer ring. When they are extracted with the SURF algorithm, the Gaussian difference-of-Gaussians (DoG) pyramid of the camera image is built and the first layer of each group of images is extracted; adaptive threshold segmentation with the maximum between-class variance method yields a binary map for each group, which is used as a constraint condition so that SURF detection is performed in the inner edge region of the feature points.
  • When the KLT algorithm is used for feature-point matching, the optical flow is computed at the top level of the DoG pyramid, and the resulting motion estimate is used as the starting point of the next level's computation, iterating down to the lowest level.
  • Different regions and different levels of the scene are rendered separately in the Unigine engine, centred on the gaze position of the pupils.
  • Piecewise-linear grey-level image enhancement is used to improve the recognizability of the image, median filtering then reduces noise, the feature region is quickly segmented by an iterative method, candidate pupil regions are screened by their area and roundness characteristics, and finally the pupil centre is obtained by least-squares ellipse fitting; the binocular focus range is obtained from the pupil centre.
  • The mixed reality method applied to a flight simulator provided by the invention can perform real-time three-dimensional reconstruction of the space, enhances the realism of human skin and clothing in the virtual scene, improves the sense of space and distance, and gives the user genuine immersion.
  • The simulation cabin of the invention synchronizes the real scene in the cockpit with the virtual scene in front of the eyes through positioning, tracking and computer-vision algorithms, so that the trainee perceives the real cockpit equipment directly through vision, touch and hearing; this better solves the consistency problem between virtual and real space, enhances the practicability of the flight-training simulation system, reduces the difficulty of upgrading the immersion of an existing flight simulator, and improves the trainee's flight experience.
  • Eye tracking and adaptive focusing both preserve the realism and immersion of the scene as seen by the eye and enable local rendering that matches the imaging characteristics of the human eye; the eye does not need to adapt actively to the screen, eye fatigue from overuse is avoided, hardware occupancy is reduced, and rendering and processing speed are improved.
  • FIG. 1 is a schematic diagram of the cameras of the VR helmet of the mixed reality method applied to a flight simulator of the present invention;
  • FIG. 2 is a schematic diagram of the multi-precision, three-voxel three-dimensional reconstruction of the method;
  • FIG. 3 is a schematic diagram of the feature points of the method;
  • FIG. 4 is a schematic diagram of the mixed reality simulation cabin of the method;
  • FIG. 5 is a schematic flow chart of the pupil-position calculation of the method.
  • Step S1: adaptively spatially map all devices and personnel, including the user, within the range depth of the simulated cabin using two cameras arranged at the position of the human eyes (shown in FIG. 1, i.e. the two cameras on the VR helmet); perform three-dimensional reconstruction within the binocular visible range, fine three-dimensional reconstruction within the centre of binocular focus, and coarse three-dimensional reconstruction outside the binocular focus range;
  • Step S2: collect real-time information of the feature points outside the simulated cabin using cameras fixed at set positions and angles around the simulation cabin, together with the relative spatial position information of the human body obtained from the VR helmet and the locator; compensate and correct the virtual scene and match the depth information in the virtual scene with the real distance information, so as to obtain the relative positional relationship between the virtual scene and the VR helmet;
  • Step S3: render the virtual picture in layers by tracking the pupils and head movement, rendering outward from the pupil position as the centre point, with the rendering resolution decreasing from the inside out;
  • When the objects seen by the two eyes are reconstructed in three dimensions, the process can be regarded as building with blocks: the reconstruction is equivalent to assembling an object from blocks, and the assembly is divided into fine and coarse; the part the eyes focus on uses a fine process with smaller blocks, and the unfocused part uses a coarse process with larger blocks, which saves blocks and so accelerates the reconstruction.
  • When the cameras are used for reconstruction, the process includes texture mapping, so the skin and clothing textures of the user's own body are displayed on the reconstructed 3D model.
  • The spatial regions captured by the two cameras are segmented, using in turn three voxel sizes from small to large, so that the same spatial region is divided into three data-independent voxel sets.
  • Three-dimensional reconstruction is performed within the binocular visible range, with fine reconstruction in the centre of binocular focus and coarse reconstruction outside it: through the two cameras on the VR helmet, combined with a visual-hull reconstruction method, the state of each voxel is determined and the non-empty voxel set is obtained; the binocular focus range is tracked and a different voxel set is selected for each visual region: the smallest voxel size for the centre of binocular focus, a larger voxel size outside the focus centre but within the visible region, and the largest voxel size for the rest of the visible range, so that the reconstruction goes from fine to coarse from the inside of the focus range outward; pixelated texture mapping then yields the three-dimensional texture of the virtual scene, establishing a fine-to-coarse three-dimensional model and giving the trainee a sense of the presence of real objects in the virtual world.
  • The voxels obtained by spatial segmentation have independent data and can be stored in a table in advance, so that during later reconstruction the data can be read directly by table lookup, which increases the speed of 3D reconstruction.
  • During voxel-based reconstruction, the gaze position determined from the eye-tracking data drives multi-precision reconstruction: the gazed region has the highest precision and the precision decreases outward, which matches the way the human eye views real objects; this both increases reconstruction speed and reduces GPU occupancy.
  • In step S2, in order to accurately track the body parts and cabin equipment that the trainee can observe, and to fuse the real scene seen by the eye with the virtual environment accurately whether the body is standing or seated or the six-degree-of-freedom motion platform is in motion, the positions of the two eye-position cameras (i.e. of the VR helmet) must be located accurately: two cameras at fixed positions and fixed angles around the simulation cabin collect the feature-point information in real time, the feature points being fixed on the outside of the cabin, and the position and attitude of the cabin are obtained by binocular stereo vision; the virtual scene is then compensated using the relative spatial position information of the human body obtained from the VR helmet and the locator.
  • The cameras fixed at set positions and angles around the simulation cabin are calibrated with the least-squares method through the OpenCV library; the parameters include intrinsic and extrinsic parameters; the initial positions of the feature points on the static cabin surface are recorded, the initial pose of the simulated cabin is obtained through the stereo imaging principle and coordinate transformation, and an initial coordinate system is established.
  • During cabin motion, the dynamically changing feature points are captured in real time by the fixed cameras, extracted with the SURF algorithm and matched with the KLT algorithm, so that the feature points can be tracked and matched when fast cabin motion produces displacement changes.
  • The world coordinates of the feature points during cabin motion are obtained from the fixed cameras, and their change relative to the initial coordinates at that moment is calculated to obtain the positional relationship of the cabin; the relative spatial position information of the human body is obtained from the VR helmet and the locator, the virtual picture is dynamically corrected by an inverse transformation according to the positions of the cabin and the human body in the initial coordinate system, and the sense of space and distance of the internal virtual scene is matched with the external real scene.
  • The feature points, shown schematically in FIG. 3, are circular black-and-white markers with a white inner ring and a black outer ring; each marker consists of three circles arranged as an isosceles triangle, which produces a strong edge structure.
  • The maximum between-class variance method performs adaptive threshold segmentation to obtain a binary map, which is used as a constraint condition for feature-point detection so that SURF detection runs in the inner edge region of the feature points; that is, the threshold is obtained by the maximum between-class variance method, the binary map is obtained from the threshold, the region given by the binary map acts as a limit, and the SURF algorithm detects only inside this region, which improves the accuracy of feature-point extraction.
  • The traditional KLT algorithm assumes small, coherent motion within the feature window; when the baseline distance or the angle between the two cameras is too large, the displacement of the feature points between the left and right camera images becomes too large and the matching accuracy is low.
  • In the present method, when the KLT algorithm performs feature-point matching, the optical flow is computed at the top level of the DoG pyramid and the resulting motion estimate is used as the starting point of the next level's computation; after iterating down to the lowest level, the matching accuracy is improved and faster, longer motions can be tracked and matched.
  • In step S3 the scene is rendered by region according to the gaze position, and the display of the part of the image being looked at is adjusted, which amounts to a two-dimensional pixel adjustment; an infrared camera installed inside the VR helmet captures the eye features clearly.
  • When the virtual picture is rendered in layers by tracking the pupils and head movement, eye tracking is implemented through image grey-scaling, image filtering, binarization, edge detection and pupil-centre positioning; based on the eye position, different regions and different levels of the scene are rendered separately in the Unigine engine, with high-, medium- and low-level rendering performed in turn from the inside out around the pupil gaze point, lowering the edge resolution.
  • When the Unigine engine renders, the position the eye is looking at uses a high image resolution so that scene details are evident; the natural environment and terrain information in the engine is like a picture whose detail changes with distance: for example, from very far away the land is khaki with some green, closer up more detail such as the location of forests becomes visible, and at that distance the system loads some pictures of trees on top of this.
  • The located pupil-centre position is mapped to the viewing region of the human eye in the three-dimensional scene, and rendering proceeds outward from this region in three layers, with high-, medium- and low-level rendering in turn; the innermost layer is the sharpest and the resolution decreases and gradually blurs outward, which saves pixels and improves rendering speed.
  • Rendering outward from the pupil position includes an image pre-processing step: piecewise-linear grey-level transformation enhances the recognizability of the image, median filtering reduces noise, the feature region is quickly segmented by an iterative method, candidate pupil regions are screened by their area and roundness characteristics, and finally the pupil centre is obtained by least-squares ellipse fitting; the binocular focus range is calculated from the pupil centre.
  • Relationship between the two cameras, the infrared camera and the VR helmet in the present invention: once the VR helmet is worn, the eyes cannot see the real world and see only the images on the two lenses inside the helmet; two cameras mounted on the outside of the helmet at the eye positions directly capture the real scene in the viewing direction, and real-time three-dimensional reconstruction is performed through these two cameras; the infrared camera is installed inside the VR helmet, facing the eyes, and captures eye activity in order to locate the pupil position.
  • From the eye position, the point on the lens being viewed can be obtained, and the image displayed on the lens can then be rendered by region at different resolutions; the realistic scene in the mixed virtual reality is the real scene captured by the cameras, reconstructed in the virtual scene to mix the virtual and the real.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Computer Hardware Design (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Optics & Photonics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A mixed reality method applied to a flight simulator. Two cameras arranged at the position of the human eyes adaptively spatially map all devices and personnel, including the user, within the range depth of the simulated cabin; three-dimensional reconstruction is performed within the binocular visible range, fine three-dimensional reconstruction within the central range of binocular focus, and coarse three-dimensional reconstruction outside the binocular focus range. Cameras fixed at set positions and angles around the simulation cabin collect real-time information of the feature points outside the cabin, and the relative spatial position information of the human body is obtained from the VR helmet and the locator; the virtual scene is compensated and corrected and the depth information in the virtual scene is matched with the real distance information, giving the relative positional relationship between the virtual scene and the VR helmet. The virtual picture is rendered in layers by tracking the pupils and head movement, rendering outward from the pupil position as the centre point, with the rendering resolution decreasing from the inside out.

Description

A mixed reality method applied to a flight simulator
Technical Field
The present invention relates to the field of virtual reality, and in particular to a mixed reality method applied to a flight simulator.
Background Art
With the continuous development of mixed reality technology, mixed reality is gradually being applied in various simulation fields. Current flight simulators based on mixed reality technology have several shortcomings: when the flight simulator is in different attitudes, moving or vibrating, the images captured by the cameras or sensing devices shake and positioning is inaccurate; human body parts are materialized in the virtual scene through optical or inertial sensing devices, but the various items of equipment cannot be materialized; because human skin and clothing in the virtual scene lack realism, the user cannot feel truly immersed; the field of view is limited by the head-mounted display or the camera's shooting angle, and neither eye tracking nor adaptive focusing is possible; the user's vision, hearing, touch and sense of space in the real world are inconsistent with the virtual world, producing a strong sense of unreality and dizziness; and the system cannot quickly integrate flight simulators from different manufacturers for rapid upgrading and secondary development. These problems prevent the real and virtual worlds from merging into a credible visual environment in which physical and digital objects coexist and interact in real time, and they reduce the practicality of the flight simulator.
Summary of the Invention
To overcome the problems of the prior art, the present invention provides a mixed reality method applied to a flight simulator.
A mixed reality method applied to a flight simulator. Step S1: adaptively spatially map all devices and personnel, including the user, within the range depth of the simulated cabin using two cameras arranged at the position of the human eyes; perform three-dimensional reconstruction within the binocular visible range, fine three-dimensional reconstruction within the central range of binocular focus, and coarse three-dimensional reconstruction outside the binocular focus range;
Step S2: collect real-time information of the feature points outside the simulated cabin using cameras fixed at set positions and angles around the simulation cabin, together with the relative spatial position information of the human body obtained from the VR helmet and the locator; compensate and correct the virtual scene and match the depth information in the virtual scene with the real distance information, so as to obtain the relative positional relationship between the virtual scene and the VR helmet;
Step S3: render the virtual picture in layers by tracking the pupils and head movement, rendering outward from the pupil position as the centre point, with the rendering resolution decreasing from the inside out.
Preferably, in step S1, when all devices and personnel, including the user, within the range depth of the simulated cabin are adaptively spatially mapped by the two cameras arranged at the position of the human eyes: the spatial region captured by the two cameras is segmented, using in turn three voxel sizes from small to large, so that the same spatial region is divided into three data-independent voxel sets.
Preferably, when three-dimensional reconstruction is performed within the binocular visible range, fine three-dimensional reconstruction within the central range of binocular focus and coarse three-dimensional reconstruction outside the focus range: the state of each voxel is determined and the set of non-empty voxels is obtained; the binocular focus range is tracked in real time; the voxel set of the smallest size is selected for the region within the centre of binocular focus, a voxel set of a size larger than that used within the focus centre is selected for the part outside the focus centre but within the visible region, and the voxel set of the largest size is selected for the rest of the binocular visible range, so that the reconstruction effect goes from fine to coarse from the inside of the binocular focus range outward; pixelated texture mapping is then performed to obtain the three-dimensional texture effect of the virtual scene.
Preferably, in step S2, when the real-time feature-point information outside the simulated cabin is collected by the cameras fixed at set positions and angles around the simulation cabin: the cameras are calibrated by the least-squares method, the initial positions of the feature points on the static cabin surface are recorded, the initial pose of the simulated cabin is obtained through the stereo imaging principle and coordinate transformation, and an initial coordinate system is established.
Preferably, while the simulated cabin is moving, the dynamically changing feature points are captured in real time by the cameras fixed around the cabin, the feature points are extracted with the SURF algorithm and matched with the KLT algorithm, so that the feature points are tracked and matched when fast cabin motion produces displacement changes.
Preferably, the world coordinates of the feature points during cabin motion are obtained from the fixed cameras, and their change relative to the initial coordinates at that moment is calculated to obtain the positional relationship of the cabin; the relative spatial position information of the human body is obtained from the VR helmet and the locator, the virtual picture is dynamically corrected by an inverse transformation according to the positions of the cabin and the human body in the initial coordinate system, and the sense of space and distance of the internal virtual scene is matched with the external real scene.
Preferably, the feature points are circular black-and-white markers with a white inner ring and a black outer ring. When they are extracted with the SURF algorithm, the Gaussian difference-of-Gaussians (DoG) pyramid of the camera image is built and the first layer of each group of images is extracted; adaptive threshold segmentation with the maximum between-class variance method is applied to each group of images to obtain a binary map, which is used as a constraint condition for feature-point detection so that SURF detection is performed in the inner edge region of the feature points.
Preferably, when the KLT algorithm is used for feature-point matching, the optical flow is computed at the top level of the DoG pyramid, and the resulting motion estimate is used as the starting point of the computation at the next level, iterating down to the lowest level.
Preferably, when the virtual picture is rendered in layers by tracking the pupils and head movement: different regions and different levels of the scene are rendered separately in the Unigine engine, with high-, medium- and low-level rendering performed in turn from the inside out around the gaze position of the pupil centre.
Preferably, piecewise-linear grey-level image enhancement is used to improve the recognizability of the image, median filtering is then used to reduce noise, the feature region is quickly segmented by an iterative method, candidate pupil regions are screened by their area and roundness characteristics, and finally the pupil centre is obtained by least-squares ellipse fitting; the binocular focus range is obtained from the pupil centre.
Beneficial effects of the invention:
The mixed reality method applied to a flight simulator provided by the invention can perform real-time three-dimensional reconstruction of the space, enhances the realism of human skin and clothing in the virtual scene, improves the body's sense of space and distance, and produces genuine immersion; eye-tracking-based, region-by-region, multi-precision three-dimensional reconstruction increases scene rendering speed and efficiency.
The simulation cabin of the invention synchronizes the real scene in the cockpit with the virtual scene in front of the eyes through positioning, tracking and computer-vision algorithms, so that the trainee perceives the real cockpit equipment directly through vision, touch and hearing; this better solves the consistency problem between virtual and real space, enhances the practicability of the flight-training simulation system, reduces the difficulty of upgrading the immersion of an existing flight simulator, and improves the trainee's flight experience.
Eye tracking and adaptive focusing both guarantee the realism and immersion of the scene as seen by the eye and allow local rendering that matches the imaging characteristics of the human eye; the eye does not need to adapt actively to the screen, eye fatigue from overuse is avoided, hardware occupancy is reduced, and rendering quality and processing speed are improved.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the cameras of the VR helmet of the mixed reality method applied to a flight simulator of the present invention;
FIG. 2 is a schematic diagram of the multi-precision, three-voxel three-dimensional reconstruction of the method;
FIG. 3 is a schematic diagram of the feature points of the method;
FIG. 4 is a schematic diagram of the mixed reality simulation cabin of the method;
FIG. 5 is a schematic flow chart of the pupil-position calculation of the method.
Detailed Description of the Embodiments
A mixed reality method applied to a flight simulator:
Step S1: adaptively spatially map all devices and personnel, including the user, within the range depth of the simulated cabin using two cameras arranged at the position of the human eyes (as shown in FIG. 1, i.e. the two cameras on the VR helmet); perform three-dimensional reconstruction within the binocular visible range, fine three-dimensional reconstruction within the central range of binocular focus, and coarse three-dimensional reconstruction outside the binocular focus range;
Step S2: collect real-time information of the feature points outside the simulated cabin using cameras fixed at set positions and angles around the simulation cabin, together with the relative spatial position information of the human body obtained from the VR helmet and the locator; compensate and correct the virtual scene and match the depth information in the virtual scene with the real distance information, so as to obtain the relative positional relationship between the virtual scene and the VR helmet;
Step S3: render the virtual picture in layers by tracking the pupils and head movement, rendering outward from the pupil position as the centre point, with the rendering resolution decreasing from the inside out;
Further, in step S1, the three-dimensional reconstruction of the objects seen by the two eyes can be regarded as a process of building with blocks: the reconstruction is equivalent to assembling an object from blocks, and the assembly is divided into fine and coarse; the part the eyes focus on uses a fine process with smaller blocks, and the unfocused part uses a coarse process with larger blocks, which saves blocks and so accelerates the three-dimensional reconstruction; when the cameras are used for reconstruction, the process includes texture mapping, so the skin and clothing textures of the user's own body are displayed on the reconstructed 3D model;
When all devices and personnel, including the user, within the range depth of the simulated cabin are adaptively spatially mapped by the two cameras at the eye position: the spatial region captured by the two cameras is segmented, using in turn three voxel sizes from small to large, so that the same spatial region is divided into three data-independent voxel sets; a minimal sketch of this partition step is given below;
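As an illustration only (not code from the patent), the following Python sketch splits one reconstructed point cloud into three independent voxel index sets; the concrete voxel sizes are assumptions, since the patent does not give numeric values.

```python
import numpy as np

# Assumed voxel edge lengths in metres (fine, medium, coarse); the patent gives no values.
VOXEL_SIZES = (0.01, 0.03, 0.09)

def partition_into_voxel_sets(points, origin, sizes=VOXEL_SIZES):
    """Split the same spatial region into three data-independent voxel sets.

    points : (N, 3) array of 3D points seen by the two helmet cameras.
    origin : (3,) lower corner of the mapped volume.
    Returns {voxel_size: set of integer (i, j, k) voxel indices}.
    """
    voxel_sets = {}
    for size in sizes:
        idx = np.floor((np.asarray(points) - origin) / size).astype(np.int64)
        # Each resolution keeps its own index set, so it can be stored in a
        # lookup table and read back directly during later reconstruction.
        voxel_sets[size] = {tuple(v) for v in idx}
    return voxel_sets

if __name__ == "__main__":
    # A random cloud standing in for the mapped cabin interior (2 m cube).
    cloud = np.random.rand(1000, 3) * 2.0
    sets = partition_into_voxel_sets(cloud, origin=np.zeros(3))
    for size, vox in sets.items():
        print(f"voxel size {size:.2f} m -> {len(vox)} occupied voxels")
```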
As shown in FIG. 2, when three-dimensional reconstruction is performed within the binocular visible range, fine reconstruction within the central range of binocular focus and coarse reconstruction outside the focus range: through the two cameras on the VR helmet, combined with a visual-hull reconstruction method, the state of each voxel is determined and the non-empty voxel set is obtained; the binocular focus range is tracked and a different voxel set is selected for each visual region: the smallest voxel size for the region within the centre of binocular focus, a voxel size larger than that of the focus centre for the part outside the focus centre but within the visible region, and the largest voxel size for the rest of the binocular visible range, so that the reconstruction goes from fine to coarse from the inside of the focus range outward; pixelated texture mapping is then performed to obtain the three-dimensional texture of the virtual scene, establishing a fine-to-coarse three-dimensional model and giving the trainee a sense of the presence of real objects in the virtual world; a sketch of the hull-carving test follows this paragraph;
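The sketch below shows only the visual-hull part of the paragraph above: voxel centres are kept when they project inside the foreground silhouette of every camera view. The camera parameters and silhouette masks are assumed inputs; this is a generic illustration, not the patent's implementation.

```python
import numpy as np
import cv2

def carve_visual_hull(voxel_centers, cameras, silhouettes):
    """Return the non-empty voxels of one resolution level.

    voxel_centers : (M, 3) float32 array of candidate voxel centres.
    cameras       : list of dicts with 'K', 'dist', 'rvec', 'tvec' (calibration assumed known).
    silhouettes   : list of uint8 masks (255 = foreground) from the two helmet cameras.
    """
    keep = np.ones(len(voxel_centers), dtype=bool)
    for cam, sil in zip(cameras, silhouettes):
        proj, _ = cv2.projectPoints(voxel_centers.astype(np.float32),
                                    cam['rvec'], cam['tvec'], cam['K'], cam['dist'])
        uv = np.rint(proj.reshape(-1, 2)).astype(int)
        h, w = sil.shape
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        hit = np.zeros(len(uv), dtype=bool)
        hit[inside] = sil[uv[inside, 1], uv[inside, 0]] > 0
        keep &= hit            # a voxel survives only if every view sees it as foreground
    return voxel_centers[keep]
```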
The voxels obtained by spatial segmentation have independent data and can be stored in a table in advance; during later three-dimensional reconstruction the data can be read directly by table lookup, increasing reconstruction speed;
During voxel-based three-dimensional reconstruction, the position of the gaze can be determined from the eye-tracking data and multi-precision reconstruction is performed: the gazed region has the highest precision and the precision decreases outward, which matches the way the human eye views real objects; this both increases the speed of three-dimensional reconstruction and reduces GPU occupancy.
In step S2, in order to accurately track the body parts and cabin equipment that the trainee can observe, and to achieve an accurate fusion of the real scene observed by the eye with the virtual environment regardless of whether the body is standing or seated or the six-degree-of-freedom motion platform is in motion, the positions of the two eye-position cameras (i.e. of the VR helmet) must be located accurately: two cameras at fixed positions and fixed angles around the simulation cabin collect the feature-point information in real time, the feature points being fixed on the outside of the cabin, and the position and attitude of the cabin are obtained by binocular stereo vision; using the relative spatial position information of the human body obtained from the VR helmet and the locator, the virtual scene is compensated and corrected and the depth information in the virtual scene is matched with the real distance information, giving the relative positional relationship between the virtual scene and the VR helmet; the locator is fixed at an external, independent fixed position, and the relative position information is obtained by computing the relative coordinates of this fixed position and the VR helmet; in this way the effect produced by the motion of the six-degree-of-freedom platform is compensated and corrected in the virtual scene, the internal virtual scene is matched to the external real model and the in-scene depth to the real distance, and the errors that motion would otherwise introduce into the cameras and the locator are avoided (a small pose-composition sketch follows):
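A minimal sketch of the compensation idea, under the assumption that both the cabin pose (from the external cameras) and the helmet pose (from the locator) are available as 4x4 homogeneous transforms in the same world frame; rendering from the helmet pose expressed in the cabin frame cancels the platform motion.

```python
import numpy as np

def pose_in_cabin_frame(T_world_cabin, T_world_helmet):
    """Express the VR helmet pose relative to the moving simulation cabin.

    Both inputs are 4x4 homogeneous transforms in the same world frame:
    T_world_cabin  comes from the external cameras tracking the cabin markers,
    T_world_helmet from the VR helmet / locator tracking.
    Rendering from this relative pose compensates the 6-DOF platform motion.
    """
    return np.linalg.inv(T_world_cabin) @ T_world_helmet

# Example: if the platform tilts while the head stays still inside the cabin,
# the relative pose returned here stays (numerically) constant.
```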
As shown in FIG. 4, when the real-time feature-point information of the simulation cabin is collected by the cameras fixed at set positions and angles around it: these cameras are calibrated with the least-squares method through the OpenCV library, the parameters including intrinsic and extrinsic parameters; the initial positions of the feature points on the static cabin surface are recorded, the initial pose of the simulated cabin is obtained through the stereo imaging principle and coordinate transformation, and an initial coordinate system is established (a calibration and triangulation sketch is given below);
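The OpenCV sketch below outlines this step under stated assumptions: each fixed camera is calibrated from views of a known calibration target (the patent only says least-squares calibration, so the checkerboard-style input is an assumption), the pair is then stereo-calibrated, and the static markers on the cabin surface are triangulated to record their initial positions; the helper names are hypothetical.

```python
import numpy as np
import cv2

def calibrate_pair(obj_pts, img_pts1, img_pts2, image_size):
    """Least-squares calibration of the two fixed cameras (intrinsics + extrinsics)."""
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, img_pts1, image_size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, img_pts2, image_size, None, None)
    _, K1, d1, K2, d2, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, img_pts1, img_pts2, K1, d1, K2, d2, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])   # camera 1 taken as the origin
    P2 = K2 @ np.hstack([R, T])
    return P1, P2

def triangulate_markers(P1, P2, pts1, pts2):
    """Triangulate matched marker centres (2xN float arrays) into initial 3D positions."""
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)       # 4xN homogeneous coordinates
    return (X_h[:3] / X_h[3]).T                           # Nx3; defines the initial frame
```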
While the simulated cabin is moving, the dynamically changing feature points are captured in real time by the cameras fixed at set positions and angles around the cabin, extracted with the SURF algorithm and matched with the KLT algorithm, so that the feature points are tracked and matched when fast cabin motion produces displacement changes;
The world coordinates of the feature points during cabin motion are obtained from the fixed cameras, and their change relative to the initial coordinates at that moment is calculated to obtain the positional relationship of the simulated cabin; the relative spatial position information of the human body is obtained from the VR helmet and the locator, the virtual picture is dynamically corrected by an inverse transformation according to the positions of the cabin and the human body in the initial coordinate system, and the sense of space and distance of the internal virtual scene is matched with the external real scene (a sketch of estimating this pose change from the tracked markers follows);
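A sketch of recovering the cabin's motion from the triangulated markers: the rigid transform mapping the initial marker coordinates to the current ones is estimated in the least-squares sense (Kabsch algorithm), and its inverse is what would be applied to the virtual picture. This is a generic formulation, not necessarily the patent's exact computation.

```python
import numpy as np

def rigid_transform(initial_pts, current_pts):
    """Least-squares R, t such that current ≈ R @ initial + t (both arrays are Nx3)."""
    c0, c1 = initial_pts.mean(axis=0), current_pts.mean(axis=0)
    H = (initial_pts - c0).T @ (current_pts - c1)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c1 - R @ c0
    return R, t

def correction_transform(R, t):
    """Inverse transform used to dynamically correct the virtual picture."""
    R_inv = R.T
    return R_inv, -R_inv @ t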
When the feature points are extracted (a schematic of the feature points is shown in FIG. 3), they are circular black-and-white markers with a white inner ring and a black outer ring; each marker consists of three circles forming an isosceles triangle, which yields a strong edge structure. When the SURF algorithm is used for extraction, the Gaussian difference-of-Gaussians (DoG) pyramid of the camera image is built and the first layer of each group of images is extracted; the maximum between-class variance method performs adaptive threshold segmentation on each group of images to obtain a binary map, which is used as a constraint condition for feature-point detection so that SURF detection is performed in the inner edge region of the feature points; that is, the threshold is obtained by the maximum between-class variance method, the binary map is obtained from the threshold, the region given by the binary map is used as a limit, and the SURF algorithm detects only inside this region, improving the accuracy of feature-point extraction (see the masked-detection sketch below);
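A sketch of the mask-constrained detection described above. It assumes an OpenCV build with the non-free xfeatures2d module (SURF is not included in default builds), and the DoG layer is approximated here as the difference of two Gaussian blurs of the first octave.

```python
import cv2

def detect_markers_surf(gray):
    """Detect SURF keypoints only inside the Otsu foreground of a DoG layer (gray: uint8)."""
    # First DoG layer of the first octave: difference of two Gaussian scales.
    dog = cv2.subtract(cv2.GaussianBlur(gray, (0, 0), 1.6),
                       cv2.GaussianBlur(gray, (0, 0), 1.6 * 1.26))
    dog = cv2.normalize(dog, None, 0, 255, cv2.NORM_MINMAX).astype('uint8')
    # Maximum between-class variance (Otsu) threshold gives the binary constraint map.
    _, mask = cv2.threshold(dog, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)    # needs opencv-contrib (non-free)
    keypoints, descriptors = surf.detectAndCompute(gray, mask)  # mask restricts detection
    return keypoints, descriptors
```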
The traditional KLT algorithm assumes small, coherent motion within the feature window; when the baseline distance or the angle between the two cameras is too large, the displacement of the feature points between the left and right camera images becomes too large and the matching accuracy is low. In the present invention, when the KLT algorithm is used for feature-point matching, the optical flow is computed at the top level of the DoG pyramid and the resulting motion estimate is used as the starting point of the computation at the next level, iterating down to the lowest level; this finally improves the accuracy of feature-point matching and enables tracking and matching of faster and longer motions (a pyramidal-LK sketch follows);
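A sketch of coarse-to-fine KLT tracking with OpenCV: `calcOpticalFlowPyrLK` computes the flow at the top pyramid level first and propagates the estimate downward, which is the scheme described above (OpenCV builds a Gaussian image pyramid internally rather than an explicit DoG pyramid, so this is an approximation of the patent's variant).

```python
import cv2

def track_features(prev_gray, next_gray, prev_pts):
    """Track feature points between frames with pyramidal (coarse-to-fine) KLT.

    prev_pts : (N, 1, 2) float32 array of point coordinates in prev_gray.
    Returns the (old, new) point pairs that were successfully tracked.
    """
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=4,   # several pyramid levels so large motions survive
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    ok = status.ravel() == 1
    return prev_pts[ok], next_pts[ok]
```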
In step S3, the scene is rendered by region according to the gaze position, and the display of the part of the image being looked at is adjusted, which amounts to a two-dimensional pixel adjustment; an infrared camera installed inside the VR helmet can capture the eye features clearly. When the virtual picture is rendered in layers by tracking the pupils and head movement: eye tracking is implemented through image grey-scaling, image filtering, binarization, edge detection and pupil-centre positioning; based on the eyeball position and the principle of human gaze, different regions and different levels of the scene are rendered separately in the Unigine engine, with high-, medium- and low-level rendering performed in turn from the inside out around the pupil gaze point, lowering the edge resolution, reducing GPU occupancy, raising the frame rate and greatly improving rendering efficiency. When the Unigine engine renders, the position the eye is looking at uses a high image resolution so that scene details are evident; the natural environment and terrain information in the Unigine engine is like a picture whose detail changes with distance: for example, from very far away the colour of the land is khaki with some green; closer up more detail is visible, such as where the forests are, and at that distance the system loads some pictures of trees on top of this; closer still, details of the trees appear, such as leaves, trunks and texture. In the above process the gazed region is shown with richer detail and the non-gazed region with a coarser scene; at the same time these rendered parts do not lie on the same plane, for example a tree in front and the mountains behind it lie on different levels, so the rendering is done over different regions and on different levels, and what is finally shown in the VR helmet amounts to different resolutions of the scene (a generic foveated-layer sketch is given below; it is not the Unigine API).
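The Unigine-side configuration cannot be reproduced here, so the sketch below shows only the generic geometry of the three-layer scheme: given the gaze point on the display, each pixel (or tile) is assigned to the high, medium or low rendering layer by its distance from the gaze point; the layer radii are assumptions.

```python
import numpy as np

# Assumed layer radii in pixels; the patent does not give concrete numbers.
LAYER_RADII = (200, 500)    # inside 200 px -> high, inside 500 px -> medium, otherwise low

def foveation_layers(width, height, gaze_xy, radii=LAYER_RADII):
    """Return a (height, width) map of rendering layers: 0 = high, 1 = medium, 2 = low."""
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    layers = np.full((height, width), 2, dtype=np.uint8)
    layers[dist < radii[1]] = 1
    layers[dist < radii[0]] = 0
    return layers
```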
The located pupil-centre position is mapped to the viewing region of the human eye in the three-dimensional scene, and rendering proceeds outward from this region in layers, divided into three layers from the inside out with high-, medium- and low-level rendering in turn; the innermost layer is the sharpest and the resolution decreases and gradually blurs outward, which saves pixels and improves rendering speed.
As shown in FIG. 5, when the present invention renders outward from the pupil position as the centre point, an image pre-processing step is included: piecewise-linear grey-level transformation enhances the recognizability of the image, median filtering reduces noise, the feature region is quickly segmented by an iterative method, candidate pupil regions are screened by their area and roundness characteristics, and finally the pupil centre is obtained by least-squares ellipse fitting; the binocular focus range is calculated from the pupil centre (see the pipeline sketch below);
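A sketch of the pre-processing chain named above (piecewise-linear stretch, median filter, iterative threshold, area/roundness screening, least-squares ellipse fit), written with OpenCV; the stretch breakpoints and the screening limits are assumed values.

```python
import numpy as np
import cv2

def piecewise_linear_stretch(gray, r1=70, r2=140, s1=30, s2=220):
    """Piecewise-linear grey-level transform (breakpoints are assumed values)."""
    lut = np.interp(np.arange(256), [0, r1, r2, 255], [0, s1, s2, 255]).astype(np.uint8)
    return cv2.LUT(gray, lut)

def iterative_threshold(gray, eps=0.5):
    """Classic iterative global threshold (ISODATA-style)."""
    t = gray.mean()
    while True:
        lo, hi = gray[gray <= t], gray[gray > t]
        if lo.size == 0 or hi.size == 0:
            return float(t)
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < eps:
            return float(t_new)
        t = t_new

def pupil_center(eye_gray):
    """Return the pupil centre (x, y) from an infrared eye image, or None."""
    img = piecewise_linear_stretch(eye_gray)
    img = cv2.medianBlur(img, 5)
    t = iterative_threshold(img)
    _, binary = cv2.threshold(img, t, 255, cv2.THRESH_BINARY_INV)   # the pupil is dark
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    for c in contours:
        area, perim = cv2.contourArea(c), cv2.arcLength(c, True)
        if area < 200 or len(c) < 5 or perim == 0:
            continue                                   # too small or too few points to fit
        roundness = 4.0 * np.pi * area / perim ** 2    # 1.0 for a perfect circle
        if roundness < 0.6:
            continue
        (cx, cy), _axes, _angle = cv2.fitEllipse(c)    # least-squares ellipse fit
        return cx, cy
    return None
```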
Relationship between the two cameras, the infrared camera and the VR helmet in the present invention: after the VR helmet is put on, the eyes cannot see the real world and can only see the images on the two lenses inside the helmet; two cameras are mounted on the outside of the helmet at the eye positions and can directly capture the real scene in the direction of the eyes, and real-time three-dimensional reconstruction is performed through these two cameras; the infrared camera is installed inside the VR helmet, facing the eyes, and can capture eye activity, which is used to locate the pupil position; from the eye position, the point on the lens being viewed can be obtained, and the image displayed on the lens can then be rendered by region at different resolutions; the real scene in the mixed virtual reality is the real scene captured by the cameras, which is reconstructed inside the virtual scene to mix the virtual and the real.
The two cameras in front of the VR helmet photograph the interior of the simulation cabin, which is reconstructed in real time into the virtual scene displayed in the helmet; this is of course not limited to the virtual-real fusion of the instrument panel and the scene outside the window: the virtual scene can also display the aircraft's cockpit environment and the environment outside the aircraft, the three-dimensionally reconstructed instrument display, and the user's own hands, legs and so on as seen by the user.
The embodiments described above merely describe preferred implementations of the present invention and do not limit its scope; without departing from the design spirit of the invention, all variations and improvements made to the technical solution of the invention by those of ordinary skill in the art shall fall within the scope of protection determined by the claims of the invention.

Claims (10)

  1. A mixed reality method applied to a flight simulator, characterized in that:
    Step S1: adaptively spatially map all devices and personnel, including the user, within the range depth of the simulated cabin using two cameras arranged at the position of the human eyes; perform three-dimensional reconstruction within the binocular visible range, fine three-dimensional reconstruction within the central range of binocular focus, and coarse three-dimensional reconstruction outside the binocular focus range;
    Step S2: collect real-time information of the feature points outside the simulated cabin using cameras fixed at set positions and angles around the simulation cabin, together with the relative spatial position information of the human body obtained from the VR helmet and the locator; compensate and correct the virtual scene and match the depth information in the virtual scene with the real distance information, so as to obtain the relative positional relationship between the virtual scene and the VR helmet;
    Step S3: render the virtual picture in layers by tracking the pupils and head movement, rendering outward from the pupil position as the centre point, with the rendering resolution decreasing from the inside out.
  2. The mixed reality method applied to a flight simulator according to claim 1, characterized in that:
    in step S1, when all devices and personnel, including the user, within the range depth of the simulated cabin are adaptively spatially mapped by the two cameras arranged at the position of the human eyes: the spatial region captured by the two cameras is segmented, using in turn three voxel sizes from small to large, so that the same spatial region is divided into three data-independent voxel sets.
  3. The mixed reality method applied to a flight simulator according to claim 2, characterized in that:
    when three-dimensional reconstruction is performed within the binocular visible range, fine three-dimensional reconstruction within the central range of binocular focus and coarse three-dimensional reconstruction outside the binocular focus range: the state of each voxel is determined and the set of non-empty voxels is obtained; the binocular focus range is tracked in real time; the voxel set of the smallest size is selected for the region within the centre of binocular focus, a voxel set of a size larger than that used within the focus centre is selected for the part outside the focus centre but within the visible region, and the voxel set of the largest size is selected for the rest of the binocular visible range, so that the reconstruction effect goes from fine to coarse from the inside of the binocular focus range outward; pixelated texture mapping is then performed to obtain the three-dimensional texture effect of the virtual scene.
  4. The mixed reality method applied to a flight simulator according to claim 1, characterized in that:
    in step S2, when the real-time feature-point information outside the simulated cabin is collected by the cameras fixed at set positions and angles around the simulation cabin: the cameras are calibrated by the least-squares method, the initial positions of the feature points on the static cabin surface are recorded, the initial pose of the simulated cabin is obtained through the stereo imaging principle and coordinate transformation, and an initial coordinate system is established.
  5. The mixed reality method applied to a flight simulator according to claim 4, characterized in that:
    while the simulated cabin is moving, the dynamically changing feature points are captured in real time by the cameras fixed at set positions and angles around the simulation cabin, the feature points are extracted with the SURF algorithm and matched with the KLT algorithm, so that the feature points are tracked and matched when fast cabin motion produces displacement changes.
  6. The mixed reality method applied to a flight simulator according to claim 5, characterized in that:
    the world coordinates of the feature points during cabin motion are obtained from the cameras fixed at set positions and angles around the simulation cabin, and their change relative to the initial coordinates at that moment is calculated to obtain the positional relationship of the simulated cabin; the relative spatial position information of the human body is obtained from the VR helmet and the locator, the virtual picture is dynamically corrected by an inverse transformation according to the position information of the simulated cabin and the human body in the initial coordinate system, and the sense of space and distance of the internal virtual scene is matched with the external real scene.
  7. The mixed reality method applied to a flight simulator according to claim 5, characterized in that:
    when the feature points are extracted, the feature points are circular black-and-white markers with a white inner ring and a black outer ring; when the SURF algorithm is used for feature-point extraction, the Gaussian difference-of-Gaussians (DoG) pyramid of the camera image is built and the first layer of each group of images is extracted, adaptive threshold segmentation is performed on each group of images with the maximum between-class variance method to obtain a binary map, and the binary map is used as a constraint condition for feature-point detection, so that SURF detection is performed in the inner edge region of the feature points.
  8. The mixed reality method applied to a flight simulator according to claim 6, characterized in that:
    when the KLT algorithm is used for feature-point matching, the optical flow is computed at the top level of the Gaussian difference-of-Gaussians (DoG) pyramid, and the resulting motion estimate is used as the starting point of the computation at the next level, iterating down to the lowest level.
  9. The mixed reality method applied to a flight simulator according to claim 1, characterized in that:
    when the virtual picture is rendered in layers by tracking the pupils and head movement: different regions and different levels of the scene are rendered separately in the Unigine engine, with high-, medium- and low-level rendering performed in turn from the inside out around the gaze position of the pupil centre.
  10. The mixed reality method applied to a flight simulator according to claim 1, characterized in that:
    piecewise-linear grey-level image enhancement is used to improve the recognizability of the image, median filtering is then used to reduce noise, the feature region is quickly segmented by an iterative method, candidate pupil regions are screened by their area and roundness characteristics, and finally the pupil centre is obtained by least-squares ellipse fitting; the binocular focus range is obtained from the pupil centre.
PCT/CN2018/107555 2018-01-22 2018-09-26 A mixed reality method applied to a flight simulator WO2019140945A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810060883.3 2018-01-22
CN201810060883.3A CN108305326A (zh) 2018-01-22 2018-01-22 一种混合虚拟现实的方法

Publications (1)

Publication Number Publication Date
WO2019140945A1 true WO2019140945A1 (zh) 2019-07-25

Family

ID=62866294

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/107555 WO2019140945A1 (zh) 2018-01-22 2018-09-26 一种应用于飞行模拟器的混合现实方法

Country Status (2)

Country Link
CN (1) CN108305326A (zh)
WO (1) WO2019140945A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113885355A (zh) * 2021-10-12 2022-01-04 江西洪都航空工业集团有限责任公司 A targeting pod simulator

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108305326A (zh) 2018-01-22 2018-07-20 中国人民解放军陆军航空兵学院 A method of mixed virtual reality
CN109712224B (zh) * 2018-12-29 2023-05-16 海信视像科技股份有限公司 Method and apparatus for rendering a virtual scene, and smart device
CN110460831B (zh) 2019-08-22 2021-12-03 京东方科技集团股份有限公司 Display method, apparatus, device and computer-readable storage medium
JP7274392B2 (ja) * 2019-09-30 2023-05-16 京セラ株式会社 Camera, head-up display system, and moving body
CN111459274B (zh) * 2020-03-30 2021-09-21 华南理工大学 A 5G+AR-based teleoperation method for unstructured environments
CN111882608A (zh) * 2020-07-14 2020-11-03 中国人民解放军军事科学院国防科技创新研究院 A method for estimating the pose between an augmented-reality-glasses tracking camera and the human eye
CN112308982A (zh) * 2020-11-11 2021-02-02 安徽山水空间装饰有限责任公司 A decoration effect display method and device
CN112562065B (zh) * 2020-12-17 2024-09-10 深圳市大富网络技术有限公司 A method, system and device for rendering virtual objects in a virtual world
CN112669671B (zh) * 2020-12-28 2022-10-25 北京航空航天大学江西研究院 A mixed reality flight simulation system based on physical-object interaction
CN116205952B (zh) * 2023-04-19 2023-08-04 齐鲁空天信息研究院 Face recognition and tracking method, apparatus, electronic device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101231790A (zh) * 2007-12-20 2008-07-30 北京理工大学 Augmented reality flight simulator based on multiple fixed cameras
US20160328884A1 (en) * 2014-11-27 2016-11-10 Magic Leap, Inc. Virtual/augmented reality system having dynamic region resolution
CN107154197A (zh) * 2017-05-18 2017-09-12 河北中科恒运软件科技股份有限公司 Immersive flight simulator
CN108305326A (zh) * 2018-01-22 2018-07-20 中国人民解放军陆军航空兵学院 A method of mixed virtual reality

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102568026B (zh) * 2011-12-12 2014-01-29 浙江大学 A three-dimensional augmented reality method for multi-view autostereoscopic display
US10231614B2 (en) * 2014-07-08 2019-03-19 Wesley W. O. Krueger Systems and methods for using virtual reality, augmented reality, and/or a synthetic 3-dimensional information for the measurement of human ocular performance
CN105955456B (zh) * 2016-04-15 2018-09-04 深圳超多维科技有限公司 Method and apparatus for fusing virtual reality and augmented reality, and smart wearable device
CN106055113B (zh) * 2016-07-06 2019-06-21 北京华如科技股份有限公司 A mixed-reality helmet display system and control method
CN106843456B (zh) * 2016-08-16 2018-06-29 深圳超多维光电子有限公司 A display method, apparatus and virtual reality device based on attitude tracking


Also Published As

Publication number Publication date
CN108305326A (zh) 2018-07-20

Similar Documents

Publication Publication Date Title
WO2019140945A1 (zh) A mixed reality method applied to a flight simulator
US20190200003A1 (en) System and method for 3d space-dimension based image processing
US10269177B2 (en) Headset removal in virtual, augmented, and mixed reality using an eye gaze database
CN110874864A (zh) Method, apparatus, electronic device and system for acquiring a three-dimensional model of an object
CN109615703B (zh) Augmented reality image display method, apparatus and device
KR102212209B1 (ko) Gaze tracking method, apparatus and computer-readable recording medium
CN113366491B (zh) Eye tracking method, apparatus and storage medium
CN105869160A (zh) Method and system for realizing three-dimensional modeling and holographic display using Kinect
CN107862718B (zh) 4D holographic video capture method
CN107016730A (zh) A device for fusing virtual reality with a real scene
CN115063562A (zh) A virtual-real fusion augmented reality presentation method based on multi-view three-dimensional reconstruction
CN104732586B (zh) A fast reconstruction method for three-dimensional dynamic human body shape and three-dimensional motion optical flow
CN106981100A (zh) A device for fusing virtual reality with a real scene
JP6799468B2 (ja) Image processing apparatus, image processing method and computer program
EP4315253A1 (en) Surface texturing from multiple cameras
Su et al. View synthesis from multi-view RGB data using multilayered representation and volumetric estimation
EP4050400B1 (en) Display apparatuses and methods incorporating image masking
Madden et al. Active vision and virtual reality
CN118521741A (zh) Processing method, system, device and medium for human-eye focusing and deviation distance
CN115861072A (zh) Three-dimensional reconstruction method and storage device
Saffold et al. Virtualizing Humans for Game Ready Avatars

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18901514

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18901514

Country of ref document: EP

Kind code of ref document: A1