WO2022121645A1 - Method for generating realism of virtual objects in a teaching scene - Google Patents

Method for generating realism of virtual objects in a teaching scene (一种教学场景中虚拟对象的真实感生成方法)

Info

Publication number
WO2022121645A1
WO2022121645A1 (PCT/CN2021/131211)
Authority
WO
WIPO (PCT)
Prior art keywords
teaching
scene
objects
virtual
real
Prior art date
Application number
PCT/CN2021/131211
Other languages
English (en)
French (fr)
Inventor
杨宗凯
钟正
吴砥
吴珂
Original Assignee
华中师范大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华中师范大学 (Central China Normal University)
Publication of WO2022121645A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012 - Head tracking input arrangements
    • G06F 3/013 - Eye tracking input arrangements
    • G06F 3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/005 - General purpose rendering architectures
    • G06T 15/04 - Texture mapping
    • G06T 15/06 - Ray-tracing
    • G06T 15/50 - Lighting effects
    • G06T 15/506 - Illumination models
    • G06T 15/60 - Shadow generation
    • G06T 17/005 - Tree description, e.g. octree, quadtree
    • G06T 19/006 - Mixed reality
    • G06T 7/579 - Depth or shape recovery from multiple images, from motion
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/04 - Electrically-operated educational appliances with audible presentation of the material to be studied
    • G09B 5/06 - Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B 5/10 - Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations, all student stations being capable of presenting the same information simultaneously
    • G06F 2203/012 - Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • G06T 2207/10028 - Range image; Depth image; 3D point clouds
    • G06T 2210/12 - Bounding box

Definitions

  • The invention belongs to the field of teaching applications of mixed reality (MR) technology, and more particularly relates to a method for generating realism of virtual objects in a teaching scene.
  • Mixed Reality (MR)
  • Virtual Reality (VR)
  • MR devices, represented by HoloLens holographic glasses, can not only superimpose virtual and real scenes but also track the user's position through various sensors, forming an interactive feedback information loop among the user, learning resources, and the real environment.
  • The augmented teaching scene based on MR technology can break some of the limitations of using large screens or electronic whiteboards in existing classroom teaching and effectively improve learning outcomes; its immersion, interactivity, and intelligence can effectively stimulate learners' enthusiasm and initiative, and will profoundly change the teaching environment, teaching content, teaching methods, and teaching modes.
  • Imaging systems built by combining MR, holographic projection, and related technologies are becoming the display form of the next-generation intelligent teaching environment and have broad application prospects.
  • The present invention provides a method for generating realism of virtual objects in a teaching scene, offering a new and complete approach to meeting the demands that strong interaction in an augmented teaching environment places on the realism of virtual objects.
  • a realism generation method for virtual objects in a teaching scene comprising the following steps:
  • (1) Teaching space perception: formulate a depth data collection specification for the teaching environment and collect depth data of the teaching space along multiple trajectories and from multiple angles; use a semantic segmentation algorithm to extract and generate a 3D model of each object, construct an octree index structure of the scene, and perceive changes of scene objects within the field of view in real time;
  • use heuristic algorithms and cluster analysis to extract feature points and lines of objects, and apply spatial positioning and real-time mapping technology to optimize the understanding of the teaching scene and its models;
  • step (1) teaching space perception specifically includes the following steps:
  • (1-1-1) Formulate the teaching-space depth data collection specification: for teaching spaces of different areas and aspect ratios, define the collection route and moving speed of the active-ranging depth sensor, and collect depth data of each object in the teaching space along multiple trajectories and from multiple angles;
  • (1-2) Teaching space perception: construct the surface mesh model of the teaching space from the depth composite map, and use the semantic segmentation algorithm to extract and generate the 3D model of each object; partition the teaching-space scene with an octree structure and build an index of the scene structure to achieve fast intersection and collision processing between objects; track changes in the teacher's head movement and gaze direction, and perceive parameter changes of scene objects within the field of view in real time;
  • (1-2-1) Object model segmentation: construct the surface mesh model of the teaching space from the synthesized depth map, and use the semantic segmentation algorithm to extract and generate the 3D model of each object; based on each object's length, width, height, spatial position, orientation, and pose features, create a cuboid bounding box, and use the YOLO algorithm to quickly locate its position;
  • (1-3) Environment understanding: use a heuristic algorithm to extract feature points of each object in the teaching environment and set them as spatial anchors, optimizing the understanding of the teaching-space scene and models; analyze the surface geometric features of each object model and extract its feature planes with cluster analysis; use spatial positioning and real-time mapping technology to obtain the 3D surface models of visible objects in the teaching scene in real time;
  • Step (2), photorealistic virtual object generation, specifically includes the following steps:
  • (2-2) Realistic display effect generation: collect the illumination intensity at sampling points in the teaching scene;
  • a bilinear interpolation algorithm is used to calculate the illumination intensity at adjacent points, and the result is applied to the virtual object to realize a virtual-real fused lighting effect;
  • ShadowMap technology is used to generate realistic shadow effects of virtual objects in the teaching space in real time;
  • (2-2-2) Shadow generation: according to the type, number, and position of light sources in the teaching space, add a depth virtual camera at the light-source position, determine the scene objects whose bounding boxes fall within the virtual object's shadow-casting range, and use ShadowMap technology to create depth-texture shadows on the surface models of these objects;
  • (2-2-3) Dynamic shadow change: as the virtual object's position, pose, and scale change in the teaching space, its shadow-casting area in the teaching environment is updated in real time; the shadow slope ratio is calculated and, based on the depth-offset baseline, shadow aliasing is eliminated so that real-time dynamic shadow effects are represented realistically;
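  • As an illustration of the depth-offset baseline used above to suppress shadow aliasing, the following Python sketch computes a slope-dependent depth bias from the angle between the surface normal and the light direction. The constants and this particular formula are illustrative assumptions, not the patent's exact offset rule.

      import math

      def slope_scaled_bias(n_dot_l, constant_bias=0.0005, slope_factor=0.002, max_bias=0.01):
          """Larger bias on surfaces nearly parallel to the light (small n.l),
          where shadow-map self-shadowing ("shadow acne") is worst."""
          n_dot_l = max(min(n_dot_l, 1.0), 1e-4)
          slope = math.sqrt(1.0 - n_dot_l ** 2) / n_dot_l   # tan of the normal-to-light angle
          return min(constant_bias + slope_factor * slope, max_bias)

      print(slope_scaled_bias(0.9))   # surface facing the light: small bias
      print(slope_scaled_bias(0.1))   # grazing light: larger bias, clamped to max_bias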
  • (2-3) Occlusion processing: determine the positional relationship between the teacher and each object in the augmented teaching scene and, based on a Raycasting rendering mechanism, sort the objects by their depth-buffer values; use an optical-flow-based max-flow/min-cut tracking method to track each object's contour in real time and determine their occlusion relationships; by translating, stretching, and rotating simple planes, occlude the 3D meshes of complex regions in the space, simplifying the judgment of occlusion relationships among objects;
  • Step (3), dynamic interactive realistic effect generation, specifically includes the following steps:
  • (3-1) A multimodal interaction algorithm is adopted to support teachers in manipulating virtual objects through multiple interaction modes; the somatosensory effect of interaction prompts is set so that the greater the object's mass, the smaller the somatosensory offset level of the virtual hand; gaze-target and virtual-hand interaction prompts guide teachers to combine perceived spatial cues with their cognitive structures;
  • (3-1-1) Multimodal interaction: build a fusion algorithm for visual, auditory, and tactile multimodal interaction in the holographic imaging environment, supporting teachers in pushing, pulling, rotating, and moving virtual objects to enhance the authenticity of interactive operations in the teaching process;
  • (3-3-2) Obstacle avoidance: the teacher uses gesture and gaze interaction to move, rotate, and scale virtual objects in the augmented teaching scene; a scan-line algorithm calculates their next position, pose, and scale and checks whether they would collide with other objects; if so, movement is stopped or an obstacle-avoidance operation is performed;
  • The method formulates a depth data collection specification for the teaching space, uses a semantic segmentation algorithm to extract and generate 3D models of each object, perceives changes of scene objects within the field of view in real time, and optimizes the understanding of the teaching scene and its models. When virtual objects are placed and moved in a real teaching scene, virtual-real fused lighting and shadow effects can be achieved, and mask planes are used to occlude complex regions, simplifying the judgment of occlusion relationships among objects. Interaction results are synchronously located, mapped, and dynamically presented on multi-user terminals, and custom Shaders optimize the interactive rendering pipeline. As 5G, MR, holographic imaging, and related technologies mature, the requirements for generating and displaying realistic virtual objects keep rising, and the method helps meet the demand that strongly interactive augmented teaching environments place on virtual-object realism.
  • FIG. 1 is a flowchart of a method for generating a sense of reality of virtual objects in a teaching scene according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of the depth data collection route and points in the teaching space.
  • Figure 3 is a composite map of multi-site teaching space depth data.
  • Figure 4 is a schematic diagram of the segmentation of the 3D model of the teaching space.
  • Figure 5 is a schematic diagram of the structure of a 4-layer convolutional neural network.
  • FIG. 6 is a schematic diagram of a scene of subdivision of an octree structure in an embodiment of the present invention.
  • Fig. 7 is an effect diagram of depth texture shadow generation, wherein 1 is the virtual object, 2 is the virtual object shadow, 3 is the incident light, and 4 is the reflected light.
  • Figure 8 is a schematic diagram of the dynamic processing of shadow shift, wherein 1 is the shadow distortion plane, 2 is the pixel point, 3 is the center point, and L is the distance from the light source to the center point.
  • FIG. 9 is an effect diagram of the occlusion relationship between virtual and real objects.
  • Figure 10 is a schematic diagram of a 3D mesh responsible for occlusion in a complex teaching space area.
  • Figure 11 is a schematic diagram of the creation of an irregular collider.
  • an embodiment of the present invention provides a method for generating a sense of reality of a virtual object in a teaching scene, including the following steps:
  • (1) Teaching space perception: formulate the depth-sensor acquisition specification, collect depth information of the teaching space from multiple angles, use a semantic segmentation algorithm to generate the 3D model of each object, build an octree index structure of the scene, and perceive the changes of each object in the teaching space; use a heuristic algorithm to extract object features and, through spatial anchor settings, optimize the understanding of the teaching scene and models.
  • the perception of the teaching space specifically includes the following steps:
  • (1-1-1) Develop teaching space depth data collection specifications. For teaching spaces with different areas and ratios of length and width, the collection specifications of the acquisition route and moving speed of the active ranging depth sensor (Time of Flight, TOF) are formulated, and the depth data of each object in the teaching space is collected from multiple trajectories and multiple angles.
  • TOF (Time of Flight)
  • (1-1-2) Teaching-space depth data collection. The TOF sensor is worn on the teacher's head; following the collection specification and the layout of walls, desks, chairs, blackboards, and the podium in the teaching space, the collection route, collection positions, and moving speed are set to obtain the depth data of the teaching space. The coordinate values of each depth point are recorded as single-precision floating-point numbers, in meters.
  • (1-2) Teaching space perception. According to the depth composite map, the surface mesh model of the teaching space is constructed, and the 3D model of each object is extracted and generated with the semantic segmentation algorithm (as shown in Figure 4); the teaching-space scene is partitioned with an octree structure and a scene index structure is built to enable fast intersection and collision processing between objects; changes in the teacher's head movement and gaze direction are tracked, and parameter changes of scene objects within the field of view are perceived in real time.
  • A four-level convolutional network is designed to extract the depth information (see Figure 5): the first three levels s = {0, 1, 2} correspond to the input, hidden convolution, and output depth maps (I_s, C_s, O_s); the pooling layers P_s, P'_s between levels reduce the resolution of the depth map; the output layer O'_2 is the fourth level. A layer-wise supervised training scheme is adopted, in which the training results of the previous level are used for content extraction in the next level; the low-resolution levels provide prior knowledge for the high-resolution levels, and on-the-fly sampling lets information from a larger receptive field contribute to the final decision.
  • According to each object's length, width, height, spatial position, orientation, and pose features, the circumscribed minimum cuboid bounding box is obtained for each object, and the YOLO algorithm is used to quickly locate its position.
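  • For illustration, a minimal Python sketch of the bounding-box step is given below: it computes the minimum axis-aligned cuboid enclosing a segmented object's depth points. The function name and the NumPy point-cloud representation are assumptions for illustration, not part of the claimed method (which additionally localizes the object with the YOLO algorithm).

      import numpy as np

      def cuboid_bounding_box(points):
          """Return (min_corner, max_corner, size, center) of the minimal axis-aligned
          cuboid enclosing an object's depth points (N x 3 array, meters)."""
          min_corner = points.min(axis=0)
          max_corner = points.max(axis=0)
          size = max_corner - min_corner          # length, width, height
          center = (min_corner + max_corner) / 2  # box center in teaching-space coordinates
          return min_corner, max_corner, size, center

      # Example: depth points of a segmented desk (single-precision floats, meters)
      desk_points = np.array([[1.2, 0.0, 2.0], [1.9, 0.0, 2.4],
                              [1.2, 0.75, 2.0], [1.9, 0.75, 2.4]], dtype=np.float32)
      print(cuboid_bounding_box(desk_points))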
  • (1-2-2) Scene organization. As shown in Figure 6, according to the distribution range of each object in the teaching space, a breadth-first algorithm subdivides the bounding cube of the scene model, and the octree structure is used to iteratively subdivide each object in the teaching scene, building the index structure of the teaching-space scene. Through a pooling operation, an octant at depth d is connected, after downsampling, to its child octants at depth d+1; labels are assigned to non-empty octants and the label vectors are stored. For a non-empty node with index j at depth d, the index of its first child octant at depth d+1 is k = 8*(L_d[j] - 1); based on bounding-box position relations (containment, intersection), fast intersection and collision processing between objects is realized.
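  • The Python sketch below illustrates the octree subdivision of the teaching-space bounding cube described above, assuming point samples and axis-aligned cubes; the class layout, depth limit, and stopping rule are illustrative assumptions rather than the patent's exact data structure.

      import numpy as np

      class OctreeNode:
          def __init__(self, center, half_size, depth):
              self.center, self.half_size, self.depth = center, half_size, depth
              self.children = []          # up to 8 child octants at depth d+1
              self.points = []            # depth points stored in this octant
              self.label = None           # label assigned to non-empty octants

      def build_octree(node, points, max_depth=4, min_points=8):
          """Recursively subdivide the bounding cube; non-empty octants keep a label."""
          node.points = points
          node.label = 1 if len(points) else None
          if node.depth >= max_depth or len(points) <= min_points:
              return node
          for dx in (-1, 1):
              for dy in (-1, 1):
                  for dz in (-1, 1):
                      offset = np.array([dx, dy, dz]) * node.half_size / 2
                      child = OctreeNode(node.center + offset, node.half_size / 2, node.depth + 1)
                      mask = np.all(np.abs(points - child.center) <= child.half_size, axis=1)
                      if mask.any():
                          build_octree(child, points[mask], max_depth, min_points)
                          node.children.append(child)
          return node

      pts = np.random.rand(500, 3) * 8.0   # synthetic depth points of an 8 m classroom cube
      root = build_octree(OctreeNode(np.array([4.0, 4.0, 4.0]), 4.0, depth=0), pts)
      print(len(root.children))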
  • (1-3-2) Feature plane understanding. The surface geometry of models such as walls, desks and chairs, the blackboard, and the podium is analyzed; cluster analysis (k-means with Euclidean distance, minimizing the within-cluster sum of squares) determines the distribution of feature planes of each object. The cluster center of each feature plane is extracted, a spatial-mapping method fits the feature plane, and a convex-hull algorithm extracts each plane's boundary; as the teacher's position and gaze direction change, the visible feature planes are obtained in real time, improving the understanding of the teaching space.
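  • The following Python sketch shows one way to realize the feature-plane step: cluster surface points with k-means and fit a plane to each cluster. The use of scikit-learn and the SVD-based plane fit stand in for the spatial-mapping fit, and the convex-hull boundary extraction is omitted; these choices are assumptions for illustration, not the patent's implementation.

      import numpy as np
      from sklearn.cluster import KMeans

      def extract_feature_planes(points, k=4):
          """Cluster surface points with k-means and fit a plane to each cluster.
          Returns a list of (centroid, unit_normal) pairs, one per feature plane."""
          labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(points)
          planes = []
          for c in range(k):
              cluster = points[labels == c]
              centroid = cluster.mean(axis=0)
              # The plane normal is the right singular vector of the smallest singular value.
              _, _, vt = np.linalg.svd(cluster - centroid)
              planes.append((centroid, vt[-1]))
          return planes

      # Synthetic points sampled near the floor and one wall of a classroom
      floor = np.c_[np.random.rand(200, 2) * 8, 0.01 * np.random.randn(200)]
      wall = np.c_[0.01 * np.random.randn(200), np.random.rand(200, 2) * np.array([8, 3])]
      print(extract_feature_planes(np.vstack([floor, wall]), k=2))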
  • (2-1-3) Adaptive setting. When a virtual object is placed or moved, a control function is applied in which u(k) ∈ R and y(k) ∈ R denote the input and output of the data at time k, λ is the weight factor, ρ is the step factor that limits the change of the control input applied to the virtual object, φ_c(k) ∈ R is a time-varying parameter, and y*(k+1) is the expected output; the position, pose, and scale parameters are adaptively adjusted in real time to realize the virtual-real fused display of the augmented teaching scene.
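  • For illustration, the sketch below assumes the compact-form model-free adaptive control update u(k) = u(k-1) + ρ·φ_c(k)·(y*(k+1) - y(k)) / (λ + |φ_c(k)|²), which is a standard form consistent with the symbols defined above. The exact control law in the original text appears only as an embedded image, so this formula, the parameter values, and the toy plant are assumptions.

      def mfac_step(u_prev, y, y_ref_next, phi_c, lam=1.0, rho=0.5):
          """One model-free adaptive control step (assumed CFDL-MFAC form):
          u(k) = u(k-1) + rho*phi_c*(y*(k+1) - y(k)) / (lam + |phi_c|^2).
          lam (weight factor) and rho (step factor) limit how much the control input
          applied to the virtual object may change per step."""
          return u_prev + rho * phi_c * (y_ref_next - y) / (lam + abs(phi_c) ** 2)

      # Example: drive a virtual object's x-position toward a placement target of 2.0 m
      u, y, phi_c = 0.0, 0.0, 1.0
      for _ in range(20):
          u = mfac_step(u, y, y_ref_next=2.0, phi_c=phi_c)
          y = u   # toy plant: the control input directly sets the object's position
      print(round(y, 3))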
  • the bilinear interpolation algorithm is used to calculate the illumination intensity of the adjacent sampling points, and the result is applied to the virtual object to realize the illumination effect of virtual and real fusion;
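  • A minimal Python sketch of the bilinear interpolation of illumination between sampling points is given below; the 2x2 neighborhood layout and the function signature are illustrative assumptions.

      def bilerp_intensity(i00, i10, i01, i11, fx, fy):
          """Bilinearly interpolate illumination intensity at a non-integer position.
          i00..i11 are intensities at the four surrounding sampling points;
          fx, fy in [0, 1] are the fractional offsets within the cell."""
          top = i00 * (1 - fx) + i10 * fx
          bottom = i01 * (1 - fx) + i11 * fx
          return top * (1 - fy) + bottom * fy

      # Intensity applied to a virtual object located 30% / 70% of the way between
      # four ambient-light sampling points measured in the teaching scene.
      print(bilerp_intensity(120.0, 160.0, 100.0, 140.0, fx=0.3, fy=0.7))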
  • ShadowMap technology is used to generate realistic shadow effects for virtual objects in the teaching space in real time.
  • The reflection model I = k_a·I_a + k_d·(n·l)·I_d + k_s·(r·v)^α·I_s is used, where the subscripts a, d, and s denote the ambient, diffuse, and specular components, k is the reflection coefficient or material color, I is the light color or intensity, and α describes the surface roughness (shininess) of the object.
  • The range and sharpness of the highlight region are controlled by the relationship between incident and reflected light; the added light value is computed from distance, and the indirect light of the sampling points in the scene is used to illuminate the virtual object. Because the target-image coordinates are single-precision floating-point values, a non-integer remapping to the source image (by side-length ratio) is required; bilinear interpolation computes the light-intensity values of neighboring sampling points, and the result is applied to the virtual object to enhance the lighting-fusion effect in the augmented teaching scene, making the scene more realistic and three-dimensional.
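  • A Python sketch of the reflection model above follows; the vectors are assumed to be normalized, and the scalar coefficients are illustrative values rather than measured scene parameters.

      import numpy as np

      def phong_intensity(n, l, v, I_a, I_d, I_s, k_a=0.1, k_d=0.7, k_s=0.4, alpha=32):
          """I = k_a*I_a + k_d*(n.l)*I_d + k_s*(r.v)^alpha * I_s
          n: surface normal, l: direction to light, v: direction to viewer (unit vectors)."""
          n, l, v = (np.asarray(x, dtype=float) for x in (n, l, v))
          r = 2 * np.dot(n, l) * n - l                 # reflection of l about n
          diffuse = max(np.dot(n, l), 0.0)
          specular = max(np.dot(r, v), 0.0) ** alpha
          return k_a * I_a + k_d * diffuse * I_d + k_s * specular * I_s

      # Light from above and slightly behind, viewer in front of the virtual object
      print(phong_intensity(n=[0, 0, 1], l=[0.0, 0.6, 0.8], v=[0, 0, 1],
                            I_a=0.2, I_d=1.0, I_s=1.0))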
  • (2-2-2) Shadow generation. According to the type, number, position, and other parameters of light sources in the teaching space, a depth virtual camera is added at each light-source position and its view frustum is set; the whole teaching scene is rendered from the light-source viewpoint to obtain a shadow map of the scene. From the vertex coordinates of each object's circumscribed bounding box, the scene objects whose bounding boxes fall within the virtual object's shadow-casting range are determined; the texture shadow data are traversed and copied from the depth buffer, and the associated depth-texture shadow is generated on the feature plane (as shown in Figure 7).
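  • The Python sketch below shows the core ShadowMap test: depths are rendered from the light's viewpoint, and a surface point is marked as shadowed when its depth from the light exceeds the stored depth plus a bias. The array layout and the tiny resolution are simplifying assumptions.

      import numpy as np

      def in_shadow(point_light_depth, shadow_map, u, v, bias=0.005):
          """point_light_depth: depth of the shaded point as seen from the light.
          shadow_map: 2D array of nearest depths rendered from the light's camera.
          (u, v): the point's texel coordinates in the shadow map."""
          occluder_depth = shadow_map[v, u]
          return point_light_depth > occluder_depth + bias   # something nearer blocks the light

      # Toy 4x4 shadow map: the virtual object writes depth 2.0 into the middle texels
      shadow_map = np.full((4, 4), 10.0)
      shadow_map[1:3, 1:3] = 2.0
      print(in_shadow(5.0, shadow_map, u=2, v=2))   # desk point behind the object -> True
      print(in_shadow(5.0, shadow_map, u=0, v=0))   # point outside the cast range -> False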
  • Combined with the light-source detection model L(x, w) = L_e(x, w) + L_r(x, w), where L_e(x, w) denotes the direct-illumination irradiance, L_r(x, w) the indirect-illumination irradiance, and L(x, w) the irradiance along direction w at spatial position x, the direction and intensity of the light source are obtained; the brightness of the surroundings is compared with that of the virtual object's surface, and the dynamic shadow of the virtual object is adjusted under the superposition of direct and indirect lighting.
  • (2-3-1) Depth sorting of scene objects. The Raycasting-based rendering mechanism obtains the foreground contour of each object and judges its distance and positional relationship to the camera; the depth information of the scene objects is sorted with a depth-value gradient assignment method, their values in the depth buffer are continuously calibrated, and real-time depth sorting of each object is performed.
  • Here p(x, y) denotes the position of an object in space; from the teacher's viewpoint, the contour of each object is tracked accurately in real time, and the depth values of the foreground scene objects and the virtual object are compared to determine the occlusion relationship and its extent between virtual and real objects, as shown in Figure 9.
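  • A minimal Python sketch of the final per-pixel depth comparison follows: it decides where the real foreground hides the virtual object. The per-pixel depth arrays are an assumption for illustration, and the optical-flow max-flow/min-cut contour tracking that produces the real-object depths is not reproduced here.

      import numpy as np

      def occlusion_mask(real_depth, virtual_depth):
          """True where the real scene is closer to the camera than the virtual object,
          i.e. where the virtual object's pixels must be hidden."""
          return real_depth < virtual_depth

      # Per-pixel depths (meters) from the teacher's viewpoint: a real desk at 1.5 m
      # partially in front of a virtual globe rendered at 2.0 m.
      real = np.array([[1.5, 1.5, 9.9],
                       [1.5, 9.9, 9.9]])
      virtual = np.full_like(real, 2.0)
      print(occlusion_mask(real, virtual))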
  • (3-1) A multimodal interaction algorithm is adopted so that teachers can manipulate virtual objects through multiple interaction modes; the somatosensory effect of the interaction prompts is set such that the greater the object's mass, the smaller the somatosensory offset level of the virtual hand; gaze-target and virtual-hand prompts guide teachers to combine perceived spatial cues with their cognitive structures.
  • (3-1-1) Multimodal interaction. A fusion algorithm for visual, auditory, and tactile multimodal interaction is built in the holographic imaging environment; the bounding boxes of virtual objects are obtained, and teachers can push, pull, rotate, and move virtual objects in the augmented teaching scene through gesture, gaze, and head interactions, enhancing the authenticity of interactive operations during teaching.
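  • A small Python sketch of the rule stated above (the greater the estimated mass, the smaller the somatosensory offset level of the virtual hand) is given below; the level count, the reference mass, and the linear mapping are illustrative assumptions rather than the patent's calibration.

      def offset_level(volume_m3, density_kg_m3, levels=5, max_mass_kg=50.0):
          """Map an estimated mass to a somatosensory offset level in [1, levels]:
          heavier virtual objects get a smaller level (less drift of the virtual hand)."""
          mass = volume_m3 * density_kg_m3
          ratio = min(mass / max_mass_kg, 1.0)
          return max(1, levels - int(ratio * (levels - 1)))

      print(offset_level(volume_m3=0.001, density_kg_m3=500))    # light object -> level 5
      print(offset_level(volume_m3=0.05, density_kg_m3=2500))    # heavy object -> level 1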
  • (3-1-3) Interaction guidance. A spatial cognition of the augmented teaching-scene representation is established; gaze-target and virtual-hand interaction prompts guide teachers to combine perceived spatial cues with their cognitive structures; the teaching scene is rendered to match the teachers' and students' viewpoints, improving their self-positioning and sense of agency, smoothing the natural transition from the real teaching scene to the virtual environment, forming a matching spatial situation model, and enhancing the teacher's perceptual experience.
  • (3-2-1) Synchronized positioning of virtual objects. Guided by virtual-hand and gaze-target prompts, the angle between the teacher's gaze and the surface normal of the virtual object is calculated; according to the needs of teaching activities, the teacher clicks, moves, rotates, and scales the virtual object in the augmented teaching scene, and the transformation matrix between the positions, poses, and scales before and after the movement is computed and used to locate and update the virtual object's changes on different terminals.
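  • A Python sketch of composing the transformation matrix (translation, rotation, and scale) that records a virtual object's change between two states, so the same change can be replayed on other terminals; the homogeneous-matrix convention and the rotation about the vertical axis only are simplifying assumptions.

      import numpy as np

      def trs_matrix(translation, yaw_rad, scale):
          """4x4 homogeneous matrix M = T * R_z * S describing one move/rotate/zoom step."""
          T = np.eye(4); T[:3, 3] = translation
          c, s = np.cos(yaw_rad), np.sin(yaw_rad)
          R = np.eye(4); R[:2, :2] = [[c, -s], [s, c]]
          S = np.diag([scale, scale, scale, 1.0])
          return T @ R @ S

      # Teacher moves a virtual object 0.5 m along x, turns it 30 degrees, scales it by 1.2;
      # the same matrix is sent to student terminals to update their local copies.
      M = trs_matrix([0.5, 0.0, 0.0], np.deg2rad(30), 1.2)
      local_point = np.array([1.0, 0.0, 0.0, 1.0])
      print(M @ local_point)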
  • (3-3-1) Collision detection. Colliders are built according to the shape of each object's surface mesh model; for a sampled detection pair p(A_i, B_j), X_Ai and X_Bi denote the point coordinates of the colliders, and the Euclidean distance f(p(A_i, B_j)) between the two target features in three-dimensional space is used as the criterion, with the Opcode method, to quickly detect collisions between the virtual object and other objects.
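  • For illustration, the Python sketch below applies the Euclidean-distance criterion to pairs of collider sample points; the distance threshold and the brute-force pairing (in place of the Opcode library's bounding-volume hierarchy traversal) are assumptions.

      import numpy as np

      def colliding(points_a, points_b, threshold=0.05):
          """Report a collision when any pair of sampled collider points, one from each
          object, is closer than `threshold` meters, i.e. min f(p(A_i, B_j)) < threshold."""
          diff = points_a[:, None, :] - points_b[None, :, :]
          distances = np.linalg.norm(diff, axis=-1)      # all pairwise Euclidean distances
          return bool(distances.min() < threshold)

      virtual_obj = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
      desk_edge = np.array([[0.12, 0.0, 1.0], [0.5, 0.0, 1.0]])
      print(colliding(virtual_obj, desk_edge))   # True: the closest pair is 0.02 m apart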
  • (3-3-2) In the coordinated-control motion equation for obstacle avoidance, ψ_ω is the azimuth angle from the current position to the obstacle; v, ψ, Φ, and g are the moving speed, deflection angle, apparent rotation angle, and gravitational acceleration, respectively; ρ is the distance from the camera to the obstacle; based on these quantities, the obstacle-avoidance operation is performed.

Abstract

The invention belongs to the field of teaching applications of mixed reality technology and provides a method for generating realism of virtual objects in a teaching scene, comprising: (1) teaching space perception: collecting depth data of the teaching space and perceiving changes of scene objects within the field of view in real time; (2) photorealistic virtual object generation: collecting the illumination intensity in the teaching scene to achieve a virtual-real fused lighting effect, and using ShadowMap to generate the shadow effects of virtual objects in real time; (3) dynamic interactive realistic effect generation: guiding the teacher, through gaze-target and virtual-hand interaction prompts, to complete real-time interaction with virtual objects using multimodal algorithms. The method proposes a scheme covering environment perception, object generation, and dynamic interaction, establishes a teaching-environment perception pipeline from depth data collection and spatial perception to environment understanding, and supports the generation of photorealistic virtual objects and real-time interaction with them.

Description

一种教学场景中虚拟对象的真实感生成方法 技术领域
本发明属于混合现实(Mixed Reality,MR)技术的教学应用领域,更具体地,涉及一种教学场景中虚拟对象的真实感生成方法。
背景技术
虚拟现实(Virtual Reality,VR)技术属于信息前沿技术的3大技术之一。MR作为VR技术的一大分支,以HoloLens全息眼镜为代表的MR设备不仅能够实现虚实场景的叠加,还能通过各种传感器追踪使用者的位置,在使用者、学习资源和真实环境之间构成一个交互反馈的信息回路。基于MR技术构建的增强教学场景,可打破现有课堂教学中使用大屏或电子白板存在的某些局限,有效提升学习效果;其沉浸感、交互性和智能性特征能有效激发使用者学习积极性和主观能动性,将深刻改变教学环境、教学内容、教学方法和教学模式。随着我国5G商业化的加深发展,高带宽、低时延的网络环境的进一步普及,结合MR、全息投影等技术构建的成像系统,成为下一代智能教学环境的显示形态,拥有广阔的应用前景。
但目前增强教学场景中虚拟对象的真实感生成方面还存在诸多问题:(1)虚拟对象对真实环境的理解不够,使用者通过MR设备能够感知教学环境,但虚拟对象缺乏相应的能力;(2)真实感不足,常常出现虚拟对象穿透教学空间中对象,且光影、阴影效果不真实;(3)交互体验不够逼真,交互设置、引导方式未能充分考虑使用者的感受,较难在多终端定位和映射。这些缺陷限制虚拟对象在增强教学场景中的应用。
发明内容
针对现有技术的以上缺陷或改进需求,本发明提供了一种教学场景中虚拟对象的真实感生成方法,为混合增强教学场景中虚拟对象的真实感生成提供一种新的、完整的途径和方式,满足增强教学环境强互动对虚拟对象真实感效果的需要。
本发明的目的是通过以下技术措施实现的。
一种教学场景中虚拟对象的真实感生成方法,包括以下步骤:
(1)教学空间感知:制定教学环境深度数据采集规范,从多轨迹、多角度采集教学空间的深度数据;利用语义分割算法提取、生成各对象的3D模型,构建场景的八叉树索引结构,实时感知视场范围内场景对象的变化;利用启发式算法、聚类分析方法提取对象的特征点和线,采用空间定位和即时成图技术,优化理解教学场景和模型;
(2)真实感虚拟对象生成;教师使用多种交互方式放置、移动虚拟对象,自适应显示其位置、姿态和尺寸;通过采集教学场景中光照强度,实现虚实融合的光照效果,采用ShadowMap实时生成虚拟对象的阴影效果;基于Raycasting的渲染机制,判别教学场景中各对象的位置与遮挡关系,使用蒙版平面遮挡复杂区域,简化各对象遮挡关系的判别;
(3)动态交互真实效果生成;通过视线靶点、虚拟手的交互提示设置,引导教师使用多模态算法完成与虚拟对象的实时交互;在多终端实现交互结果的同步定位、映射和动态呈现;构建不同对象的碰撞体,根据碰撞情况,执行相应操作,设计自定义Shader,优化交互渲染流程。
在上述技术方案中,步骤(1)教学空间感知具体包括以下步骤:
(1-1)教学环境深度数据采集;制定深度传感器的采集规范,包括采集路线、移动速度;根据采集规范要求,从多轨迹、多角度采集教学空间各对象的深度数据;使用右手坐标系描述深度合成图中各对象的位置和姿态;
(1-1-1)制定教学空间深度数据采集规范,针对面积、长宽比例不同的教学空间,制定主动测距深度传感器的采集路线、移动速度,从多轨迹、多角度采集教学空间各对象的深度数据;
(1-1-2)教学空间深度信息数据采集,将TOF传感器佩戴在教师头部,根据采集规范的要求,扫描教学空间,快速采集墙壁、桌椅、黑板、讲台的深度数据,使用单精度浮点数记录各深度点的坐标数值,单位为米;
(1-1-3)位置与姿态的描述,通过迭代对齐算法,精确计算多站点采集的教学空间及其对象的深度图,将它们拼接到统一坐标系中,生成深度合成图,采用右手坐标系描述教学环境中各对象的位置坐标(x,y,z)和朝向姿态(tx,ty,tz);
(1-2)教学空间感知,根据深度合成图,构建教学空间的表面网格模型,运用语义分割算法提取、生成各对象的3D模型;利用八叉树结构分割教学空间场景,构建场景的索引结构,实现对象之间快速求交、碰撞处理;跟踪教师头部运动和视线方向的变化,实时感知视场范围内场景对象的参数变化;
(1-2-1)对象模型分割,根据合成后的深度图,构建教学空间的表面网格模型,运用语义分割算法提取、生成各对象的3D模型;根据对象的长宽高、空间位置、朝向、姿态特征信息,创建长方体包围盒,利用YOLO算法快速定位其具体位置;
(1-2-2)场景组织,利用八叉树结构分割教学空间场景,构建场景的索引结构,基于各对象包围盒的坐标信息,分割和预处理教学场景中各对象;根据包围盒的位置关系,实现 各对象之间快速求交、碰撞处理;
(1-2-3)教学场景感知,结合加速度传感器、陀螺仪和深度感知摄像头,跟踪教师头部运动和视线方向的变化,实时感知新视场范围内的场景对象,确定对象的位置、姿态、尺寸,以及相对于初始状态变换矩阵的变化;
(1-3)环境理解,利用启发式算法提取教学环境中各对象的特征点,设置为空间锚点,优化理解教学空间的场景和模型;分析各对象模型的表面几何特征,采用聚类分析方法,提取其特征平面;运用空间定位和即时成图技术,实时获取教学场景中可见对象的3D表面模型;
(1-3-1)特征点理解,利用启发式算法提取教学环境中各对象的特征点,将它们设置为空间锚点,以锚点为圆心,3米之内的模型不随视场宽高比的变化而变形,通过锚点优化理解教学空间的场景和模型;
(1-3-2)特征平面理解,分析模型的表面几何特征,采用聚类分析方法,提取各对象模型的特征平面,根据教师位置和视线方向的变化,实时获取场景中的可见特征平面,增进对教学空间的理解;
(1-3-3)特征对象理解,运用空间定位和即时成图技术,定位场景空间中可见对象的坐标和姿态,根据教师位置和视线方向的变化,实时获取教学环境中可见对象的3D表面模型,剔除不可见对象,提高对教学环境理解的处理速度。
在上述技术方案中,步骤(2)真实感虚拟对象生成具体包括如下步骤:
(2-1)虚拟对象逼真显示,教师使用语音、手势交互方式在教学空间中放置、移动虚拟对象,运用感知摄像头追踪其位置、姿态和缩放比例的变化,通过求交、碰撞检测步骤,实时自适应调整位置、姿态和缩放参数,实现增强教学场景中虚拟对象的逼真显示;
(2-1-1)虚拟对象的放置,基于对真实教学环境的理解,结合教师的视线焦点与方向,使用语音、手势交互方式在教学空间中选择虚拟对象的定位点,综合考虑其所受物理规则的限制,以适当的姿态、缩放比例放置到教学空间的相应位置;
(2-1-2)虚拟对象的移动,根据教学任务的需要,教师通过语音、视线、手势方式将虚拟对象移动到教学空间的墙壁、地板、桌椅上或空中某处,通过感知摄像头追踪其在教学环境中的6DoF变化,获取新的位置、姿态和缩放参数;
(2-1-3)自适应设置,在增强教学场景中,虚拟对象遵循与真实世界相似的物理规则,放置或移动虚拟对象时,通过求交、碰撞检测步骤,实时自适应地调整位置、姿态和缩放参数,实现增强教学场景的虚实融合显示;
(2-2)真实显示效果生成,通过收集教学场景中采样点的光照强度,运用双线性内插算法计算邻近点的光照强度,并将结果作用于虚拟对象,实现虚实融合的光照效果;采用ShadowMap技术在教学空间中实时生成虚拟对象的逼真阴影效果;
(2-2-1)光影效果生成,通过在教学场景中设置采样点,收集周围环境的光照信息,运用双线性内插算法计算邻近点的光照强度,并将插值结果作用于虚拟对象,实现增强教学场景中光照融合效果,令场景更加真实,更具有立体感;
(2-2-2)阴影生成,根据教学空间中光源类型、数量、位置参数,在光源位置添加深度虚拟相机,确定包围盒落在虚拟对象阴影投射范围中的场景对象,利用ShadowMap技术,创建这些对象表面模型的深度纹理阴影;
(2-2-3)阴影动态变化,随着虚拟对象在教学空间中位置、姿态、缩放比例的变化,实时更新其在教学环境中的阴影投射区域,计算阴影坡度比例,依据深度偏移基准的设置,消除阴影锯齿效果,逼真地表现实时动态阴影效果;
(2-3)遮挡处理,判断教师与增强教学场景中各对象的位置关系,基于Raycasting的渲染机制,按照深度缓冲区的值完成对象排序;采用基于光流法的最大流/最小割跟踪方法,实时追踪各对象的轮廓,判断它们的遮挡关系;通过平移、拉伸、旋转简单平面,遮挡空间中复杂区域的3D网格,简化各对象遮挡关系的判别;
(2-3-1)场景对象的深度排序,根据教师与增强教学场景中各对象的位置关系,基于Raycasting的渲染机制,判断它们离摄像机的远近、位置关系,不断校准它们在深度缓冲区的值,执行各对象的实时深度排序;
(2-3-2)虚实遮挡关系的判断,利用八叉树结构判断增强教学场景中各对象的空间位置关系,采用基于光流法的最大流/最小割跟踪方法,从教师视角实时、精确地追踪各对象的轮廓,确定虚实对象之间的遮挡关系和范围;
(2-3-3)遮挡平面添加,针对教学空间中难以识别的白色墙壁区域、光照复杂或不可穿越的区域情形,创建一些隐藏显示的简单平面,通过平移、旋转、拉伸操作,遮挡教学空间中这些复杂区域的3D网格结构,简化真实空间各对象遮挡关系的判别。
在上述技术方案中,步骤(3)动态交互真实效果生成具体包括如下步骤:
(3-1)虚拟对象的交互,采用多模态交互算法,支持教师多种交互方式操纵虚拟对象;设置交互提示的体感效果,质量越大,虚拟手的体感偏移等级越小;通过视线靶点、虚拟手的交互提示,引导教师将感知到的空间线索与认知结构相结合;
(3-1-1)多模态交互方式,构建全息成像环境中视觉、听觉、触觉多模态交互融合算法, 支持教师通过手势、视线、头部交互操作在增强教学场景中推、拉、摇、移虚拟对象,增强教学过程中交互操作的真实性;
(3-1-2)交互提示体感效果设置,根据虚拟对象的性质,估算其体积、密度和质量,基于物理重力规则,设置交互提示的体感效果:质量越大,虚拟手的体感偏移等级越小,越不产生偏移感的错觉,增强教师的真实感体验;
(3-1-3)交互引导,在增强教学场景中通过视线靶点、虚拟手的交互提示,引导教师将感知到的空间线索与认知结构相结合,增强其从真实教学场景向虚拟环境转换的自然过渡,形成相匹配的空间情境模型,增进教师的知觉体验;
(3-2)实时交互,获取教师移动虚拟对象的变化矩阵,在不同终端定位、更新其变化;利用SLAM技术,将变换后的虚拟对象映射到不同终端的本地化教学环境,同步映射交互结果;更新光影、阴影效果,实现教学环境中虚拟对象的真实感体验;
(3-2-1)虚拟对象的同步定位,在虚拟手、视线靶点提示引导下,根据教学活动的需要,教师会在增强教学场景中移动虚拟对象,计算移动前后的位置、姿态和比例尺寸的变换矩阵,用于在不同终端定位、更新虚拟对象的新变化;
(3-2-2)交互结果的同步映射,针对师生用户共享增强教学场景的需求,利用SLAM技术,将虚拟对象的变换参数映射到学生终端的本地化教学环境,实现在不同终端的一致映射以及与其它场景对象的相对位置映射;
(3-2-3)交互结果动态呈现,教师运用多模态交互方式操纵虚拟对象,在增强教学环境使用全息成像系统呈现其新位置、姿态和缩放比例,根据与光源的相对关系,更新光影、阴影效果,令教学环境中虚拟对象产生真实感体验效果;
(3-3)交互优化,依据各对象表面网格模型的形状,构建不同的碰撞体;采用扫描线算法计算虚拟对象的下一位置,判断会否与其它对象发生碰撞,执行相应操作;设计自定义Shader,采用片段着色器渲染纹理像素,重构顶点渲染流程;
(3-3-1)碰撞检测,依据增强教学场景中各对象表面网格模型的形状,构建不同的碰撞体,采用Opcode方法快速检测虚拟对象与其它对象的碰撞;
(3-3-2)避障处理,教师使用手势、视线交互方式移动、旋转和缩放增强教学场景中的虚拟对象,采用扫描线算法计算其下一位置、姿态和比例,判断与其它对象会否发生碰撞,如发生,则停止移动或执行规避障碍操作;
(3-3-3)交互渲染优化,增强教学场景的交互过程中,综合考虑渲染管道中带宽、缓存行为和滤波指标,设计自定义Shader,采用片段着色器渲染纹理像素,重构顶点渲染流程, 满足光影、阴影、动画实时动态更新要求。
本发明的有益效果在于:
制定一个针对教学空间的深度数据采集规范,利用语义分割算法提取、生成各对象的3D模型,实时感知视场范围内场景对象的变化,优化理解教学场景和模型;在真实教学场景中放置、移动场景时,可实现虚实融合的光照和阴影效果,采用蒙版平面遮挡复杂区域,简化各对象遮挡关系的判别;通过视线靶点、虚拟手的提示设置,引导教师完成与虚拟对象的实时交互,将交互结果同步定位、映射和动态呈现在多用户终端,采用自定义Shader,优化交互渲染流程。随着5G、MR、全息成像等技术的日臻成熟,对虚拟对象真实感生成和显示的要求越来越高,本发明有助于满足增强教学环境强互动对虚拟对象真实感效果的需要。
附图说明
图1是本发明实施例中教学场景中虚拟对象的真实感生成方法流程图。
图2是教学空间中深度数据采集路线和点位的示意图。
图3是多站点教学空间深度数据合成图。
图4是教学空间的3D模型分割示意图。
图5是4层卷积神经网络结构示意图。
图6是本发明实施例中八叉树结构细分场景示意图。
图7是深度纹理阴影生成效果图,其中1为虚拟对象,2为虚拟对象阴影,3为入射光,4为反射光。
图8是阴影偏移动态处理示意图,其中1为阴影失真平面,2为像素点,3为中心点,L为光源到中心点距离。
图9是虚实对象间遮挡关系效果图。
图10是复杂教学空间区域中遮挡负责3D网格示意图。
图11是不规则碰撞体的创建示意图。
具体实施方式
为了使本发明的目的、技术方案及优点更加清楚明白,以下结合附图及实施案例,对本发明进行进一步详细说明。应当理解,此处所描述的具体实施案例仅仅用以解释本发明,并不用于限定本发明。此外,下面所描述的本发明各个实施方式中所涉及到的技术特征只 要彼此之间未构成冲突就可以相互组合。
如图1所示,本发明实施例提供一种教学场景中虚拟对象真实感的生成方法,包括如下步骤:
(1)教学空间的感知。制定深度传感器的采集规范,从多角度采集教学空间的深度信息,利用语义分割算法生成各个对象的3D模型,构建场景的八叉树索引结构,感知教学空间各对象的变化;利用启发式算法提取对象的特征,通过空间锚点设置,优化理解教学场景和模型。
所述教学空间的感知具体包括如下步骤:
(1-1)教学环境深度数据采集。制定深度传感器的采集规范,涉及采集路线、移动速度等;根据采集规范要求,从多轨迹、多角度采集教学空间各对象的深度数据;使用右手坐标系描述深度合成图中各对象的位置和姿态。
(1-1-1)制定教学空间深度数据采集规范。针对面积、长宽比例不同的教学空间,制定主动测距深度传感器(Time of Flight,TOF)的采集路线、移动速度等采集规范,从多轨迹、多角度采集教学空间各对象的深度数据。
(1-1-2)教学空间深度信息数据采集。将TOF传感器佩戴在教师头部,依据采集规范的要求,根据教学空间中墙壁、桌椅、黑板、讲台等布局情况,设定教学空间深度数据的采集路线、采集位置与移动速度,获取教学空间的深度数据,使用单精度浮点数记录各深度点的坐标数值,单位为米。
(1-1-3)位置与姿态的描述。通过迭代对齐算法,精确计算、合成多站点采集的教学空间及其对象的深度图(如图3所示),将它们拼接到统一坐标系中,生成深度合成图,采用右手坐标系描述教学环境中各对象(墙壁,桌椅、黑板、讲台等)的位置坐标(x,y,z)和朝向姿态(tx,ty,tz),图3右下角表示3个轴的正方向。
(1-2)教学空间感知。根据深度合成图,构建教学空间的表面网格模型,运用语义分割算法提取、生成各对象的3D模型(如图4所示);利用八叉树结构分割教学空间场景,构建场景的索引结构,实现对象之间快速求交、碰撞等处理;跟踪教师头部运动和视线方向的变化,实时感知视场范围内场景对象的参数变化。
(1-2-1)对象模型分割。根据深度合成图,构建教学空间的表面网格模型,运用语义分割算法提取、生成各个对象的3D模型:
设计一个4层级神经卷积网络,提取深度信息,运用定向深度直方图算子分割深度数据中各个对象数据:
如图5所示,前3层级s={0,1,2}分别对应输入、隐藏卷积和输出深度图(即I s,C s,O s)。层级之间的汇聚层P S,P′ S,可以减少深度图的分辨率层次。输出层O′ 2是第4层级,采用基于层级的智能监督训练方法,前一层级的训练成果用于后一层级的内容提取。低分辨率的层级为高分辨率层级提供先验知识,即时采样可将较大感受域的信息用于最终决策。
将像素p的梯度方向和大小定义为α p和n p,根据8邻域的值量化|α p|和n p,生成各个深度点的方向强度直方图,再应用高斯模糊滤镜量化整个深度图;最后在用L 2-hys范式规范化直方图。所有深度图被规范成零均值和单位变量,将该过程重复应用到每个层级,样本地图和对象分割效果如图4所示。
根据各对象的长宽高、空间位置、朝向、姿态等特征信息,获取外接最小长方体包围盒,为各对象创建长方体包围盒,利用YOLO算法快速定位其具体位置。
(1-2-2)场景组织。如图6所示,根据教学空间中各对象的分布范围,采用广度优先算法,细分场景模型的边界立方体,利用八叉树结构细分、迭代教学场景中各对象,构建教学空间场景的索引结构,通过池化操作将深度为d的八分体经过下采样计算后连接到深度为d+1的子八分体,为非空八分体指定标签并存储标签矢量,在第d深度的一个非空节点处定义索引j,计算第d+1深度处其第一个子八分体的索引k=8*(L d[j]-1),对各对象姿态信息构建场景的索引结构,分割和预处理增强教学场景中各对象,根据包围盒的位置关系(包含、相交等),实现各对象之间快速求交、碰撞等处理。
(1-2-3)教学场景感知。结合加速度传感器、陀螺仪和深度感知摄像头,采用椭圆拟合做瞳孔定位,跟踪教师头部运动和视线方向的变化,实现眼动数据与参考平面上凝视点之间的鲁棒标定,建立头部运动情况下的空间映射模型,实时感知新视场范围内的场景对象,确定对象的位置、姿态、尺寸,以及相对于初始状态变换矩阵的变化。
(1-3)环境理解。利用启发式算法提取教学环境中各对象的特征点,设置为空间锚点,优化理解教学空间的场景和模型;分析各对象模型的边表面几何特征,采用聚类分析方法,提取其特征平面;运用空间定位和即时成图技术,实时获取教学场景中可见对象的3D表面模型。
(1-3-1)特征点的理解。利用启发式算法提取教学环境中墙壁角点,桌椅、讲台、黑板等对象外接包围盒的顶点,创建空间锚点;以锚点为圆心,计算与其他对象锚点的空间距离,如距离小于3米,对象的模型显示效果保持不变,通过锚点优化理解教学空间的场景和模型。
(1-3-2)特征平面的理解。分析墙壁、桌椅、黑板、讲台等模型的表面几何特征,采用聚类分析方法确定各对象中特征平面的分布情况,选取欧式距离作为相似度指标,使用k个点的聚类平方和最小化指标:
Figure PCTCN2021131211-appb-000001
Figure PCTCN2021131211-appb-000002
为新中心点即质心,x i为各对象初始聚类中心,
Figure PCTCN2021131211-appb-000003
提取特征平面的聚类中心,使用空间映射方法拟合特征平面并采用凸壳算法提取每个平面的边界,根据教师位置和视线方向的变化,更新可见视场的范围,实时获取该范围中的特征平面,增进对教学空间的理解。
(1-3-3)特征对象的理解。运用空间定位和即时成图技术,定位场景空间中可见对象的坐标和姿态,根据教师位置和视线方向的变化,实时获取教学环境中可见对象的3D表面模型,剔除不可见对象,提高对教学环境理解的处理速度。
(2)真实感虚拟对象生成。教师使用多种交互方式放置、移动虚拟对象,自适应显示其位置、姿态和尺寸;通过采集教学场景中光照强度,实现虚实融合的光照效果,采用ShadowMap实时生成虚拟对象的阴影效果;基于Raycasting的渲染机制,判别教学场景中各对象的位置与遮挡关系,使用蒙版平面遮挡复杂区域,简化各对象遮挡关系的判别。
(2-1)虚拟对象逼真显示。教师使用语音、手势等交互方式在教学空间中放置、移动虚拟对象,运用感知摄像头追踪其位置、姿态和缩放比例的变化,通过求交、碰撞检测等步骤,实时自适应调整位置、姿态和缩放等参数,实现增强教学场景中虚拟对象的逼真显示。
(2-1-1)虚拟对象的放置。基于对真实教学环境的理解,结合教师的视线焦点与方向,使用语音、手势等交互方式在教学空间中选择虚拟对象的定位点;考虑物理规则为各虚拟对象添加刚体属性,设置在其他对象表面的摩擦力、弹性、空气阻力、重力等约束条件,以适当的姿态、缩放比例将虚拟对象放置到教学空间的相应位置。
(2-1-2)虚拟对象的移动。根据教学任务的需要,教师可通过语音、视线、手势等方式将虚拟对象移动到教学空间的墙壁、地板、桌椅上或空中某处,追踪其在教学场景的6方向自由度(6DoF)变化,获取虚拟对象新的位置、姿态和缩放参数。
(2-1-3)自适应设置。在增强教学场景中,虚拟对象遵循与真实世界相似的物理规则, 如近大远小的透视效果,放置或移动虚拟对象时,通过求交、碰撞检测等步骤,使用控制函数:
Figure PCTCN2021131211-appb-000004
u(k)∈R,y(k)∈R分别表示k时刻数据的输入和输出,λ是权重因子,ρ是步长因子,用来限制控制虚拟对象输入的变化量,时变参数φ c(k)∈R,y *(k+1)为期望输出结果,实时自适应地调整位置、姿态和缩放参数,实现增强教学场景的虚实融合显示。
(2-2)真实显示效果生成。通过收集教学场景中采样点的光照强度,运用双线性内插算法计算邻近采样点的光照强度,并将结果作用于虚拟对象,实现虚实融合的光照效果;采用ShadowMap技术在教学空间中实时生成虚拟对象的逼真阴影效果。
(2-2-1)光影效果生成。通过在教学场景中设置,收集周围环境的光照信息,对空间位置x与入射光线w i的反射情况,使用双向反射分布函数
Figure PCTCN2021131211-appb-000005
计算不同方向入射光和反射光w的关系,利用反射模型:
I = k_a·I_a + k_d·(n·l)·I_d + k_s·(r·v)^α·I_s
其中a是环境光,d是反射光,s是高光,k是反射系数或材质颜色,I是光的颜色或亮度,α是对象表面粗糙程度。
根据入射光和反射光关系变化控制亮光区域范围和锐利程度,按距离大小计算增加的光照值,利用场景中采样点的间接光效果照亮虚拟对象,由于目标图像的坐标设置为单精度浮点数存在非整数的重映射,通过边长比对应源图像,运用双线性内插算法计算邻近采样点光照强度值,将其结果作用于虚拟对象,实现增强虚拟教学场景中光照融合效果,令场景更加真实,更具有立体感。
(2-2-2)阴影生成。根据教学空间中光源类型、数量、位置等参数,在光源位置添加深度虚拟相机并设定其视锥范围,从光源视角渲染整个教学场景,获得场景的阴影效果图,根据各对象外接包围盒顶点坐标,确定包围盒落在虚拟对象阴影投射范围中的场景对象,从深度缓冲区遍历并拷贝纹理阴影数据,在特征平面上生成关联的深度纹理阴影(如图7)。
(2-2-3)阴影动态变化。随着虚拟对象在教学空间中位置、姿态、缩放比例的变化,利用光照模型原理、Shadow Map算法实时更新其在教学环境中的阴影投射区域,使用:
反三角函数公式:tanα(坡度)=高程差/水平距离
计算阴影坡度尺度,依据深度偏移基准的设置,消除阴影锯齿效果,结合光源检测算法:
L(x,w) = L_e(x,w) + L_r(x,w)
Figure PCTCN2021131211-appb-000006
获取光源的方向和强度,对比周围环境与虚拟对象表面亮度,调整虚拟对象在直接光照与间接光照叠加效果下动态阴影变化,L e(x,w)表示直接光照辐照率,L r(x,w)表示间接光照辐照度,L(x,w)是空间位置x上沿着方向w的辐照率。
图8中,截取一块产生阴影偏移的平面,a、b、c、d四个像素点到光源距离与中心点到光源的距离比较,确定各部分像素点的明暗。
(2-3)遮挡处理。判断教师与增强教学场景中各对象的位置关系,基于Raycasting的渲染机制,按照深度缓冲区的值完成对象排序;采用基于光流法的最大流/最小割的轮廓跟踪方法,实时追踪各对象的轮廓,判断它们的遮挡关系;通过平移、拉伸、旋转简单平面,遮挡空间中复杂区域的3D网格,简化各对象遮挡关系的判别。
(2-3-1)场景对象的深度排序。根据教师与增强教学场景中各对象的位置关系,基于Raycasting的渲染机制获取个对象的前景实物轮廓,判断其离摄像机的远近、位置关系,通过深度值梯度分配方法,完成场景各对象的深度信息排序,不断校准它们在深度缓冲区的值,执行各对象的实时深度排序。
(2-3-2)虚实遮挡关系的判断。利用八叉树结构判断增强教学场景中各对象的空间位置关系,比较场景对象中前景物体和虚拟物体的深度排序值确定遮挡关系,首先利用光流法跟踪上一帧图像中目标区域内的特征点,包围盒的边e∈E连接两个相邻的特征点,为每条边设置一个非负的权值w e,运用表达式:
Figure PCTCN2021131211-appb-000007
做最大流/最小割的轮廓跟踪,然后根据特征点的位移,将上一帧的轮廓平移,获得当前目标物体的近似轮廓,再以近似轮廓为中心的带状区域求得目标的精确轮廓,快速收敛到目标边界,采用基于光流法的最大流/最小割的轮廓跟踪方法,利用深度图像梯度计算公式:
Figure PCTCN2021131211-appb-000008
p(x,y)表示空间某一对象位置,从教师视角实时、精确地追踪各对象的轮廓,比较场景前景对象与虚拟对象的深度值,确定如图9所示虚实对象之间的遮挡关系和范围。
(2-3-3)遮挡平面添加。针对教学空间中难以识别的白色墙壁区域、光照复杂或不可穿越的区域等情形(如墙壁、转角、窗台等),创建一些隐藏显示的简易平面(如四边形),通过平移、旋转、拉伸操作,遮挡教学空间中复杂区域的3D网格(如图10),简化真实空间各对象遮挡关系的判别。
(3)动态交互真实效果生成。通过视线靶点、虚拟手的交互提示设置,引导教师使用多模态算法完成与虚拟对象的实时交互;在多终端实现交互结果的同步定位、映射和动态呈现;构建不同对象的碰撞体,根据碰撞情况,执行相应操作,设计自定义Shader,优化交互渲染流程。
(3-1)虚拟对象的交互。采用多模态交互算法,支持教师多种交互方式操纵虚拟对象;设置交互提示的体感效果,质量越大,虚拟手的体感偏移等级越小;通过视线靶点、虚拟手的交互提示,引导教师将感知到的空间线索与认知结构相结合。
(3-1-1)多模态交互方式。构建全息成像环境中视觉、听觉、触觉等多模态交互融合算法,获取虚拟对象的包围盒,支持教师通过手势、视线、头部等交互操作在增强教学场景中推、拉、摇、移虚拟对象,增强教学过程中交互操作的真实性。
(3-1-2)交互提示体感效果设置。根据虚拟对象的性质,估算其体积、密度和质量,基于物理重力规则对不同的物理负荷水平,划分虚拟交互提示的体感偏移等级,虚拟对象的质量越大,虚拟手的体感偏移等级越小,越不产生偏移感的错觉,增强教师的真实感体验。
(3-1-3)交互引导。建立增强教学场景表征的空间认知,通过视线靶点、虚拟手的交互提示,引导教师将感知到的空间线索与认知结构相结合,根据虚拟对象的移动状态、当前位置坐标、场景呈现角度,匹配师生视角渲染教学场景,提升师生自我定位、主观行为感觉,增强其从真实教学场景向虚拟环境转换的自然过渡,形成相匹配的空间情境模型,增进教师的知觉体验。
(3-2)实时交互。获取教师移动虚拟对象的变化矩阵,在不同终端定位、更新其变化;利用SLAM技术,将变换后的虚拟对象映射到不同终端的本地化教学环境,同步映射交互结果;更新光影、阴影等效果,实现教学环境中虚拟对象的真实感体验。
(3-2-1)虚拟对象的同步定位。在虚拟手、视线靶点等提示引导下,计算教师视线与虚 拟对象表面法向量的夹角,根据教学活动的需要,教师会在增强教学场景中点击、移动、旋转、缩放虚拟对象,计算移动前后的位置、姿态和比例尺寸的变换矩阵,用于在不同终端定位、更新虚拟对象的信息变化。
(3-2-2)交互结果的同步映射。针对师生用户共享增强教学场景的需求,利用视觉SLAM技术提取与匹配各对象特征点,反演位置与姿态变化,获取精准的位姿估计,同步多用户不同设备的教学场景数据,获取环境、设备的一致映射以及各组关键帧的相对位置映射,将虚拟对象的变换参数映射到学生终端的本地化教学环境,实现不同终端的一致映射以及与其它场景对象的相对位置映射。
(3-2-3)交互结果动态呈现。教师运用多模态交互方式操纵虚拟对象,在增强教学环境使用全息成像系统呈现其新位置、姿态和缩放比例,根据与光源的相对关系,更新光影、阴影效果,令教学环境中虚拟对象产生真实感体验效果。
(3-3)交互优化。依据各对象表面网格模型的形状,构建不同的碰撞体;采用扫描线算法计算虚拟对象的下一位置,判断会否与其它对象发生碰撞,执行相应操作;设计自定义Shader,采用片段着色器渲染纹理像素,重构顶点渲染流程。
(3-3-1)碰撞检测。依据增强教学场景中各对象表面网格模型的形状,构建不同的碰撞体(如规则的长方体形状,则直接使用外接长方盒;如图11中的不规则表面,则采用分段表示的长方体盒以贴近对象表面),采用Opcode方法,通过检测目标的采样检测对p(A i,B j),其初始位置X的标记矩阵:
Figure PCTCN2021131211-appb-000009
X Ai与X Bi表示碰撞体点位坐标,利用三维空间中两个目标特征之间的欧式距离f(p(A i,B j))当做判断依据,快速检测虚拟对象与其它对象的碰撞。
(3-3-2)避障处理。教师使用手势、视线等交互方式移动、旋转和缩放增强教学场景中的虚拟对象,采用扫描线算法求出各边方程:
Figure PCTCN2021131211-appb-000010
设常量Δx表步长关系,计算扫描线与虚拟对象交点,对其交点按照距离远近从小到大排序,判断与其它对象会否发生碰撞,规划出不与其他对象碰撞且满足虚拟对象运动学约束的运动路径,如发生碰撞,则停止移动或利用协调控制运动方程:
Figure PCTCN2021131211-appb-000011
ψ ω当前位置到障碍物的方位角,v,ψ,Φ,g分别为移动速度、偏转角、视转角和重力加速度,ρ是相机到障碍物距离,执行避障操作。
(3-3-3)交互渲染优化。增强教学场景的交互过程中,综合考虑渲染管道中带宽、缓存行为和滤波指标,设计自定义Shader,优先使用满足视觉效果要求的片段着色器,移除材质中Secondary Maps等可以省略的属性,并降低顶点着色器代码复杂度;使用纹理压缩的方式减少教室空间各对象的纹理大小,从而优化现存带宽;重构顶点渲染流程,满足光影、阴影、动画等实时动态更新要求。
本说明书中未作详细描述的内容,属于本专业技术人员公知的现有技术。
以上所述仅为本发明的较佳实施例,并不用以限制本发明,凡在本发明的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本发明的保护范围之内。

Claims (3)

  1. 一种教学场景中虚拟对象的真实感生成方法,其特征在于该方法包括以下步骤:
    (1)教学空间感知:制定教学环境深度数据采集规范,从多轨迹、多角度采集教学空间的深度数据;利用语义分割算法提取、生成各对象的3D模型,构建场景的八叉树索引结构,实时感知视场范围内场景对象的变化;利用启发式算法、聚类分析方法提取对象的特征点和线,采用空间定位和即时成图技术,优化理解教学场景和模型;
    (2)真实感虚拟对象生成;教师使用多种交互方式放置、移动虚拟对象,自适应显示其位置、姿态和尺寸;通过采集教学场景中光照强度,实现虚实融合的光照效果,采用ShadowMap实时生成虚拟对象的阴影效果;基于Raycasting的渲染机制,判别教学场景中各对象的位置与遮挡关系,使用蒙版平面遮挡复杂区域,简化各对象遮挡关系的判别;具体包括如下步骤:
    (2-1)虚拟对象逼真显示,教师使用语音、手势交互方式在教学空间中放置、移动虚拟对象,运用感知摄像头追踪其位置、姿态和缩放比例的变化,通过求交、碰撞检测步骤,实时自适应调整位置、姿态和缩放参数,实现增强教学场景中虚拟对象的逼真显示;
    (2-1-1)虚拟对象的放置,基于对真实教学环境的理解,结合教师的视线焦点与方向,使用语音、手势交互方式在教学空间中选择虚拟对象的定位点,综合考虑其所受物理规则的限制,以适当的姿态、缩放比例放置到教学空间的相应位置;
    (2-1-2)虚拟对象的移动,根据教学任务的需要,教师通过语音、视线、手势方式将虚拟对象移动到教学空间的墙壁、地板、桌椅上或空中某处,通过感知摄像头追踪其在教学环境中的6DoF变化,获取新的位置、姿态和缩放参数;
    (2-1-3)自适应设置,在增强教学场景中,虚拟对象遵循与真实世界相似的物理规则,放置或移动虚拟对象时,通过求交、碰撞检测步骤,实时自适应地调整位置、姿态和缩放参数,实现增强教学场景的虚实融合显示;
    (2-2)真实显示效果生成,通过收集教学场景中采样点的光照强度,运用双线性内插算法计算邻近点的光照强度,并将结果作用于虚拟对象,实现虚实融合的光照效果;采用ShadowMap技术在教学空间中实时生成虚拟对象的逼真阴影效果;
    (2-2-1)光影效果生成,通过在教学场景中设置采样点,收集周围环境的光照信息,运用双线性内插算法计算邻近点的光照强度,并将插值结果作用于虚拟对象,实现增强教学场景中光照融合效果,令场景更加真实,更具有立体感;
    (2-2-2)阴影生成,根据教学空间中光源类型、数量、位置参数,在光源位置添加深度虚拟相机,确定包围盒落在虚拟对象阴影投射范围中的场景对象,利用ShadowMap技术, 创建这些对象表面模型的深度纹理阴影;
    (2-2-3)阴影动态变化,随着虚拟对象在教学空间中位置、姿态、缩放比例的变化,实时更新其在教学环境中的阴影投射区域,计算阴影坡度比例,依据深度偏移基准的设置,消除阴影锯齿效果,逼真地表现实时动态阴影效果;
    (2-3)遮挡处理,判断教师与增强教学场景中各对象的位置关系,基于Raycasting的渲染机制,按照深度缓冲区的值完成对象排序;采用基于光流法的最大流或最小割跟踪方法,实时追踪各对象的轮廓,判断它们的遮挡关系;通过平移、拉伸、旋转简单平面,遮挡空间中复杂区域的3D网格,简化各对象遮挡关系的判别;
    (2-3-1)场景对象的深度排序,根据教师与增强教学场景中各对象的位置关系,基于Raycasting的渲染机制,判断它们离摄像机的远近、位置关系,不断校准它们在深度缓冲区的值,执行各对象的实时深度排序;
    (2-3-2)虚实遮挡关系的判断,利用八叉树结构判断增强教学场景中各对象的空间位置关系,采用基于光流法的最大流或最小割跟踪方法,从教师视角实时、精确地追踪各对象的轮廓,确定虚实对象之间的遮挡关系和范围;
    (2-3-3)遮挡平面添加,针对教学空间中难以识别的白色墙壁区域、光照复杂或不可穿越的区域情形,创建一些隐藏显示的简单平面,通过平移、旋转、拉伸操作,遮挡教学空间中这些复杂区域的3D网格结构,简化真实空间各对象遮挡关系的判别;
    (3)动态交互真实效果生成;通过视线靶点、虚拟手的交互提示设置,引导教师使用多模态算法完成与虚拟对象的实时交互;在多终端实现交互结果的同步定位、映射和动态呈现;构建不同对象的碰撞体,根据碰撞情况,执行相应操作,设计自定义Shader,优化交互渲染流程。
  2. 根据权利要求1所述的教学场景中虚拟对象的真实感生成方法,其特征在于步骤(1)教学空间感知具体包括以下步骤:
    (1-1)教学环境深度数据采集;制定深度传感器的采集规范,包括采集路线、移动速度;根据采集规范要求,从多轨迹、多角度采集教学空间各对象的深度数据;使用右手坐标系描述深度合成图中各对象的位置和姿态;
    (1-1-1)制定教学空间深度数据采集规范,针对面积、长宽比例不同的教学空间,制定主动测距深度传感器的采集路线、移动速度,从多轨迹、多角度采集教学空间各对象的深度数据;
    (1-1-2)教学空间深度信息数据采集,将TOF传感器佩戴在教师头部,根据采集规范的 要求,扫描教学空间,快速采集墙壁、桌椅、黑板、讲台的深度数据,使用单精度浮点数记录各深度点的坐标数值,单位为米;
    (1-1-3)位置与姿态的描述,通过迭代对齐算法,精确计算多站点采集的教学空间及其对象的深度图,将它们拼接到统一坐标系中,生成深度合成图,采用右手坐标系描述教学环境中各对象的位置坐标(x,y,z)和朝向姿态(tx,ty,tz);
    (1-2)教学空间感知,根据深度合成图,构建教学空间的表面网格模型,运用语义分割算法提取、生成各对象的3D模型;利用八叉树结构分割教学空间场景,构建场景的索引结构,实现对象之间快速求交、碰撞处理;跟踪教师头部运动和视线方向的变化,实时感知视场范围内场景对象的参数变化;
    (1-2-1)对象模型分割,根据合成后的深度图,构建教学空间的表面网格模型,运用语义分割算法提取、生成各对象的3D模型;根据对象的长宽高、空间位置、朝向、姿态特征信息,创建长方体包围盒,利用YOLO算法快速定位其具体位置;
    (1-2-2)场景组织,利用八叉树结构分割教学空间场景,构建场景的索引结构,基于各对象包围盒的坐标信息,分割和预处理教学场景中各对象;根据包围盒的位置关系,实现各对象之间快速求交、碰撞处理;
    (1-2-3)教学场景感知,结合加速度传感器、陀螺仪和深度感知摄像头,跟踪教师头部运动和视线方向的变化,实时感知新视场范围内的场景对象,确定对象的位置、姿态、尺寸,以及相对于初始状态变换矩阵的变化;
    (1-3)环境理解,利用启发式算法提取教学环境中各对象的特征点,设置为空间锚点,优化理解教学空间的场景和模型;分析各对象模型的表面几何特征,采用聚类分析方法,提取其特征平面;运用空间定位和即时成图技术,实时获取教学场景中可见对象的3D表面模型;
    (1-3-1)特征点理解,利用启发式算法提取教学环境中各对象的特征点,将它们设置为空间锚点,以锚点为圆心,3米之内的模型不随视场宽高比的变化而变形,通过锚点优化理解教学空间的场景和模型;
    (1-3-2)特征平面理解,分析模型的表面几何特征,采用聚类分析方法,提取各对象模型的特征平面,根据教师位置和视线方向的变化,实时获取场景中的可见特征平面,增进对教学空间的理解;
    (1-3-3)特征对象理解,运用空间定位和即时成图技术,定位场景空间中可见对象的坐标和姿态,根据教师位置和视线方向的变化,实时获取教学环境中可见对象的3D表面模 型,剔除不可见对象,提高对教学环境理解的处理速度。
  3. 根据权利要求1所述的教学场景中虚拟对象的真实感生成方法,其特征在于步骤(3)动态交互真实效果生成具体包括如下步骤:
    (3-1)虚拟对象的交互,采用多模态交互算法,支持教师多种交互方式操纵虚拟对象;设置交互提示的体感效果,质量越大,虚拟手的体感偏移等级越小;通过视线靶点、虚拟手的交互提示,引导教师将感知到的空间线索与认知结构相结合;
    (3-1-1)多模态交互方式,构建全息成像环境中视觉、听觉、触觉多模态交互融合算法,支持教师通过手势、视线、头部交互操作在增强教学场景中推、拉、摇、移虚拟对象,增强教学过程中交互操作的真实性;
    (3-1-2)交互提示体感效果设置,根据虚拟对象的性质,估算其体积、密度和质量,基于物理重力规则,设置交互提示的体感效果:质量越大,虚拟手的体感偏移等级越小,越不产生偏移感的错觉,增强教师的真实感体验;
    (3-1-3)交互引导,在增强教学场景中通过视线靶点、虚拟手的交互提示,引导教师将感知到的空间线索与认知结构相结合,增强其从真实教学场景向虚拟环境转换的自然过渡,形成相匹配的空间情境模型,增进教师的知觉体验;
    (3-2)实时交互,获取教师移动虚拟对象的变化矩阵,在不同终端定位、更新其变化;利用SLAM技术,将变换后的虚拟对象映射到不同终端的本地化教学环境,同步映射交互结果;更新光影、阴影效果,实现教学环境中虚拟对象的真实感体验;
    (3-2-1)虚拟对象的同步定位,在虚拟手、视线靶点提示引导下,根据教学活动的需要,教师会在增强教学场景中移动虚拟对象,计算移动前后的位置、姿态和比例尺寸的变换矩阵,用于在不同终端定位、更新虚拟对象的新变化;
    (3-2-2)交互结果的同步映射,针对师生用户共享增强教学场景的需求,利用SLAM技术,将虚拟对象的变换参数映射到学生终端的本地化教学环境,实现在不同终端的一致映射以及与其它场景对象的相对位置映射;
    (3-2-3)交互结果动态呈现,教师运用多模态交互方式操纵虚拟对象,在增强教学环境使用全息成像系统呈现其新位置、姿态和缩放比例,根据与光源的相对关系,更新光影、阴影效果,令教学环境中虚拟对象产生真实感体验效果;
    (3-3)交互优化,依据各对象表面网格模型的形状,构建不同的碰撞体;采用扫描线算法计算虚拟对象的下一位置,判断会否与其它对象发生碰撞,执行相应操作;设计自定义Shader,采用片段着色器渲染纹理像素,重构顶点渲染流程;
    (3-3-1)碰撞检测,依据增强教学场景中各对象表面网格模型的形状,构建不同的碰撞体,采用Opcode方法快速检测虚拟对象与其它对象的碰撞;
    (3-3-2)避障处理,教师使用手势、视线交互方式移动、旋转和缩放增强教学场景中的虚拟对象,采用扫描线算法计算其下一位置、姿态和比例,判断与其它对象会否发生碰撞,如发生,则停止移动或执行规避障碍操作;
    (3-3-3)交互渲染优化,增强教学场景的交互过程中,综合考虑渲染管道中带宽、缓存行为和滤波指标,设计自定义Shader,采用片段着色器渲染纹理像素,重构顶点渲染流程,满足光影、阴影、动画实时动态更新要求。
PCT/CN2021/131211 2020-12-11 2021-11-17 一种教学场景中虚拟对象的真实感生成方法 WO2022121645A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011458753.9A CN112509151B (zh) 2020-12-11 2020-12-11 一种教学场景中虚拟对象的真实感生成方法
CN202011458753.9 2020-12-11

Publications (1)

Publication Number Publication Date
WO2022121645A1 true WO2022121645A1 (zh) 2022-06-16

Family

ID=74973693

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/131211 WO2022121645A1 (zh) 2020-12-11 2021-11-17 一种教学场景中虚拟对象的真实感生成方法

Country Status (3)

Country Link
US (1) US11282404B1 (zh)
CN (1) CN112509151B (zh)
WO (1) WO2022121645A1 (zh)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112509151B (zh) * 2020-12-11 2021-08-24 华中师范大学 一种教学场景中虚拟对象的真实感生成方法
KR102594258B1 (ko) * 2021-04-26 2023-10-26 한국전자통신연구원 증강현실에서 실제 객체를 가상으로 이동하는 방법 및 장치
CN113434035B (zh) * 2021-05-19 2022-03-29 华中师范大学 一种vr全景图像素材的教学重用方法
CN113253846B (zh) * 2021-06-02 2024-04-12 樊天放 一种基于目光偏转趋势的hid交互系统及方法
CN113487082B (zh) * 2021-07-06 2022-06-10 华中师范大学 一种虚拟实验教学资源的注记复杂度度量和优化配置方法
CN113504890A (zh) * 2021-07-14 2021-10-15 炬佑智能科技(苏州)有限公司 基于ToF相机的扬声器组件的控制方法、装置、设备和介质
KR20230040708A (ko) * 2021-09-16 2023-03-23 현대자동차주식회사 행위 인식 장치 및 방법
US11417069B1 (en) * 2021-10-05 2022-08-16 Awe Company Limited Object and camera localization system and localization method for mapping of the real world
CN113672097B (zh) * 2021-10-22 2022-01-14 华中师范大学 一种立体综合教学场中教师手部感知交互方法
CN114022644B (zh) * 2021-11-05 2022-07-12 华中师范大学 一种教学空间中多虚拟化身的选位方法
CN114237389B (zh) * 2021-12-06 2022-12-09 华中师范大学 一种基于全息成像的增强教学环境中临场感生成方法
US20230186434A1 (en) * 2021-12-09 2023-06-15 Unity Technologies Sf Defocus operations for a virtual display with focus and defocus determined based on camera settings
CN114580575A (zh) * 2022-04-29 2022-06-03 中智行(苏州)科技有限公司 一种自动驾驶视觉感知的可持续闭环链路的构建方法
CN115035278B (zh) * 2022-06-06 2023-06-27 北京新唐思创教育科技有限公司 基于虚拟形象的教学方法、装置、设备及存储介质
US11776206B1 (en) 2022-12-23 2023-10-03 Awe Company Limited Extended reality system and extended reality method with two-way digital interactive digital twins
CN117055724A (zh) * 2023-05-08 2023-11-14 华中师范大学 虚拟教学场景中生成式教学资源系统及其工作方法
CN116416402A (zh) * 2023-06-07 2023-07-11 航天宏图信息技术股份有限公司 一种基于mr协同数字沙盘的数据展示方法和系统
CN116630583A (zh) * 2023-07-24 2023-08-22 北京亮亮视野科技有限公司 虚拟信息的生成方法、装置、电子设备及存储介质
CN117173365A (zh) * 2023-08-07 2023-12-05 华中师范大学 基于声音ai模型的虚拟场景生成方法及系统
CN116860113B (zh) * 2023-08-16 2024-03-22 深圳职业技术大学 一种xr组合场景体验生成方法、系统及存储介质
CN116825293B (zh) * 2023-08-25 2023-11-07 青岛市胶州中心医院 一种可视化产科影像检查处理方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107705353A (zh) * 2017-11-06 2018-02-16 太平洋未来科技(深圳)有限公司 应用于增强现实的虚拟对象光影效果的渲染方法和装置
US20190385373A1 (en) * 2018-06-15 2019-12-19 Google Llc Smart-home device placement and installation using augmented-reality visualizations
CN111009158A (zh) * 2019-12-18 2020-04-14 华中师范大学 一种面向野外实践教学的虚拟学习环境多通道融合展示方法
CN112509151A (zh) * 2020-12-11 2021-03-16 华中师范大学 一种教学场景中虚拟对象的真实感生成方法

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2120664C1 (ru) * 1997-05-06 1998-10-20 Нурахмед Нурисламович Латыпов Система для погружения пользователя в виртуальную реальность
US20070248261A1 (en) * 2005-12-31 2007-10-25 Bracco Imaging, S.P.A. Systems and methods for collaborative interactive visualization of 3D data sets over a network ("DextroNet")
US20150123966A1 (en) * 2013-10-03 2015-05-07 Compedia - Software And Hardware Development Limited Interactive augmented virtual reality and perceptual computing platform
US10203762B2 (en) * 2014-03-11 2019-02-12 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
CN104504671B (zh) * 2014-12-12 2017-04-19 浙江大学 一种用于立体显示的虚实融合图像生成方法
DE102016113060A1 (de) * 2016-07-15 2018-01-18 Beckhoff Automation Gmbh Verfahren zum Steuern eines Objekts
EP3551303A4 (en) * 2016-12-08 2020-07-29 Digital Pulse Pty. Limited COLLABORATIVE LEARNING SYSTEM AND PROCESS USING VIRTUAL REALITY
CN106951079A (zh) * 2017-03-17 2017-07-14 北京康邦科技有限公司 一种自适应课程控制方法和系统
US10387485B2 (en) * 2017-03-21 2019-08-20 International Business Machines Corporation Cognitive image search refinement
CN107479706B (zh) * 2017-08-14 2020-06-16 中国电子科技集团公司第二十八研究所 一种基于HoloLens的战场态势信息构建与交互实现方法
US11442591B2 (en) * 2018-04-09 2022-09-13 Lockheed Martin Corporation System, method, computer readable medium, and viewer-interface for prioritized selection of mutually occluding objects in a virtual environment
CA3105067A1 (en) * 2018-06-27 2020-01-02 Colorado State University Research Foundation Methods and apparatus for efficiently rendering, managing, recording, and replaying interactive, multiuser, virtual reality experiences
WO2021061821A1 (en) * 2019-09-27 2021-04-01 Magic Leap, Inc. Individual viewing in a shared space
US11380069B2 (en) * 2019-10-30 2022-07-05 Purdue Research Foundation System and method for generating asynchronous augmented reality instructions
CN111525552B (zh) * 2020-04-22 2023-06-09 大连理工大学 一种基于特征信息的三阶段短期风电场群功率预测方法
CN111428726B (zh) * 2020-06-10 2020-09-11 中山大学 基于图神经网络的全景分割方法、系统、设备及存储介质
US11887365B2 (en) * 2020-06-17 2024-01-30 Delta Electronics, Inc. Method for producing and replaying courses based on virtual reality and system thereof

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107705353A (zh) * 2017-11-06 2018-02-16 太平洋未来科技(深圳)有限公司 应用于增强现实的虚拟对象光影效果的渲染方法和装置
US20190385373A1 (en) * 2018-06-15 2019-12-19 Google Llc Smart-home device placement and installation using augmented-reality visualizations
CN111009158A (zh) * 2019-12-18 2020-04-14 华中师范大学 一种面向野外实践教学的虚拟学习环境多通道融合展示方法
CN112509151A (zh) * 2020-12-11 2021-03-16 华中师范大学 一种教学场景中虚拟对象的真实感生成方法

Also Published As

Publication number Publication date
US11282404B1 (en) 2022-03-22
CN112509151B (zh) 2021-08-24
CN112509151A (zh) 2021-03-16

Similar Documents

Publication Publication Date Title
WO2022121645A1 (zh) 一种教学场景中虚拟对象的真实感生成方法
US11461958B2 (en) Scene data obtaining method and model training method, apparatus and computer readable storage medium using the same
US20200380769A1 (en) Image processing method and apparatus, storage medium, and computer device
CN102096941B (zh) 虚实融合环境下的光照一致性方法
CN100407798C (zh) 三维几何建模系统和方法
CN108648269A (zh) 三维建筑物模型的单体化方法和系统
CN110717494B (zh) Android移动端室内场景三维重建及语义分割方法
Lu et al. Illustrative interactive stipple rendering
US20140098093A2 (en) Method for the Real-Time-Capable, Computer-Assisted Analysis of an Image Sequence Containing a Variable Pose
CN111292408B (zh) 一种基于注意力机制的阴影生成方法
CN103489216A (zh) 使用摄像机和电视监视器的三维物体扫描
Mudge et al. Viewpoint quality and scene understanding
WO2020134925A1 (zh) 人脸图像的光照检测方法、装置、设备和存储介质
Piumsomboon et al. Physically-based interaction for tabletop augmented reality using a depth-sensing camera for environment mapping
US20180286130A1 (en) Graphical image augmentation of physical objects
CN116097316A (zh) 用于非模态中心预测的对象识别神经网络
Sandnes Sketching 3D immersed experiences rapidly by hand through 2D cross sections
CN111145341A (zh) 一种基于单光源的虚实融合光照一致性绘制方法
Tian et al. Registration and occlusion handling based on the FAST ICP-ORB method for augmented reality systems
Deepu et al. 3D Reconstruction from Single 2D Image
Zhang et al. A multiple camera system with real-time volume reconstruction for articulated skeleton pose tracking
Villa-Uriol et al. Automatic creation of three-dimensional avatars
Zhang et al. A smart method for developing game-based virtual laboratories
Dutreve et al. Easy rigging of face by automatic registration and transfer of skinning parameters
Eichelbaum et al. PointAO-Improved Ambient Occlusion for Point-based Visualization.

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21902352

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21902352

Country of ref document: EP

Kind code of ref document: A1