WO2017088361A1 - View frustum culling method and apparatus based on a virtual reality device - Google Patents

View frustum culling method and apparatus based on a virtual reality device

Info

Publication number
WO2017088361A1
Authority
WO
WIPO (PCT)
Prior art keywords
view
field
rendering
scene
virtual reality
Prior art date
Application number
PCT/CN2016/082511
Other languages
English (en)
French (fr)
Inventor
胡雪莲
Original Assignee
乐视控股(北京)有限公司
乐视致新电子科技(天津)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 乐视控股(北京)有限公司 and 乐视致新电子科技(天津)有限公司
Priority to US15/242,522 (published as US20170154460A1)
Publication of WO2017088361A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/30Clipping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/275Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N13/279Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • H04N13/383Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes

Definitions

  • The present invention relates to computer graphics technology, and in particular to a view frustum culling method and apparatus, a display method, and a virtual reality device based on a virtual reality device.
  • Virtual reality (VR) technology is a computer simulation technique for creating and experiencing virtual worlds.
  • A view frustum is the visible volume of a camera in a scene. Because of the perspective transformation, the view frustum used in computer applications is a truncated four-sided pyramid bounded by six planes: top, bottom, left, right, near, and far.
  • Scene content inside the frustum is visible; content outside it is not.
  • Objects outside the view frustum are invisible, so the invisible parts of the scene can be culled before display without affecting the rendered result.
  • View frustum culling removes this invisible scene data before the vertex data is sent to the rendering pipeline.
  • In current phone-based VR solutions, the fields of view of the left and right eyes are computed from head movement, and the 3D scene is frustum-culled separately for each eye to implement frustum culling on a virtual reality device.
  • The embodiments of the present invention provide a view frustum culling method and apparatus, a display method, and a virtual reality device, to solve the latency caused in the prior art by the two culling computations required when frustum-culling a 3D scene, and to cull a VR 3D scene quickly and conveniently.
  • An embodiment of the present invention provides a view frustum culling method based on a virtual reality device, including: determining a first field of view of the left eye of a human body and a second field of view of the right eye; obtaining the union region of the two fields of view and using it as the view frustum of the human body; and culling the geometry in the 3D scene currently to be presented according to the view frustum.
  • An embodiment of the present invention provides a display method based on a virtual reality device, including: acquiring the geometry in the 3D scene to be presented after culling by the above view frustum culling method; rendering the culled geometry; and displaying the rendered geometry.
  • An embodiment of the present invention provides a view frustum culling apparatus based on a virtual reality device, including:
  • a determining module, configured to determine a first field of view of the left eye of a human body and a second field of view of the right eye;
  • an acquiring module, configured to obtain the union region of the first field of view and the second field of view, and use the union region as the view frustum of the human body;
  • a processing module, configured to cull the geometry in the 3D scene currently to be presented according to the view frustum.
  • An embodiment of the present invention provides a virtual reality device, where the virtual reality device includes an acquiring unit, a rendering unit, a display unit, and the virtual reality device-based view frustum culling apparatus described above;
  • the acquiring unit is configured to acquire the geometry in the 3D scene to be presented after culling by the view frustum culling apparatus;
  • the rendering unit is configured to render the culled geometry in the 3D scene to be presented acquired by the acquiring unit;
  • the display unit is configured to display the geometry in the 3D scene to be presented after rendering by the rendering unit.
  • An embodiment of the present invention provides a virtual reality device, including a processor, a memory, a communication interface, and a bus;
  • the processor, the memory, and the communication interface communicate with one another through the bus;
  • the communication interface is used for information transmission between the virtual reality device and a server;
  • the processor is configured to invoke logic instructions in the memory to perform the method described above.
  • An embodiment of the present invention provides a computer program, including program code, where the program code is used to perform the operations of the method described above.
  • An embodiment of the present invention provides a storage medium for storing the computer program described above.
  • In the view frustum culling method and apparatus, display method, and virtual reality device provided by the embodiments of the present invention, the union of the left-eye and right-eye fields of view is used as the true view frustum of the human body, and frustum culling is performed against this true frustum. This greatly reduces the amount of geometry data to draw, which in turn reduces the computation during frustum culling, improves rendering efficiency, and reduces the rendering latency caused by traditional frustum culling.
  • FIG. 1 is a flowchart of an embodiment of a view frustum culling method based on a virtual reality device according to the present invention;
  • FIG. 2 is a flowchart of an embodiment of a display method based on a virtual reality device according to the present invention;
  • FIG. 3 is a schematic structural diagram of an embodiment of a view frustum culling apparatus based on a virtual reality device according to the present invention;
  • FIG. 4 is a schematic structural diagram of an embodiment of a virtual reality device according to the present invention;
  • FIG. 5 is a schematic diagram of the physical structure of a virtual reality device according to the present invention.
  • FIG. 1 is a flowchart of a view frustum culling method based on a virtual reality device according to an embodiment of the present invention.
  • The view frustum culling method based on a virtual reality device includes the following steps.
  • In practice, when a virtual reality device is used for a VR experience, the fields of view of the left and right eyes differ. To frustum-cull the VR 3D scene, the first field of view of the left eye and the second field of view of the right eye must be obtained in advance.
  • The virtual reality device in this embodiment is a smart device with virtual reality functionality, such as a VR headset or VR glasses; the present invention does not specifically limit this.
  • The two fields of view of the left and right eyes are unioned; the resulting union region is the merged visible area of the two eyes and can therefore be used as the true view frustum of the human body.
  • The geometry in the 3D scene to be presented by the virtual reality device is culled according to the obtained true frustum. This solves the latency problem in the prior art, where frustum-culling a 3D scene requires two separate culling computations for the left-eye and right-eye frusta, and culls the VR 3D scene quickly and conveniently.
  • Because culling is performed against the true frustum, the amount of geometry data is greatly reduced, which reduces the computation during frustum culling, improves rendering efficiency, and reduces the rendering latency caused by traditional frustum culling.
  • Determining the first field of view of the left eye and the second field of view of the right eye in step S11 specifically includes the following steps (not shown in the figures), beginning with acquiring spatial state information of the human head and the system setting parameters of the current virtual reality device.
  • Acquiring the spatial state information of the human head includes receiving somatosensory data of the human head uploaded by a somatosensory device;
  • the spatial state information of the human head is then determined based on the somatosensory data.
  • The spatial state information of the human head in this embodiment specifically includes the orientation, speed, and position of the current head movement.
  • The system setting parameters of the virtual reality device include: the distance between the left and right lenses of the device, the distance from the lenses to the screen, and the size and specifications of the device and its lenses.
  • The orientation information of the head may include the head's displacement along the three spatial dimensions, i.e., forward/backward displacement, up/down displacement, and left/right displacement, or combinations of these displacements.
  • The somatosensory device in this embodiment includes a compass, a gyroscope, a wireless signal module, and at least one sensor for detecting somatosensory data of the human head.
  • The sensor includes one or more of an acceleration sensor, an orientation sensor, a magnetic sensor, a gravity sensor, a rotation vector sensor, and a linear acceleration sensor.
  • S112. Determine the first field of view of the left eye and the second field of view of the right eye according to the system setting parameters and the spatial state information of the human head.
  • The two fields of view are determined from the orientation, speed, and position of the head movement, combined with the system setting parameters of the virtual reality device.
  • In step S13, culling the geometry in the 3D scene according to the view frustum specifically includes the following steps (not shown in the figures):
  • the frustum plane coefficients are computed to determine the space plane equations corresponding to the six planes of the view frustum.
  • The algorithm computes the six planes of the viewing frustum from the world, view, and projection matrices. It is fast and accurate, and it allows the frustum planes to be determined quickly in camera space, world space, or object space.
  • The world matrix and the view matrix are both identity matrices. This means the camera is at the origin of the world coordinate system and faces the positive direction of the Z axis.
  • Deriving the plane equations in turn yields the left clipping plane and, analogously, the remaining clipping planes.
  • An approximate bounding volume is obtained by any of various bounding-volume methods, and each point of the bounding volume is tested against the six frustum planes; there are three cases, depending on whether all, some, or none of the vertices lie within the frustum.
  • FIG. 2 is a flowchart of a display method based on a virtual reality device according to an embodiment of the present invention.
  • The display method based on a virtual reality device specifically includes the following steps.
  • The geometry in the 3D scene to be presented is frustum-culled, and the culled geometry is then rendered.
  • The rendered geometry in the 3D scene to be presented is displayed, realizing display on the virtual reality device.
  • The display method based on a virtual reality device provided by the embodiment of the invention processes the left-eye and right-eye frusta as one, which substantially reduces the amount of geometry data to draw, reduces computation, improves rendering efficiency, and reduces the rendering latency caused by traditional frustum culling.
  • The embodiment of the present invention further provides a view frustum culling apparatus based on a virtual reality device.
  • FIG. 3 shows a view frustum culling apparatus based on a virtual reality device according to an embodiment of the present invention.
  • The view frustum culling apparatus based on a virtual reality device specifically includes a determining module 201, an acquiring module 202, and a processing module 203, wherein:
  • the determining module 201 is configured to determine a first field of view of the left eye of a human body and a second field of view of the right eye;
  • in practice, when a virtual reality device is used for a VR experience, the fields of view of the left and right eyes differ, so the two fields of view must be obtained in advance in order to frustum-cull the VR 3D scene;
  • the virtual reality device in this embodiment is a smart device with virtual reality functionality, such as a VR headset or VR glasses; the present invention does not specifically limit this;
  • the acquiring module 202 is configured to obtain the union region of the first field of view and the second field of view, and use the union region as the view frustum of the human body;
  • the two fields of view of the left and right eyes are unioned, and the resulting union region is the merged visible area of the two eyes; it can therefore be used as the true view frustum of the human body;
  • the processing module 203 is configured to cull the geometry in the 3D scene currently to be presented according to the view frustum.
  • The processing module culls the geometry in the 3D scene currently to be presented by the virtual reality device according to the obtained true view frustum. This solves the latency problem in the prior art, where frustum-culling a 3D scene requires two separate culling computations for the left-eye and right-eye frusta, and culls the VR 3D scene quickly and conveniently.
  • Because culling is performed against the true frustum, the amount of geometry data is greatly reduced, which reduces the computation during frustum culling, improves rendering efficiency, and reduces the rendering latency caused by traditional frustum culling.
  • The determining module 201 includes an acquiring unit and a first determining unit, wherein:
  • the acquiring unit is configured to acquire spatial state information of the human head and the system setting parameters of the current virtual reality device;
  • the first determining unit is configured to determine the first field of view of the left eye and the second field of view of the right eye according to the system setting parameters and the spatial state information of the human head.
  • The acquiring unit further includes a receiving subunit and a determining subunit, wherein:
  • the receiving subunit is configured to receive the somatosensory data of the human head uploaded by a somatosensory device;
  • the determining subunit is configured to determine the spatial state information of the human head according to the somatosensory data received by the receiving subunit.
  • The processing module 203 includes a second determining unit, a judging unit, a third determining unit, and a culling unit, wherein:
  • the second determining unit is configured to determine the space plane equations corresponding to the six planes of the view frustum;
  • the judging unit is configured to determine, according to the space plane equations, the positional relationship between each point coordinate of the geometry in the 3D scene and each plane;
  • the third determining unit is configured to determine the clipping planes of the view frustum according to the positional relationships;
  • the culling unit is configured to perform culling according to the clipping planes.
  • The embodiment of the present invention further provides a virtual reality device.
  • The virtual reality device specifically includes an acquiring unit 10, a rendering unit 30, a display unit 40, and the view frustum culling apparatus 20 based on a virtual reality device of any of the foregoing embodiments, wherein:
  • the acquiring unit 10 is configured to acquire the geometry in the 3D scene to be presented after culling by the view frustum culling apparatus 20;
  • the rendering unit 30 is configured to render the culled geometry in the 3D scene to be presented acquired by the acquiring unit;
  • during rendering, the rendering unit 30 draws only the geometry that intersects the true view frustum, renders the culled geometry in the 3D scene, and applies anti-distortion and anti-dispersion processing before display;
  • the display unit 40 is configured to display the geometry in the 3D scene to be presented after rendering by the rendering unit 30.
  • The virtual reality device provided by the embodiment of the invention processes the left-eye and right-eye frusta as one, which greatly reduces the amount of geometry data to draw, reduces computation, improves rendering efficiency, and reduces the rendering latency caused by traditional frustum culling.
  • Since the apparatus embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant details, refer to the description of the method embodiments.
  • In summary, the view frustum culling method and apparatus, display method, and virtual reality device of the embodiments of the present invention use the union of the left-eye and right-eye fields of view as the true view frustum of the human body and perform frustum culling against this true frustum, greatly reducing the amount of geometry data to draw, thereby reducing the computation during culling, improving rendering efficiency, and reducing the rendering latency caused by traditional frustum culling.
  • FIG. 5 is a schematic diagram of the physical structure of a virtual reality device according to the present invention.
  • A virtual reality device provided by an embodiment of the present invention includes:
  • a processor 510, a communication interface 520, a memory 530, and a bus 540;
  • the processor 510, the communication interface 520, and the memory 530 communicate with one another through the bus 540;
  • the communication interface 520 is used for information transmission between the virtual reality device and a server;
  • the processor 510 is configured to invoke logic instructions in the memory 530 to perform the method described above.
  • An embodiment of the present invention further provides a computer program, including program code, where the program code is used to perform the operations of the method described above.
  • An embodiment of the present invention further provides a storage medium for storing the computer program described in the foregoing embodiments.
  • The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments; the storage medium includes media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Architecture (AREA)
  • Computing Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A view frustum culling method and apparatus based on a virtual reality device, a display method, and a virtual reality device. The view frustum culling method based on a virtual reality device includes: determining a first field of view of the left eye of a human body and a second field of view of the right eye (S11); obtaining a union region of the first field of view and the second field of view, and using the union region as the view frustum of the human body (S12); and culling the geometry in the 3D scene currently to be presented according to the view frustum (S13). The view frustum culling method and apparatus, display method, and virtual reality device effectively reduce the amount of computation during frustum culling, improve rendering efficiency, and reduce the rendering latency caused by traditional frustum culling.

Description

View frustum culling method and apparatus based on a virtual reality device
Cross-Reference
This application claims priority to Chinese Patent Application No. 201510844979.5, entitled "View frustum culling method and apparatus based on a virtual reality device" and filed on November 26, 2015.
Technical Field
The present invention relates to computer graphics technology, and in particular to a view frustum culling method and apparatus, a display method, and a virtual reality device based on a virtual reality device.
Background
Virtual reality (VR) technology is a computer simulation system that can create and let users experience virtual worlds. It uses a computer to generate a simulated environment, an interactive system simulation of 3D scenes and entity behavior fusing multi-source information, and immerses the user in that environment.
A view frustum is the visible volume of a camera in a scene. Because of the perspective transformation, the view frustum used in computer applications is a truncated four-sided pyramid bounded by six planes: top, bottom, left, right, near, and far. Scene content inside the frustum is visible; content outside it is not. When the human eye observes a scene, objects outside the view frustum cannot be seen, so the invisible parts of the scene can be culled before display without affecting the rendered result. Thus, during scene rendering, all vertex data inside the frustum is visible, while scene data outside it is invisible. View frustum culling removes this invisible scene data before the vertex data is sent to the rendering pipeline.
In current phone-based virtual reality (VR) solutions, the fields of view of the left and right eyes are computed from head movement and the 3D scene is frustum-culled separately for each eye, thereby implementing frustum culling for the virtual reality device.
However, in the course of implementing the present invention, the inventor found that the prior art has at least the following problems:
In the prior art, the fields of view of the two eyes must be computed from head movement, and the 3D scene must be frustum-culled separately according to each eye's field of view. Two culling computations are therefore required; not only is the culling complex, but rendering the geometry after the two frustum cullings incurs a rendering delay, which in turn causes a display delay.
Summary
Embodiments of the present invention provide a view frustum culling method and apparatus, a display method, and a virtual reality device based on a virtual reality device, to solve the latency caused in the prior art by the two culling computations required when frustum-culling a 3D scene, and to cull a VR 3D scene quickly and conveniently.
An embodiment of the present invention provides a view frustum culling method based on a virtual reality device, including:
determining a first field of view of a left eye of a human body and a second field of view of a right eye;
obtaining a union region of the first field of view and the second field of view, and using the union region as a view frustum of the human body;
culling geometry in a 3D scene currently to be presented according to the view frustum.
An embodiment of the present invention provides a display method based on a virtual reality device, including:
acquiring the geometry in the 3D scene to be presented after culling by the view frustum culling method described above;
rendering the culled geometry in the 3D scene to be presented;
displaying the rendered geometry in the 3D scene to be presented.
An embodiment of the present invention provides a view frustum culling apparatus based on a virtual reality device, including:
a determining module, configured to determine a first field of view of a left eye of a human body and a second field of view of a right eye;
an acquiring module, configured to obtain a union region of the first field of view and the second field of view, and use the union region as a view frustum of the human body;
a processing module, configured to cull geometry in a 3D scene currently to be presented according to the view frustum.
An embodiment of the present invention provides a virtual reality device, where the virtual reality device includes an acquiring unit, a rendering unit, a display unit, and the view frustum culling apparatus based on a virtual reality device as described above;
the acquiring unit is configured to acquire the geometry in the 3D scene to be presented after culling by the view frustum culling apparatus;
the rendering unit is configured to render the culled geometry in the 3D scene to be presented acquired by the acquiring unit;
the display unit is configured to display the geometry in the 3D scene to be presented after rendering by the rendering unit.
An embodiment of the present invention provides a virtual reality device, including:
a processor, a memory, a communication interface, and a bus; wherein
the processor, the memory, and the communication interface communicate with one another through the bus;
the communication interface is used for information transmission between the virtual reality device and a server;
the processor is configured to invoke logic instructions in the memory to perform the following method:
determining a first field of view of a left eye of a human body and a second field of view of a right eye; obtaining a union region of the first field of view and the second field of view, and using the union region as a view frustum of the human body; culling geometry in a 3D scene currently to be presented according to the view frustum; rendering the culled geometry in the 3D scene to be presented; and displaying the rendered geometry in the 3D scene to be presented.
An embodiment of the present invention provides a computer program, including program code, where the program code is used to perform the following operations:
determining a first field of view of a left eye of a human body and a second field of view of a right eye;
obtaining a union region of the first field of view and the second field of view, and using the union region as a view frustum of the human body;
culling geometry in a 3D scene currently to be presented according to the view frustum;
rendering the culled geometry in the 3D scene to be presented;
displaying the rendered geometry in the 3D scene to be presented.
An embodiment of the present invention provides a storage medium for storing the computer program described above.
In the view frustum culling method and apparatus, display method, and virtual reality device provided by the embodiments of the present invention, the union region of the left-eye and right-eye fields of view is used as the true view frustum of the human body, and frustum culling is performed against this true frustum. This greatly reduces the amount of geometry data to draw, thereby reducing the computation during frustum culling, improving rendering efficiency, and reducing the rendering latency caused by traditional frustum culling.
Brief Description of the Drawings
FIG. 1 is a flowchart of an embodiment of a view frustum culling method based on a virtual reality device according to the present invention;
FIG. 2 is a flowchart of an embodiment of a display method based on a virtual reality device according to the present invention;
FIG. 3 is a schematic structural diagram of an embodiment of a view frustum culling apparatus based on a virtual reality device according to the present invention;
FIG. 4 is a schematic structural diagram of an embodiment of a virtual reality device according to the present invention;
FIG. 5 is a schematic diagram of the physical structure of a virtual reality device according to the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
A person skilled in the art will understand that, unless specifically stated, the singular forms "a", "an", "the", and "said" used herein may also include the plural. It should be further understood that the word "comprising" used in the specification of the present invention refers to the presence of the stated features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
A person skilled in the art will understand that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by a person of ordinary skill in the art to which the present invention belongs. It should also be understood that terms such as those defined in general dictionaries should be understood to have meanings consistent with their meanings in the context of the prior art and, unless specifically defined, will not be interpreted in an idealized or overly formal sense.
FIG. 1 is a flowchart of a view frustum culling method based on a virtual reality device according to an embodiment of the present invention.
Referring to FIG. 1, the view frustum culling method based on a virtual reality device proposed by the embodiment of the present invention specifically includes the following steps:
S11. Determine a first field of view of the left eye of a human body and a second field of view of the right eye.
In practice, when a virtual reality device is used for a VR experience, the fields of view of the left and right eyes of the human body differ. To frustum-cull the VR 3D scene, the first field of view of the left eye and the second field of view of the right eye must be obtained in advance.
It should be noted that the virtual reality device in this embodiment is a smart device with virtual reality functionality, such as a VR headset or VR glasses; the present invention does not specifically limit this.
S12. Obtain the union region of the first field of view and the second field of view, and use the union region as the view frustum of the human body.
Specifically, based on the first field of view of the left eye and the second field of view of the right eye determined in step S11, the two fields of view are unioned; the resulting union region is the merged visible area of the two eyes and can therefore be used as the true view frustum of the human body.
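As a minimal illustrative sketch of this union step (the struct name, the representation of a field of view as four angular half-extents, and the max-per-direction rule are all assumptions for exposition, not the applicant's implementation):

```cpp
#include <algorithm>

// Assumed representation: a monocular field of view as angular extents
// (radians) measured from the head's forward axis in four directions.
struct FieldOfView {
    float left, right, up, down;
};

// Union of the two monocular fields of view: take the widest extent in
// every direction, so the combined frustum covers everything either eye
// can see. This merged region plays the role of the "true view frustum".
FieldOfView UnionFov(const FieldOfView& leftEye, const FieldOfView& rightEye) {
    return {
        std::max(leftEye.left,  rightEye.left),
        std::max(leftEye.right, rightEye.right),
        std::max(leftEye.up,    rightEye.up),
        std::max(leftEye.down,  rightEye.down),
    };
}
```

With this single merged frustum, the scene is culled once instead of once per eye, which is the source of the savings described below.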
S13. Cull the geometry in the 3D scene currently to be presented according to the view frustum.
In this step, the geometry in the 3D scene currently to be presented by the virtual reality device is culled according to the obtained true view frustum of the human body. This solves the latency problem in the prior art, where frustum-culling a 3D scene requires two separate culling computations according to the left-eye and right-eye frusta, and culls the VR 3D scene quickly and conveniently.
In the embodiment of the present invention, the union region of the left-eye and right-eye fields of view is used as the true view frustum of the human body, and frustum culling is performed against this true frustum. This greatly reduces the amount of geometry data to draw, thereby reducing the computation during frustum culling, improving rendering efficiency, and reducing the rendering latency caused by traditional frustum culling.
Further, determining the first field of view of the left eye and the second field of view of the right eye in step S11 specifically includes the following steps (not shown in the figures):
S111. Acquire spatial state information of the human head and the system setting parameters of the current virtual reality device.
Specifically, acquiring the spatial state information of the human head includes:
receiving somatosensory data of the human head uploaded by a somatosensory device;
determining the spatial state information of the human head according to the somatosensory data.
The spatial state information of the human head in this embodiment specifically includes the orientation, speed, and position of the current head movement. The system setting parameters of the virtual reality device include parameter information such as the distance between the left and right lenses of the virtual reality device, the distance from the lenses to the screen, and the size and specifications of the virtual reality device and its lenses.
It should be noted that the orientation information of the head may include the head's displacement along the three spatial dimensions, i.e., forward/backward displacement, up/down displacement, and left/right displacement, or combinations of these displacements.
The somatosensory device in this embodiment includes a compass, a gyroscope, a wireless signal module, and at least one sensor for detecting somatosensory data of the human head. The sensor includes one or more of an acceleration sensor, an orientation sensor, a magnetic sensor, a gravity sensor, a rotation vector sensor, and a linear acceleration sensor.
S112. Determine the first field of view of the left eye and the second field of view of the right eye according to the system setting parameters and the spatial state information of the human head.
Specifically, the first field of view of the left eye and the second field of view of the right eye are determined from the orientation, speed, and position of the head movement, combined with the system setting parameters of the virtual reality device.
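The patent does not give a formula for this step. As a hedged sketch under simple assumptions (flat screen, pinhole model, no lens distortion; the parameter names are illustrative, echoing the lens-to-screen distance listed among the system setting parameters above):

```cpp
#include <cmath>

// Assumed per-eye geometry derived from the device's system parameters.
struct EyeGeometry {
    float halfScreenWidth;   // visible half-width of this eye's screen region
    float halfScreenHeight;  // visible half-height of the region
    float eyeToScreen;       // distance from the eye, through the lens, to the screen
};

struct EyeFov {
    float horizontalHalfAngle;  // radians
    float verticalHalfAngle;    // radians
};

// Pinhole estimate: each angular extent is atan(half-extent / distance).
// A real headset would also fold in head orientation and lens distortion.
EyeFov EstimateFov(const EyeGeometry& g) {
    return {
        std::atan(g.halfScreenWidth  / g.eyeToScreen),
        std::atan(g.halfScreenHeight / g.eyeToScreen),
    };
}
```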
Further, culling the geometry in the 3D scene according to the view frustum in step S13 specifically includes the following steps (not shown in the figures):
S131. Determine the space plane equations corresponding to the six planes of the view frustum.
S132. Determine, according to the space plane equations, the positional relationship between each point coordinate of the geometry in the 3D scene and each plane.
S133. Determine the clipping planes of the view frustum according to the positional relationships.
S134. Perform culling according to the clipping planes.
In practice, by computing the space plane equations corresponding to the six frustum planes and substituting each point coordinate of the geometry in the 3D scene into the six plane equations for comparison, it can be determined whether a point is inside the frustum.
The specific implementation of view frustum culling in the embodiment of the present invention is described in detail below.
It is known that a space plane equation can be expressed as: Ax+By+Cz+D=0
Correspondingly, for a point (x1, y1, z1):
if Ax1+By1+Cz1+D=0, the point lies on the plane;
if Ax1+By1+Cz1+D<0, the point lies on one side of the plane;
if Ax1+By1+Cz1+D>0, the point lies on the other side of the plane.
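A minimal sketch of this sign test applied to all six frustum planes (the Plane struct mirrors Ax+By+Cz+D=0; treating the non-negative side as the interior is an assumed convention that holds once the planes are oriented inward):

```cpp
struct Plane { float a, b, c, d; };  // ax + by + cz + d = 0

// Signed test of a point against one plane: zero on the plane, positive
// on one side, negative on the other, exactly as in the cases above.
inline float PlaneSide(const Plane& p, float x, float y, float z) {
    return p.a * x + p.b * y + p.c * z + p.d;
}

// A point is inside the frustum when it lies on the non-negative side of
// all six inward-facing planes.
bool PointInFrustum(const Plane (&planes)[6], float x, float y, float z) {
    for (const Plane& p : planes)
        if (PlaneSide(p, x, y, z) < 0.0f)
            return false;
    return true;
}
```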
First, the frustum plane coefficients are computed to determine the space plane equations corresponding to the six planes of the view frustum.
This algorithm computes the six planes of the viewing frustum from the world, view, and projection matrices. It is fast and accurate, and it allows the frustum planes to be determined quickly in camera space, world space, or object space.
Starting from the projection matrix, assume that the world matrix and the view matrix are both identity matrices. This means the camera is located at the origin of the world coordinate system and faces the positive direction of the Z axis.
Define a vertex v = (x, y, z, w=1) and a 4x4 projection matrix M = m(i,j), then transform the vertex v with the matrix M; the transformed vertex is v' = (x', y', z', w'). After the transformation, the viewing frustum becomes an axis-aligned box; if the vertex v' is inside this box, the original vertex v is inside the original viewing frustum. Under the 3D programming interface OpenGL, v' is inside this box if the following inequalities all hold:
-w' < x' < w'
-w' < y' < w'
-w' < z' < w'
Suppose we now want to test whether x' is in the left half-space; it suffices to check
-w' < x'
Using the information above, the inequality can be rewritten as:
-(v·row4) < (v·row1)
0 < (v·row4) + (v·row1)
0 < v·(row4 + row1)
This gives the plane equation of the left clipping plane of the untransformed viewing frustum:
x(m41+m11) + y(m42+m12) + z(m43+m13) + w(m44+m14) = 0
When w = 1, the plane equation of the left clipping plane simplifies to the following form:
x(m41+m11) + y(m42+m12) + z(m43+m13) + (m44+m14) = 0
This is a basic plane equation:
ax + by + cz + d = 0
where a = (m41+m11), b = (m42+m12), c = (m43+m13), d = (m44+m14).
This yields the left clipping plane.
Repeating the above steps derives the other clipping planes.
Further, the following conclusions can be drawn:
1. If the matrix M equals the projection matrix P (M = P), the clipping planes given by the algorithm are in camera space;
2. If the matrix M equals the combination of the view matrix V and the projection matrix P (M = V*P), the clipping planes given by the algorithm are in world space;
3. If the matrix M equals the combination of the world matrix W, the view matrix V, and the projection matrix P (M = W*V*P), the clipping planes given by the algorithm are in object space.
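A sketch of this extraction, following the row-combination rule derived above (left plane = row4 + row1, and analogously for the other planes with subtraction; the matrix layout and the normalization step are implementation choices, not prescribed by the patent):

```cpp
#include <array>
#include <cmath>

struct Plane { float a, b, c, d; };               // ax + by + cz + d = 0
using Mat4 = std::array<std::array<float, 4>, 4>; // m[row][col]

// Extract the six frustum planes from the combined matrix M, where, per
// the conclusions above, M = P gives planes in camera space, M = V*P in
// world space, and M = W*V*P in object space.
std::array<Plane, 6> ExtractFrustumPlanes(const Mat4& m) {
    auto plane = [&](int row, float sign) {
        Plane p{ m[3][0] + sign * m[row][0],
                 m[3][1] + sign * m[row][1],
                 m[3][2] + sign * m[row][2],
                 m[3][3] + sign * m[row][3] };
        // Normalize so plane tests return true signed distances.
        const float len = std::sqrt(p.a * p.a + p.b * p.b + p.c * p.c);
        p.a /= len; p.b /= len; p.c /= len; p.d /= len;
        return p;
    };
    return { plane(0, +1.0f),   // left:   row4 + row1
             plane(0, -1.0f),   // right:  row4 - row1
             plane(1, +1.0f),   // bottom: row4 + row2
             plane(1, -1.0f),   // top:    row4 - row2
             plane(2, +1.0f),   // near:   row4 + row3
             plane(2, -1.0f) }; // far:    row4 - row3
}
```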
Further, the steps for determining whether a node is inside the frustum are as follows (a sketch follows this list):
An approximate bounding volume is obtained by any of various bounding-volume methods, and each point on the bounding volume is tested against the six frustum planes. There are the following three cases:
If all vertices are within the frustum, the region under test is certainly within the frustum;
If only some vertices are within the frustum, the region under test intersects the frustum, and we likewise treat it as visible;
If no vertices are within the frustum, the region under test is most likely invisible, with one exception: the frustum may lie entirely inside the cuboid, and this case must be distinguished.
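A hedged sketch of this three-way test for an axis-aligned bounding box, reusing the Plane struct and inward-facing, normalized planes from the extraction sketch above. Instead of looping over every box vertex, it uses the common p-vertex/n-vertex shortcut, which yields the same three cases for boxes; the contained-frustum exception noted above still requires its own separate check:

```cpp
#include <array>

struct Aabb { float minX, minY, minZ, maxX, maxY, maxZ; };

enum class CullResult { Inside, Intersects, Outside };

CullResult ClassifyAabb(const Aabb& b, const std::array<Plane, 6>& planes) {
    bool straddles = false;
    for (const Plane& pl : planes) {
        // Corner farthest along the plane normal (p-vertex) and the corner
        // farthest against it (n-vertex).
        const float px = pl.a >= 0 ? b.maxX : b.minX;
        const float py = pl.b >= 0 ? b.maxY : b.minY;
        const float pz = pl.c >= 0 ? b.maxZ : b.minZ;
        const float nx = pl.a >= 0 ? b.minX : b.maxX;
        const float ny = pl.b >= 0 ? b.minY : b.maxY;
        const float nz = pl.c >= 0 ? b.minZ : b.maxZ;
        if (pl.a * px + pl.b * py + pl.c * pz + pl.d < 0)
            return CullResult::Outside;   // every corner is behind this plane
        if (pl.a * nx + pl.b * ny + pl.c * nz + pl.d < 0)
            straddles = true;             // the box crosses this plane
    }
    return straddles ? CullResult::Intersects : CullResult::Inside;
}
```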
FIG. 2 is a flowchart of a display method based on a virtual reality device according to an embodiment of the present invention.
Referring to FIG. 2, the display method based on a virtual reality device proposed by the embodiment of the present invention specifically includes the following steps:
S21. Acquire the geometry in the 3D scene to be presented after culling by the view frustum culling method of any of the foregoing embodiments.
S22. Render the culled geometry in the 3D scene to be presented.
Specifically, during rendering only the geometry that intersects the true view frustum is drawn; the culled geometry in the 3D scene is rendered, and after rendering, anti-distortion and anti-dispersion processing are applied before display.
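As a brief sketch of this step, building on the ClassifyAabb helper above (SceneObject, DrawGeometry, ApplyAntiDistortion, and ApplyAntiDispersion are hypothetical names standing in for whatever the device's renderer actually provides):

```cpp
#include <array>
#include <vector>

struct SceneObject { Aabb bounds; int meshId; };  // assumed scene record

void DrawGeometry(int meshId);     // assumed renderer entry point
void ApplyAntiDistortion();        // assumed lens-correction pass
void ApplyAntiDispersion();        // assumed chromatic-correction pass

void RenderCulledScene(const std::vector<SceneObject>& scene,
                       const std::array<Plane, 6>& trueFrustum) {
    // Draw only geometry that is inside or intersects the true frustum.
    for (const SceneObject& obj : scene)
        if (ClassifyAabb(obj.bounds, trueFrustum) != CullResult::Outside)
            DrawGeometry(obj.meshId);
    // Post-processing before display, per the text above.
    ApplyAntiDistortion();
    ApplyAntiDispersion();
}
```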
S23. Display the rendered geometry in the 3D scene to be presented.
In the embodiment of the present invention, after the geometry in the 3D scene currently to be presented has been frustum-culled against the true view frustum determined by the union region of the left-eye and right-eye fields of view, the culled geometry is rendered and the rendered geometry is displayed, realizing display on the virtual reality device.
The display method based on a virtual reality device provided by the embodiment of the present invention processes the left-eye and right-eye frusta as one before frustum culling, which greatly reduces the amount of geometry data to draw, reduces computation, improves rendering efficiency, and reduces the rendering latency caused by traditional frustum culling.
In addition, for simplicity of description the foregoing method embodiments are all expressed as a series of action combinations; however, a person skilled in the art should know that the present invention is not limited by the described order of actions. A person skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the present invention.
Based on the same inventive concept as the method, an embodiment of the present invention further provides a view frustum culling apparatus based on a virtual reality device. FIG. 3 is a schematic structural diagram of a view frustum culling apparatus based on a virtual reality device according to an embodiment of the present invention.
Referring to FIG. 3, the view frustum culling apparatus based on a virtual reality device proposed by the embodiment of the present invention specifically includes a determining module 201, an acquiring module 202, and a processing module 203, wherein:
the determining module 201 is configured to determine a first field of view of the left eye of a human body and a second field of view of the right eye;
in practice, when a virtual reality device is used for a VR experience, the fields of view of the left and right eyes of the human body differ; to frustum-cull the VR 3D scene, the first field of view of the left eye and the second field of view of the right eye must be obtained in advance;
it should be noted that the virtual reality device in this embodiment is a smart device with virtual reality functionality, such as a VR headset or VR glasses; the present invention does not specifically limit this;
the acquiring module 202 is configured to obtain the union region of the first field of view and the second field of view, and use the union region as the view frustum of the human body;
based on the first field of view of the left eye and the second field of view of the right eye determined by the determining module 201, the acquiring module unions the two fields of view; the resulting union region is the merged visible area of the two eyes and can therefore be used as the true view frustum of the human body;
the processing module 203 is configured to cull the geometry in the 3D scene currently to be presented according to the view frustum.
In this embodiment, the processing module culls the geometry in the 3D scene currently to be presented by the virtual reality device according to the obtained true view frustum of the human body. This solves the latency problem in the prior art, where frustum-culling a 3D scene requires two separate culling computations according to the left-eye and right-eye frusta, and culls the VR 3D scene quickly and conveniently.
In the embodiment of the present invention, the union region of the left-eye and right-eye fields of view is used as the true view frustum of the human body, and frustum culling is performed against this true frustum. This greatly reduces the amount of geometry data to draw, thereby reducing the computation during frustum culling, improving rendering efficiency, and reducing the rendering latency caused by traditional frustum culling.
Further, the determining module 201 includes an acquiring unit and a first determining unit, wherein:
the acquiring unit is configured to acquire spatial state information of the human head and the system setting parameters of the current virtual reality device;
the first determining unit is configured to determine the first field of view of the left eye and the second field of view of the right eye according to the system setting parameters and the spatial state information of the human head.
The acquiring unit further includes a receiving subunit and a determining subunit, wherein:
the receiving subunit is configured to receive the somatosensory data of the human head uploaded by a somatosensory device;
the determining subunit is configured to determine the spatial state information of the human head according to the somatosensory data received by the receiving subunit.
Further, the processing module 203 includes a second determining unit, a judging unit, a third determining unit, and a culling unit, wherein:
the second determining unit is configured to determine the space plane equations corresponding to the six planes of the view frustum;
the judging unit is configured to determine, according to the space plane equations, the positional relationship between each point coordinate of the geometry in the 3D scene and each plane;
the third determining unit is configured to determine the clipping planes of the view frustum according to the positional relationships;
the culling unit is configured to perform culling according to the clipping planes.
In addition, an embodiment of the present invention further provides a virtual reality device. As shown in FIG. 4, the virtual reality device specifically includes an acquiring unit 10, a rendering unit 30, a display unit 40, and the view frustum culling apparatus 20 based on a virtual reality device of any of the foregoing embodiments, wherein:
the acquiring unit 10 is configured to acquire the geometry in the 3D scene to be presented after culling by the view frustum culling apparatus 20;
the rendering unit 30 is configured to render the culled geometry in the 3D scene to be presented acquired by the acquiring unit 10;
specifically, during rendering the rendering unit 30 draws only the geometry that intersects the true view frustum; it renders the culled geometry in the 3D scene, and after rendering applies anti-distortion and anti-dispersion processing before display;
the display unit 40 is configured to display the geometry in the 3D scene to be presented after rendering by the rendering unit 30.
The virtual reality device provided by the embodiment of the present invention processes the left-eye and right-eye frusta as one before frustum culling, which greatly reduces the amount of geometry data to draw, reduces computation, improves rendering efficiency, and reduces the rendering latency caused by traditional frustum culling.
Since the apparatus embodiments are substantially similar to the corresponding method embodiments, the description is relatively brief; for relevant details, refer to the description of the method embodiments.
In summary, in the view frustum culling method and apparatus, display method, and virtual reality device provided by the embodiments of the present invention, the union region of the left-eye and right-eye fields of view is used as the true view frustum of the human body, and frustum culling is performed against this true frustum, greatly reducing the amount of geometry data to draw, thereby reducing the computation during frustum culling, improving rendering efficiency, and reducing the rendering latency caused by traditional frustum culling.
FIG. 5 is a schematic diagram of the physical structure of a virtual reality device according to the present invention.
Referring to FIG. 5, the virtual reality device provided by the embodiment of the present invention includes:
a processor 510, a communication interface 520, a memory 530, and a bus 540; wherein
the processor 510, the communication interface 520, and the memory 530 communicate with one another through the bus 540;
the communication interface 520 is used for information transmission between the virtual reality device and a server;
the processor 510 is configured to invoke logic instructions in the memory 530 to perform the following method:
determining a first field of view of the left eye of a human body and a second field of view of the right eye; obtaining the union region of the first field of view and the second field of view, and using the union region as the view frustum of the human body; culling the geometry in the 3D scene currently to be presented according to the view frustum; rendering the culled geometry in the 3D scene to be presented; and displaying the rendered geometry in the 3D scene to be presented.
An embodiment of the present invention further provides a computer program, including program code, where the program code is used to perform the following operations:
determining a first field of view of the left eye of a human body and a second field of view of the right eye;
obtaining the union region of the first field of view and the second field of view, and using the union region as the view frustum of the human body;
culling the geometry in the 3D scene currently to be presented according to the view frustum;
rendering the culled geometry in the 3D scene to be presented;
displaying the rendered geometry in the 3D scene to be presented.
An embodiment of the present invention further provides a storage medium for storing the computer program described in the foregoing embodiments.
A person of ordinary skill in the art will understand that all or part of the steps of the foregoing method embodiments may be implemented by hardware controlled by program instructions. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments. The foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (13)

  1. A view frustum culling method based on a virtual reality device, comprising:
    determining a first field of view of a left eye of a human body and a second field of view of a right eye;
    obtaining a union region of the first field of view and the second field of view, and using the union region as a view frustum of the human body;
    culling geometry in a 3D scene currently to be presented according to the view frustum.
  2. The method according to claim 1, wherein the determining a first field of view of a left eye of a human body and a second field of view of a right eye comprises:
    acquiring spatial state information of a human head and system setting parameters of a current virtual reality device;
    determining the first field of view of the left eye and the second field of view of the right eye according to the system setting parameters and the spatial state information of the human head.
  3. The method according to claim 2, wherein the acquiring spatial state information of a human head comprises:
    receiving somatosensory data of the human head uploaded by a somatosensory device;
    determining the spatial state information of the human head according to the somatosensory data.
  4. The method according to claim 1, wherein the culling geometry in a 3D scene according to the view frustum comprises:
    determining space plane equations corresponding to six planes of the view frustum;
    determining, according to the space plane equations, a positional relationship between each point coordinate of the geometry in the 3D scene and each plane;
    determining clipping planes of the view frustum according to the positional relationships;
    performing culling according to the clipping planes.
  5. A display method based on a virtual reality device, comprising:
    acquiring geometry in a 3D scene to be presented after culling by the view frustum culling method based on a virtual reality device according to any one of claims 1 to 4;
    rendering the culled geometry in the 3D scene to be presented;
    displaying the rendered geometry in the 3D scene to be presented.
  6. A view frustum culling apparatus based on a virtual reality device, comprising:
    a determining module, configured to determine a first field of view of a left eye of a human body and a second field of view of a right eye;
    an acquiring module, configured to obtain a union region of the first field of view and the second field of view, and use the union region as a view frustum of the human body;
    a processing module, configured to cull geometry in a 3D scene currently to be presented according to the view frustum.
  7. The apparatus according to claim 6, wherein the determining module comprises:
    an acquiring unit, configured to acquire spatial state information of a human head and system setting parameters of a current virtual reality device;
    a first determining unit, configured to determine the first field of view of the left eye and the second field of view of the right eye according to the system setting parameters and the spatial state information of the human head.
  8. The apparatus according to claim 7, wherein the acquiring unit comprises:
    a receiving subunit, configured to receive somatosensory data of the human head uploaded by a somatosensory device;
    a determining subunit, configured to determine the spatial state information of the human head according to the somatosensory data received by the receiving subunit.
  9. The apparatus according to claim 6, wherein the processing module comprises:
    a second determining unit, configured to determine space plane equations corresponding to six planes of the view frustum;
    a judging unit, configured to determine, according to the space plane equations, a positional relationship between each point coordinate of the geometry in the 3D scene and each plane;
    a third determining unit, configured to determine clipping planes of the view frustum according to the positional relationships;
    a culling unit, configured to perform culling according to the clipping planes.
  10. A virtual reality device, wherein the virtual reality device comprises an acquiring unit, a rendering unit, a display unit, and the view frustum culling apparatus based on a virtual reality device according to any one of claims 6 to 9;
    the acquiring unit is configured to acquire geometry in a 3D scene to be presented after culling by the view frustum culling apparatus;
    the rendering unit is configured to render the culled geometry in the 3D scene to be presented acquired by the acquiring unit;
    the display unit is configured to display the geometry in the 3D scene to be presented after rendering by the rendering unit.
  11. A virtual reality device, comprising:
    a processor, a memory, a communication interface, and a bus; wherein
    the processor, the memory, and the communication interface communicate with one another through the bus;
    the communication interface is used for information transmission between the virtual reality device and a server;
    the processor is configured to invoke logic instructions in the memory to perform the following method:
    determining a first field of view of a left eye of a human body and a second field of view of a right eye; obtaining a union region of the first field of view and the second field of view, and using the union region as a view frustum of the human body; culling geometry in a 3D scene currently to be presented according to the view frustum; rendering the culled geometry in the 3D scene to be presented; and displaying the rendered geometry in the 3D scene to be presented.
  12. A computer program, comprising program code, wherein the program code is used to perform the following operations:
    determining a first field of view of a left eye of a human body and a second field of view of a right eye;
    obtaining a union region of the first field of view and the second field of view, and using the union region as a view frustum of the human body;
    culling geometry in a 3D scene currently to be presented according to the view frustum;
    rendering the culled geometry in the 3D scene to be presented;
    displaying the rendered geometry in the 3D scene to be presented.
  13. A storage medium for storing the computer program according to claim 12.
PCT/CN2016/082511 2015-11-26 2016-05-18 View frustum culling method and apparatus based on a virtual reality device WO2017088361A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/242,522 US20170154460A1 (en) 2015-11-26 2016-08-20 Viewing frustum culling method and device based on virtual reality equipment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510844979.5 2015-11-26
CN201510844979.5A CN105869214A (zh) 2015-11-26 2015-11-26 View frustum culling method and apparatus based on a virtual reality device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/242,522 Continuation-In-Part US20170154460A1 (en) 2015-11-26 2016-08-20 Viewing frustum culling method and device based on virtual reality equipment

Publications (1)

Publication Number Publication Date
WO2017088361A1 (zh) 2017-06-01

Family

ID=56623781

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/082511 WO2017088361A1 (zh) 2015-11-26 2016-05-18 View frustum culling method and apparatus based on a virtual reality device

Country Status (3)

Country Link
US (1) US20170154460A1 (zh)
CN (1) CN105869214A (zh)
WO (1) WO2017088361A1 (zh)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10089788B2 (en) * 2016-05-25 2018-10-02 Google Llc Light-field viewpoint and pixel culling for a head mounted display device
CN106780313A * 2016-12-28 2017-05-31 网易(杭州)网络有限公司 Image processing method and apparatus
US10969740B2 2017-06-27 2021-04-06 Nvidia Corporation System and method for near-eye light field rendering for wide field of view interactive three-dimensional computer graphics
CN109725956B * 2017-10-26 2022-02-01 腾讯科技(深圳)有限公司 Scene rendering method and related apparatus
GB2569176B * 2017-12-08 2022-04-13 Displaylink Uk Ltd Processing visual information for display on a screen
CN108470368B * 2018-03-14 2022-04-22 北京奇艺世纪科技有限公司 Method, apparatus, and electronic device for determining rendered objects in a virtual scene
US11373356B2 2018-03-28 2022-06-28 Robert Bosch Gmbh Method and system for efficient rendering of 3D particle systems for weather effects
US10535180B2 2018-03-28 2020-01-14 Robert Bosch Gmbh Method and system for efficient rendering of cloud weather effect graphics in three-dimensional maps
US10901119B2 2018-03-28 2021-01-26 Robert Bosch Gmbh Method and system for efficient rendering of accumulated precipitation for weather effects
CN110264393B * 2019-05-15 2023-06-23 联想(上海)信息技术有限公司 Information processing method, terminal, and storage medium
CN110930307B * 2019-10-31 2022-07-08 江苏视博云信息技术有限公司 Image processing method and apparatus
WO2022000260A1 * 2020-06-30 2022-01-06 深圳市大疆创新科技有限公司 Map update method and apparatus, movable platform, and storage medium
CN112785530B * 2021-02-05 2024-05-24 广东九联科技股份有限公司 Image rendering method, apparatus, and device for virtual reality, and VR device
CN113345060A * 2021-06-01 2021-09-03 温州大学 Rendering method, frustum culling method, and system for digital twin models
CN115423707B * 2022-08-31 2024-07-23 深圳前海瑞集科技有限公司 Frustum-based point cloud filtering method, robot, and robot working method
CN116880723B * 2023-09-08 2023-11-17 江西格如灵科技股份有限公司 3D scene display method and system


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060080072A1 * 2003-03-12 2006-04-13 Computer Associates Think, Inc. Optimized rendering of dynamic moving bodies
NL1035303C2 * 2008-04-16 2009-10-19 Virtual Proteins B V Interactive virtual reality unit
US20100328428A1 * 2009-06-26 2010-12-30 Booth Jr Lawrence A Optimized stereoscopic visualization
US8611015B2 * 2011-11-22 2013-12-17 Google Inc. User interface
ITTO20111150A1 * 2011-12-14 2013-06-15 Univ Degli Studi Genova Improved three-dimensional stereoscopic representation of virtual objects for a moving observer

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101000460A * 2006-01-10 2007-07-18 钟明 Method for producing circular-screen stereoscopic film images
CN201210356Y * 2008-05-07 2009-03-18 上海海事大学 Virtual ship driving system based on stereoscopic panorama
CN102663805A * 2012-04-18 2012-09-12 东华大学 Projection-based view frustum culling method
CN104881870A * 2015-05-18 2015-09-02 浙江宇视科技有限公司 Method and apparatus for starting live monitoring oriented to a point to be observed

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CAPPS, MICHAEL VINCENT: "Shared-Frustum Stereo Rendering", THESIS (S.M.), 31 December 2000 (2000-12-31), pages 1 - 54, XP055598162 *

Also Published As

Publication number Publication date
US20170154460A1 (en) 2017-06-01
CN105869214A (zh) 2016-08-17

Similar Documents

Publication Publication Date Title
WO2017088361A1 (zh) View frustum culling method and apparatus based on a virtual reality device
US11645801B2 (en) Method for synthesizing figure of virtual object, electronic device, and storage medium
US11838518B2 (en) Reprojecting holographic video to enhance streaming bandwidth/quality
CN106502427B (zh) Virtual reality system and scene presentation method thereof
US10366534B2 (en) Selective surface mesh regeneration for 3-dimensional renderings
JP2021166078A (ja) Surface modeling system and method
WO2017113731A1 (zh) 360-degree panoramic display method, display module, and mobile terminal
EP3155596B1 (en) 3d scanning with depth cameras using mesh sculpting
WO2017003769A1 (en) Low-latency virtual reality display system
EP4070177B1 (en) Systems and methods for providing a mixed-reality pass-through experience
WO2017107537A1 (zh) Virtual reality device and obstacle avoidance method
US11830148B2 (en) Reconstruction of essential visual cues in mixed reality applications
WO2018188479A1 (zh) Augmented-reality-based navigation method and apparatus
WO2019020608A1 (en) Method and system for providing a virtual reality experience based on ultrasonic data
JP7573017B2 (ja) Fast 3D reconstruction using depth information
WO2023093739A1 (zh) Multi-view three-dimensional reconstruction method
KR101631514B1 (ko) Method and apparatus for generating 3D content in an electronic device
WO2023056840A1 (zh) Method, apparatus, device, and medium for displaying a three-dimensional object
WO2019148311A1 (zh) Information processing method and system, cloud processing device, and computer program product
CN110751026B (zh) Video processing method and related apparatus
CN109816765B (zh) Method, apparatus, device, and medium for real-time texture determination for dynamic scenes
CN113744411A (zh) Image processing method and apparatus, device, and storage medium
WO2019146194A1 (ja) Information processing device and information processing method
CN112967329B (zh) Image data optimization method and apparatus, electronic device, and storage medium
TW202332263A Stereoscopic image playback apparatus and stereoscopic image generation method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16867590

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16867590

Country of ref document: EP

Kind code of ref document: A1