WO2017092307A1 - Model rendering method and device (模型渲染方法及装置) - Google Patents

Model rendering method and device (模型渲染方法及装置) Download PDF

Info

Publication number
WO2017092307A1
WO2017092307A1 (PCT/CN2016/088716, CN2016088716W)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual object
object model
coordinate system
model
coordinate
Prior art date
Application number
PCT/CN2016/088716
Other languages
English (en)
French (fr)
Inventor
许小飞
Original Assignee
乐视控股(北京)有限公司
乐视致新电子科技(天津)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 乐视控股(北京)有限公司 and 乐视致新电子科技(天津)有限公司
Priority to US15/247,509 priority Critical patent/US20170154469A1/en
Publication of WO2017092307A1 publication Critical patent/WO2017092307A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering

Definitions

  • The present application relates to the field of virtual reality technologies, and in particular to a model rendering method and apparatus.
  • Virtual reality refers to the use of computer technology as the core of high-tech means to generate a realistic, integrated virtual environment of visual, auditory, tactile and other sensations.
  • Through a display terminal, the user can also interact with objects in the virtual reality.
  • Model rendering refers to the process in which a display terminal acquires the three-dimensional model of a virtual reality scene and draws it according to the model data, so as to display the scene.
  • Since a virtual reality scene contains multiple virtual objects, the created three-dimensional model usually includes a virtual object model for each of them, and the order in which these models are rendered affects the final display: a model rendered later occludes a model rendered earlier. How to provide an effective model rendering method that improves the displayed result has therefore become an urgent technical problem for those skilled in the art.
  • The embodiments of the invention provide a model rendering method and apparatus to solve the technical problem of poor model rendering display in the prior art.
  • The embodiments of the invention further provide a model rendering device.
  • An embodiment of the present invention provides a model rendering method, including: acquiring a virtual object model of each virtual object created for the virtual reality scene; converting the coordinate vector of each model in its local coordinate system into a coordinate vector in the camera coordinate system; and creating a view frustum of the scene and obtaining the virtual object models that will be located inside it.
  • Each virtual object model located in the frustum is then rendered in order of decreasing distance from the camera position, far to near, to display the virtual reality scene.
  • An embodiment of the present invention provides a model rendering apparatus, including:
  • a model acquisition module configured to acquire a virtual object model of each virtual object created for the virtual reality scene;
  • a model transformation module configured to convert the coordinate vector of each virtual object model in its local coordinate system into a coordinate vector in the camera coordinate system;
  • a view determining module configured to create a view frustum of the virtual reality scene and, according to the coordinate vectors of the models in the camera coordinate system and the frustum, obtain the virtual object models that will be located inside the frustum;
  • a model rendering module configured to render each virtual object model located in the frustum in order of decreasing distance from the camera position, far to near, to display the virtual reality scene.
  • An embodiment of the present invention provides a model rendering device, including a memory and a processor, where
  • the memory is configured to store one or more instructions for the processor to invoke and execute;
  • the processor is configured to acquire a virtual object model of each virtual object created for the virtual reality scene, and to convert the coordinate vector of each model in its local coordinate system into a coordinate vector in the camera coordinate system;
  • to create a view frustum of the virtual reality scene and, according to the coordinate vectors of the models in the camera coordinate system and the frustum, obtain the virtual object models that will be located inside the frustum;
  • and to render each virtual object model located in the frustum in order of decreasing distance from the camera position, far to near, to display the virtual reality scene.
  • In the model rendering method and apparatus of the embodiments, each acquired virtual object model of the virtual reality scene is converted into the camera coordinate system, and by creating a view frustum only the models located inside the frustum are rendered, which improves rendering efficiency.
  • The virtual objects are rendered in order of decreasing distance from the camera, so a model close to the camera is rendered later and is not occluded, which improves the displayed result.
  • FIG. 1 is a flowchart of an embodiment of the model rendering method of the present invention;
  • FIG. 2 is a flowchart of still another embodiment of the model rendering method of the present invention;
  • FIG. 3 is a schematic structural diagram of an embodiment of the model rendering apparatus according to the present invention;
  • FIG. 4 is a schematic structural diagram of still another embodiment of the model rendering apparatus according to the present invention;
  • FIG. 5 is a schematic connection diagram of an embodiment of the model rendering device according to the present invention.
  • The technical solutions of the embodiments of the present invention are mainly applied in display terminals such as computers, mobile phones, tablet computers and wearable devices.
  • In the embodiments, after acquiring the virtual object models of the virtual reality scene, the display terminal first converts each model, by coordinate transformation, into the camera coordinate system and then creates a view frustum; only the models located inside the frustum are rendered, being projected from the camera coordinate system onto the two-dimensional screen to draw the virtual objects, while models outside the frustum are discarded without rendering, which improves rendering efficiency.
  • The virtual objects are rendered in order of decreasing distance from the camera, far to near, so a model close to the camera is rendered later and is not occluded, which improves the displayed result.
  • FIG. 1 is a flowchart of an embodiment of the model rendering method provided by the present invention, which may include the following steps:
  • 101: Acquire a virtual object model of each virtual object created for the virtual reality scene. The creation of the virtual object models is the same as in the prior art and is not described here.
  • When the scene is, for example, a cinema scene, the virtual object models may include a seat model, a viewing screen model and the like; for a beach scene, they may include models of water, a yacht, a parasol, sand, etc.
  • 102: Convert the coordinate vector of each virtual object model in its local coordinate system into a coordinate vector in the camera coordinate system.
  • The camera coordinate system, also called the eye coordinate system, is the visual space in which the camera lens (or the human eye) observes objects.
  • Since each virtual object model is created in its local coordinate system, the model must be converted into the camera coordinate system before the virtual object can be displayed; specifically, the coordinate vector of each model in its local coordinate system is converted into a coordinate vector in the camera coordinate system.
  • The coordinate vector may correspond to the coordinates of any point of the model; for computational accuracy it is typically the coordinates of the model's center point.
  • The conversion can be performed by matrix transformation.
  • 103: Create a view frustum of the virtual reality scene and, according to the coordinate vector of each virtual object model in the camera coordinate system and the frustum, obtain the virtual object models that will be located inside the frustum.
  • Since the camera's field of view is not infinite, a view frustum must be created: objects inside the frustum can be projected onto the view plane, while objects outside it are discarded. The frustum can be represented by a matrix, namely a projection matrix, so the models located inside the frustum can be obtained from each model's coordinate vector in the camera coordinate system and the frustum's projection matrix.
  • 104: Render the virtual object models located inside the frustum in order of decreasing distance from the camera position, far to near, to display the virtual reality scene.
  • Once the models inside the frustum are determined, they are rendered far to near; that is, each model is projected from the camera coordinate system onto the two-dimensional screen, and drawing the resulting graphics on the screen realizes the display of the virtual reality scene.
  • In this embodiment, each virtual object model of the virtual reality scene is converted into the camera coordinate system by coordinate transformation, a view frustum is created, and only the models located inside the frustum are rendered, projected from the camera coordinate system onto the two-dimensional screen.
  • Models outside the frustum are discarded without rendering, which improves rendering efficiency, and the virtual objects are rendered in order of decreasing distance from the camera, so a model close to the camera is rendered later and is not occluded, which improves the displayed result.
  • The commonly used coordinate systems are the local coordinate system, the world coordinate system, the camera coordinate system and the screen coordinate system. Since a virtual object model is created in its local coordinate system, it can be transferred to the world coordinate system by matrix transformation, then converted to the camera coordinate system by the view transformation, and finally projected from the camera coordinate system to render the virtual object on the two-dimensional screen.
  • Before the projection, because the camera's field of view is limited, a view frustum representing that field of view must be created; it can be represented by a projection matrix.
  • Applying the projection transformation to a model's coordinate vector in the camera coordinate system and the projection matrix yields the model's clip coordinate vector in the clip coordinate system, from which one can determine whether the model is located inside the frustum and how far it is from the camera position.
  • FIG. 2 is a flowchart of still another embodiment of the model rendering method provided by the present invention.
  • The model matrix represents the transformation information of the virtual object model, including rotation, translation and scaling.
  • The rotation, translation and scaling of each virtual object model in the world coordinate system may first be expressed as a model matrix in the world coordinate system.
  • The coordinate vector of each model in the local coordinate system and the model matrix are put through the model transformation to obtain the model's coordinate vector in the world coordinate system.
  • Specifically, the world coordinate vector may be obtained by the model transformation formula $[X_{world}, Y_{world}, Z_{world}, W_{world}]^{T} = M_{model}.transport() \times [X_{obj}, Y_{obj}, Z_{obj}, W_{obj}]^{T}$, where $(X_{obj}, Y_{obj}, Z_{obj}, W_{obj})$ is the model's coordinate vector in the local coordinate system, $(X_{world}, Y_{world}, Z_{world}, W_{world})$ is its coordinate vector in the world coordinate system, and Mmodel.transport() denotes the transpose of the model matrix.
  • $(X_{obj}, Y_{obj}, Z_{obj})$ are the model's coordinates in the local coordinate system, specifically the coordinates of the center point, and $W_{obj}$ is the homogeneous coordinate in the local coordinate system, where $W_{obj}$ is 0.
  • $(X_{world}, Y_{world}, Z_{world})$ are the coordinates in the world coordinate system, and $W_{world}$ is the homogeneous coordinate in the world coordinate system.
  • The model matrix is the product of a translation matrix, a scaling matrix and a rotation matrix (written out in the Description below).
  • In the translation matrix, $x_1, y_1, z_1$ are the distances moved along the x, y and z axes of the world coordinate system.
  • The scaling matrix is a diagonal matrix in which $x_2, y_2, z_2$ are the scale factors along the x, y and z axes of the world coordinate system.
  • The rotation matrix is the product of matrices rotating about the three coordinate axes x, y, z of the world coordinate system.
  • Homogeneous coordinates are one of the important tools of computer graphics: they make it possible to distinguish vectors from points, and they make affine (linear) geometric transformations easier to apply.
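  • For example (a standard property of homogeneous coordinates, not specific to this application), under a translation matrix a point with homogeneous coordinate $w = 1$ is moved, while a direction vector with $w = 0$ is left unchanged, which is exactly what makes the two distinguishable:

    $$\begin{bmatrix} 1 & 0 & 0 & x_1 \\ 0 & 1 & 0 & y_1 \\ 0 & 0 & 1 & z_1 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = \begin{bmatrix} x + x_1 \\ y + y_1 \\ z + z_1 \\ 1 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 0 & 0 & x_1 \\ 0 & 1 & 0 & y_1 \\ 0 & 0 & 1 & z_1 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 0 \end{bmatrix} = \begin{bmatrix} x \\ y \\ z \\ 0 \end{bmatrix}$$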
  • The local coordinate system can be converted to the world coordinate system by the model transformation.
  • The world coordinate system can be converted to the camera coordinate system by the view transformation.
  • In three-dimensional space the camera is represented by the camera position, the camera orientation vector and the camera up vector, so the view matrix can be obtained from these three quantities.
  • The view matrix can also be understood by treating the user (the camera) as a model: it is the inverse of the model matrix describing the user's transformation in the world coordinate system.
  • Applying the view transformation to each model's coordinate vector in the world coordinate system and the view matrix gives the coordinate vector in the camera coordinate system: $[X_{eye}, Y_{eye}, Z_{eye}, W_{eye}]^{T} = ViewMatrix.transport() \times [X_{world}, Y_{world}, Z_{world}, W_{world}]^{T}$.
  • ViewMatrix.transport() represents the transposed matrix of the view matrix.
  • The view matrix can be obtained by the following construction: suppose the camera position is Vector3 eye, the camera orientation vector is Vector3 at, and the camera up vector is Vector3 up; then forward = at - eye, and the side and corrected up vectors follow from cross products (the code is given in the Description below).
  • The finally computed view matrix is assembled from the side, up and forward vectors in the standard look-at form.
  • In that code, cross denotes the cross product and normalize denotes normalization.
  • left, right, bottom and top define the size of the clipping plane, and zNear (near) and zFar (far) define the distances from the camera to the near and far clipping planes. These six parameters define a solid bounded by six clipping planes: the view frustum, also called the view volume.
  • From the six parameters of the frustum, the frustum can be represented by a projection matrix.
  • The model's coordinate vector in the camera coordinate system and the projection matrix may be put through the projection transformation according to the projection transformation formula $[X_{clip}, Y_{clip}, Z_{clip}, W_{clip}]^{T} = ProjectionMatrix.transport() \times [X_{eye}, Y_{eye}, Z_{eye}, W_{eye}]^{T}$, obtaining the model's clip coordinate vector.
  • ProjectionMatrix.transport() represents the transposed matrix of the projection matrix; $(X_{clip}, Y_{clip}, Z_{clip}, W_{clip})$ are the clip coordinates, $W_{clip}$ being the homogeneous coordinate among them.
  • $W_{clip}$ expresses how far the virtual object model is with respect to the frustum; if $W_{clip}$ is 0, the model is not inside the frustum.
  • Therefore, according to the homogeneous coordinate in the clip coordinate vector, the models whose homogeneous coordinate is non-zero are determined to be located inside the frustum.
  • The magnitude of $W_{clip}$ expresses how far each in-frustum model is from the camera position: the larger the value of $W_{clip}$, the farther the model is from the camera position.
  • Arranging the in-frustum models in descending order of homogeneous coordinate value therefore yields the far-to-near order of their distances from the camera position.
  • Each virtual object model located in the frustum is rendered in that order, i.e., in descending order of coordinate value, to display the virtual reality scene.
  • In this embodiment, the model-view transformation gives each virtual object model's coordinate vector in the camera coordinate system, and creating a view frustum gives the models located inside it; the frustum's projection matrix and the camera coordinate vectors give each model's clip coordinate vector.
  • From the clip coordinate vector, the models located inside the frustum and their distances from the camera position can be determined, so the in-frustum models are rendered in order of decreasing distance from the camera position. This improves rendering efficiency, and a model close to the camera is rendered later and is not occluded, improving the displayed result.
  • FIG. 3 is a schematic structural diagram of an embodiment of a model rendering apparatus according to the present invention.
  • the apparatus may include:
  • the model obtaining module 301 is configured to acquire a virtual object model of each virtual object created for the virtual reality scene.
  • the model transformation module 302 is configured to convert each virtual object model into a coordinate vector in a local coordinate system into a coordinate vector in a camera coordinate system.
  • the camera coordinate system is also the eye coordinate system, which is the visual space of the camera (or the human eye) to observe the object.
  • the virtual object model Since the virtual object model is created in the local coordinate system, in order to realize the display of the virtual object, the virtual object model needs to be converted into the camera coordinate system. Therefore, it is specific to convert each virtual object model into a coordinate vector in a local coordinate system into a coordinate vector in a camera coordinate system.
  • the coordinate vector may correspond to the coordinates of any point in the virtual object model.
  • the coordinates of the center point of the corresponding virtual object model may be specifically.
  • the coordinate vector of the virtual object model is converted into a coordinate vector in the camera coordinate system, which can be converted by matrix transformation.
  • a visor determining module 303 configured to create a view cone of the virtual reality scene, and obtain each virtual space to be located in the view cone according to a coordinate vector of each virtual object model in a camera coordinate system and the view cone Object model.
  • the view cone can be represented by a matrix, that is, a projection matrix, so that the virtual object model located in the view cone can be obtained by the coordinate vector in the camera coordinate system and the projection matrix of the view cone according to each virtual object model. .
  • the model rendering module 304 is configured to sequentially render each virtual object model located in the view cone in a far and near order according to a distance from the camera position to display the virtual reality scene.
  • the virtual object model located in the frustum is determined, and the virtual object model located in the frustum is sequentially rendered in a far and near order from the camera position, that is, the virtual object model is projected from the camera coordinate system onto the two-dimensional screen.
  • the graphic is rendered on the two-dimensional screen, that is, the display of the virtual reality scene can be realized.
  • In this embodiment, each virtual object model of the virtual reality scene is converted into the camera coordinate system by coordinate transformation, a view frustum is created, and only the models located inside the frustum are rendered, projected from the camera coordinate system onto the two-dimensional screen.
  • Models outside the frustum are discarded without rendering, which improves rendering efficiency, and the virtual objects are rendered in order of decreasing distance from the camera.
  • A model close to the camera is therefore rendered later and is not occluded, which improves the displayed result.
  • As still another embodiment, as shown in FIG. 4, the model transformation module 302 may include:
  • a model transformation unit 401 configured to apply the model transformation to each virtual object model's coordinate vector in the local coordinate system and its model matrix in the world coordinate system, obtaining the coordinate vector in the world coordinate system;
  • a view transformation unit 402 configured to apply the view transformation to each model's coordinate vector in the world coordinate system and the view matrix, obtaining the coordinate vector in the camera coordinate system.
  • Since the virtual object models are created in local coordinate systems, they can be transferred to the world coordinate system by matrix transformation and then converted to the camera coordinate system by the view transformation.
  • The model transformation unit may be specifically configured to:
  • express the rotation, translation and scaling of each virtual object model in the world coordinate system as a model matrix in the world coordinate system;
  • apply the model transformation formula $[X_{world}, Y_{world}, Z_{world}, W_{world}]^{T} = M_{model}.transport() \times [X_{obj}, Y_{obj}, Z_{obj}, W_{obj}]^{T}$ to each model's coordinate vector in the local coordinate system and the model matrix, where Mmodel.transport() denotes the transpose of the model matrix, $W_{obj}$ is the model's homogeneous coordinate in the local coordinate system and $W_{world}$ its homogeneous coordinate in the world coordinate system.
  • The view transformation unit may be specifically configured to:
  • obtain the view matrix from the camera position, the camera orientation vector and the camera up vector, and apply the view transformation formula $[X_{eye}, Y_{eye}, Z_{eye}, W_{eye}]^{T} = ViewMatrix.transport() \times [X_{world}, Y_{world}, Z_{world}, W_{world}]^{T}$, where ViewMatrix.transport() denotes the transpose of the view matrix.
  • The view determining module 303 may include:
  • a creating unit 403 configured to create the view frustum of the virtual reality scene and obtain the frustum's projection matrix.
  • left, right, bottom and top define the size of the clipping plane, and zNear (near) and zFar (far) define the distances from the camera to the near and far clipping planes; the six parameters define a solid bounded by six clipping planes, the view frustum, also called the view volume, which can be represented by a projection matrix.
  • a projection transformation unit 404 configured to apply the projection transformation to each model's coordinate vector in the camera coordinate system and the projection matrix, obtaining the model's clip coordinate vector.
  • Specifically, the projection transformation formula $[X_{clip}, Y_{clip}, Z_{clip}, W_{clip}]^{T} = ProjectionMatrix.transport() \times [X_{eye}, Y_{eye}, Z_{eye}, W_{eye}]^{T}$ may be used, where ProjectionMatrix.transport() denotes the transpose of the projection matrix, $(X_{clip}, Y_{clip}, Z_{clip}, W_{clip})$ are the clip coordinates and $W_{clip}$ is the homogeneous coordinate among them.
  • a model determining unit 405 configured to obtain, according to the clip coordinate vector, the virtual object models located inside the frustum.
  • $W_{clip}$ expresses how far the model is with respect to the frustum; according to the homogeneous coordinate in the clip coordinate vector, the models whose homogeneous coordinate is non-zero are determined to be located inside the frustum.
  • The model rendering module 304 may include:
  • a sequence determining unit 406 configured to obtain, according to the clip coordinate vectors, the far-to-near order of the in-frustum models' distances from the camera position.
  • The magnitude of $W_{clip}$ expresses how far each in-frustum model is from the camera position: the larger the value of $W_{clip}$, the farther the model is from the camera position. Arranging the in-frustum models in descending order of homogeneous coordinate value yields the far-to-near order of their distances from the camera position.
  • a model rendering unit 407 configured to render the in-frustum models in order of decreasing distance from the camera position, far to near, i.e., in descending order of homogeneous coordinate value, to display the virtual reality scene.
  • In this embodiment, the model-view transformation gives each virtual object model's coordinate vector in the camera coordinate system; creating a view frustum gives the models located inside it, and the frustum's projection matrix together with the camera coordinate vectors gives each model's clip coordinate vector.
  • From the clip coordinate vector, the models located inside the frustum and their distances from the camera position can be determined, so the in-frustum models are rendered in order of decreasing distance from the camera position; this improves rendering efficiency, and a model close to the camera is rendered later and is not occluded, improving the displayed result.
  • The apparatus embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
  • FIG. 5 is a schematic diagram of a connection between a model rendering device according to an embodiment of the present invention.
  • the device includes: a memory 501 and a processor 502, where
  • the memory 501 is configured to store one or more instructions, where the one or more instructions are used by the processor 502 to invoke execution;
  • the processor 502 is configured to acquire a virtual object model of each virtual object created for the virtual reality scene
  • a view cone for creating the virtual reality scene according to a coordinate vector of each virtual object model in a camera coordinate system and the view cone, obtaining respective virtual object models to be located in the view cone;
  • each of the virtual object models located in the view cone is sequentially rendered in a far and near order according to a distance from the camera position to display the virtual reality scene.
  • The processor 502 is further configured to:
  • apply the model transformation to each model's coordinate vector in the local coordinate system and the model matrix, obtaining the coordinate vector in the world coordinate system;
  • apply the view transformation to each model's coordinate vector in the world coordinate system and the view matrix, obtaining the coordinate vector in the camera coordinate system.
  • The processor 502 is further configured to:
  • create the view frustum of the virtual reality scene and obtain its projection matrix; apply the projection transformation to each model's camera coordinate vector and the projection matrix, obtaining the model's clip coordinate vector; obtain, according to the clip coordinate vectors, the models located inside the frustum and the far-to-near order of their distances from the camera position; and render each in-frustum model in that order, far to near, to display the virtual reality scene.
  • The processor 502 is further configured to:
  • express each model's rotation, translation and scaling in the world coordinate system as a model matrix; apply the model transformation formula given above, where Mmodel.transport() denotes the transpose of the model matrix, $W_{obj}$ is the model's homogeneous coordinate in the local coordinate system and $W_{world}$ its homogeneous coordinate in the world coordinate system; and apply the view transformation formula given above, obtaining the view matrix from the camera position, the camera orientation vector and the camera up vector, where ViewMatrix.transport() denotes the transpose of the view matrix.
  • The processor 502 is further configured to:
  • apply the projection transformation formula given above, where ProjectionMatrix.transport() denotes the transpose of the projection matrix and $W_{clip}$ is the homogeneous coordinate in the clip coordinate vector;
  • obtain the models located inside the frustum by determining, from the homogeneous coordinates in the clip coordinate vectors, that the models whose homogeneous coordinate is non-zero lie inside the frustum;
  • obtain the far-to-near order of the in-frustum models' distances from the camera position by arranging them in descending order of homogeneous coordinate value;
  • and render the in-frustum models in descending order of homogeneous coordinate value to display the virtual reality scene.
  • The processor 502 in this device is a specific implementation of the apparatus of FIG. 4; its specific functions and effects can likewise be found in the description of that apparatus.
  • In the model rendering method and apparatus of the embodiments, each acquired virtual object model of the virtual reality scene is converted into the camera coordinate system, and by creating a view frustum only the models located inside the frustum are rendered, which improves rendering efficiency.
  • Each virtual object is rendered in order of decreasing distance from the camera, so a model close to the camera is rendered later and is not occluded, which improves the displayed result.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Provided are a model rendering method and apparatus, the method including: acquiring a virtual object model of each virtual object created for a virtual reality scene (101); converting the coordinate vector of each virtual object model in its local coordinate system into a coordinate vector in the camera coordinate system (102); creating a view frustum of the virtual reality scene and, according to the coordinate vectors of the virtual object models in the camera coordinate system and the frustum, obtaining the virtual object models that will be located inside the frustum (103); and rendering the virtual object models located inside the frustum in order of decreasing distance from the camera position, far to near (104), to display the virtual reality scene. Model rendering efficiency and the rendered display result are improved.

Description

Model rendering method and device
Cross-reference
This application refers to Chinese Patent Application No. 201510870852.0, entitled "Model rendering method and device" and filed on December 1, 2015, which is incorporated herein by reference in its entirety.
Technical field
The present application relates to the field of virtual reality technologies, and in particular to a model rendering method and apparatus.
Background
Virtual reality refers to the use of computer technology as the core of high-tech means to generate a realistic, integrated virtual environment of visual, auditory, tactile and other sensations. Through a display terminal, users can also interact with objects in the virtual reality.
To realize virtual reality, the virtual reality scene must be described digitally, building a three-dimensional model of the scene.
Model rendering refers to the process in which a display terminal acquires the three-dimensional model of a virtual reality scene and draws it according to the model data, so as to display the scene.
In the course of implementing the present invention, the inventor found the following: since a virtual reality scene contains multiple virtual objects, the created three-dimensional model usually includes a virtual object model for each of them, and the order in which these models are rendered affects the final display, because a model rendered later occludes a model rendered earlier. How to provide an effective model rendering method that improves the displayed result has therefore become an urgent technical problem for those skilled in the art.
Summary of the invention
The embodiments of the present invention provide a model rendering method and apparatus to solve the technical problem of poor model rendering display in the prior art.
The embodiments of the present invention further provide a model rendering device.
An embodiment of the present invention provides a model rendering method, including:
acquiring a virtual object model of each virtual object created for a virtual reality scene;
converting the coordinate vector of each virtual object model in its local coordinate system into a coordinate vector in the camera coordinate system;
creating a view frustum of the virtual reality scene and, according to the coordinate vectors of the virtual object models in the camera coordinate system and the frustum, obtaining the virtual object models that will be located inside the frustum;
rendering the virtual object models located inside the frustum in order of decreasing distance from the camera position, far to near, to display the virtual reality scene.
An embodiment of the present invention provides a model rendering apparatus, including:
a model acquisition module configured to acquire a virtual object model of each virtual object created for the virtual reality scene;
a model transformation module configured to convert the coordinate vector of each virtual object model in its local coordinate system into a coordinate vector in the camera coordinate system;
a view determining module configured to create a view frustum of the virtual reality scene and, according to the coordinate vectors of the models in the camera coordinate system and the frustum, obtain the virtual object models that will be located inside the frustum;
a model rendering module configured to render the virtual object models located inside the frustum in order of decreasing distance from the camera position, far to near, to display the virtual reality scene.
An embodiment of the present invention provides a model rendering device, including a memory and a processor, where the memory is configured to store one or more instructions for the processor to invoke and execute, and the processor is configured to acquire a virtual object model of each virtual object created for the virtual reality scene; to convert the coordinate vector of each model in its local coordinate system into a coordinate vector in the camera coordinate system; to create a view frustum of the virtual reality scene and, according to the coordinate vectors in the camera coordinate system and the frustum, obtain the virtual object models that will be located inside the frustum; and to render the in-frustum models in order of decreasing distance from the camera position, far to near, to display the virtual reality scene.
In the model rendering method and apparatus provided by the embodiments of the present invention, each acquired virtual object model of the virtual reality scene is converted into the camera coordinate system, and by creating a view frustum only the models located inside the frustum are rendered, which improves rendering efficiency; the virtual objects are rendered in order of decreasing distance from the camera, so a model close to the camera is rendered later and is not occluded, which improves the displayed result.
Brief description of the drawings
To explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of an embodiment of the model rendering method of the present invention;
FIG. 2 is a flowchart of still another embodiment of the model rendering method of the present invention;
FIG. 3 is a schematic structural diagram of an embodiment of the model rendering apparatus of the present invention;
FIG. 4 is a schematic structural diagram of still another embodiment of the model rendering apparatus of the present invention;
FIG. 5 is a schematic connection diagram of an embodiment of the model rendering device of the present invention.
Detailed description
To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention; all other embodiments obtained by those of ordinary skill in the art without creative effort based on the embodiments of the present invention fall within the protection scope of the present invention.
The technical solutions of the embodiments of the present invention are mainly applied in display terminals such as computers, mobile phones, tablet computers and wearable devices.
In the embodiments of the present invention, after acquiring the virtual object models of the virtual reality scene, the display terminal first converts each model, by coordinate transformation, into the camera coordinate system and then creates a view frustum; only the models located inside the frustum are rendered, being projected from the camera coordinate system onto the two-dimensional screen to draw the virtual objects, while models outside the frustum are discarded without rendering. This improves rendering efficiency, and because the virtual objects are rendered in order of decreasing distance from the camera, a model close to the camera is rendered later and is not occluded, which improves the displayed result.
FIG. 1 is a flowchart of an embodiment of the model rendering method provided by the present invention; the method may include the following steps:
101: Acquire a virtual object model of each virtual object created for the virtual reality scene.
The creation of the virtual object models is the same as in the prior art and is not described here.
When the virtual reality scene is, for example, a cinema scene, the virtual object models may include a seat model, a viewing screen model and the like; for a beach scene, they may include models of water, a yacht, a parasol, sand, etc.
102: Convert the coordinate vector of each virtual object model in its local coordinate system into a coordinate vector in the camera coordinate system.
The camera coordinate system, also called the eye coordinate system, is the visual space in which the camera lens (or the human eye) observes objects.
Since each virtual object model is created in its local coordinate system, the model must be converted into the camera coordinate system before the virtual object can be displayed; specifically, the coordinate vector of each model in its local coordinate system is converted into a coordinate vector in the camera coordinate system.
The coordinate vector may correspond to the coordinates of any point of the model; for computational accuracy it is typically the coordinates of the model's center point.
The conversion of the model's coordinate vector from the local coordinate system to the camera coordinate system can be performed by matrix transformation.
103: Create a view frustum of the virtual reality scene and, according to the coordinate vector of each model in the camera coordinate system and the frustum, obtain the virtual object models that will be located inside the frustum.
Since the camera's field of view is not infinite, a view frustum must be created: objects inside the frustum can be projected onto the view plane, while objects outside it are discarded without processing.
The frustum can be represented by a matrix, namely a projection matrix, so the models located inside the frustum can be obtained from each model's coordinate vector in the camera coordinate system and the frustum's projection matrix.
104: Render the virtual object models located inside the frustum in order of decreasing distance from the camera position, far to near, to display the virtual reality scene.
Once the models inside the frustum are determined, they are rendered far to near; that is, each model is projected from the camera coordinate system onto the two-dimensional screen, and drawing the resulting graphics on the screen realizes the display of the virtual reality scene.
In this embodiment, each virtual object model of the virtual reality scene is converted into the camera coordinate system by coordinate transformation, a view frustum is created, and only the models located inside the frustum are rendered, projected from the camera coordinate system onto the two-dimensional screen; models outside the frustum are discarded without rendering, which improves rendering efficiency, and the virtual objects are rendered in order of decreasing distance from the camera, so a model close to the camera is rendered later and is not occluded, improving the displayed result.
The commonly used coordinate systems are the local coordinate system, the world coordinate system, the camera coordinate system and the screen coordinate system. Since the virtual object models of a virtual reality scene are created in local coordinate systems, they can be transferred to the world coordinate system by matrix transformation and then converted to the camera coordinate system by the view transformation; projecting from the camera coordinate system renders the virtual objects on the two-dimensional screen.
Before the projection, because the camera's field of view is limited, a view frustum representing that field of view must be created. The frustum can be represented by a projection matrix; applying the projection transformation to a model's coordinate vector in the camera coordinate system and the projection matrix yields the model's clip coordinate vector in the clip coordinate system, from which one can determine whether the model is located inside the frustum and how far it is from the camera position.
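To make the chain concrete, the following is a minimal sketch in C++ (illustrative names and toy matrices; the patent does not prescribe an API) that carries one model center point through model, view and projection matrices to clip coordinates. The matrices are applied directly to column vectors, which for row-major-stored matrices is equivalent to the transport()-based formulas used below:

    // Sketch: local -> world -> eye -> clip for a single model center point.
    // The matrices here are toy placeholders; the real ones come from the
    // constructions in steps 202-205 below.
    #include <array>
    #include <cstdio>

    using Vec4 = std::array<double, 4>;
    using Mat4 = std::array<std::array<double, 4>, 4>; // column-vector convention

    Vec4 apply(const Mat4& m, const Vec4& v) {
        Vec4 r{};
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j) r[i] += m[i][j] * v[j];
        return r;
    }

    int main() {
        Mat4 model = {{{1,0,0,0}, {0,1,0,0}, {0,0,1,0},  {0,0,0,1}}};  // identity: model at the origin
        Mat4 view  = {{{1,0,0,0}, {0,1,0,0}, {0,0,1,-5}, {0,0,0,1}}};  // camera 5 units up the z axis
        Mat4 proj  = {{{1,0,0,0}, {0,1,0,0}, {0,0,-1,-2}, {0,0,-1,0}}}; // toy perspective; last row gives W_clip = -Z_eye
        Vec4 local{0.0, 0.0, 0.0, 1.0};
        Vec4 clip = apply(proj, apply(view, apply(model, local)));
        std::printf("W_clip = %g\n", clip[3]); // prints 5: the eye-space depth used for culling and sorting
    }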
FIG. 2 is a flowchart of still another embodiment of the model rendering method provided by the present invention; the method may include the following steps:
201: Acquire a virtual object model of each virtual object created for the virtual reality scene.
202: Apply the model transformation to the coordinate vector of each virtual object model in its local coordinate system and the model matrix, obtaining the coordinate vector in the world coordinate system.
The model matrix expresses the transformation information of the virtual object model, including rotation, translation and scaling.
The rotation, translation and scaling of each model in the world coordinate system can first be expressed as a model matrix in the world coordinate system.
The coordinate vector of each model in the local coordinate system and the model matrix are then put through the model transformation to obtain the model's coordinate vector in the world coordinate system; that is, multiplying the local coordinate vector by the model matrix converts it into the world coordinate system.
Specifically, the coordinate vector in the world coordinate system can be obtained by the following model transformation formula:

$$\begin{bmatrix} X_{world} \\ Y_{world} \\ Z_{world} \\ W_{world} \end{bmatrix} = M_{model}.transport() \times \begin{bmatrix} X_{obj} \\ Y_{obj} \\ Z_{obj} \\ W_{obj} \end{bmatrix}$$

where $(X_{obj}, Y_{obj}, Z_{obj}, W_{obj})^{T}$ is the model's coordinate vector in the local coordinate system, $(X_{world}, Y_{world}, Z_{world}, W_{world})^{T}$ is its coordinate vector in the world coordinate system, and Mmodel.transport() denotes the transpose of the model matrix. (The matrices in this description are written in row-major storage order, which is why each transformation formula multiplies a transposed matrix with a column coordinate vector.)
Here $(X_{obj}, Y_{obj}, Z_{obj})$ are the model's coordinates in the local coordinate system, specifically the coordinates of its center point, and $W_{obj}$ is the homogeneous coordinate in the local coordinate system, where $W_{obj}$ is 0; $(X_{world}, Y_{world}, Z_{world})$ are the coordinates in the world coordinate system and $W_{world}$ is the homogeneous coordinate in the world coordinate system.
The model matrix is the product of a translation matrix, a scaling matrix and a rotation matrix.
The translation matrix is

$$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ x_1 & y_1 & z_1 & 1 \end{bmatrix}$$

where $x_1, y_1, z_1$ are the distances moved along the x, y and z axes of the world coordinate system.
The scaling matrix is

$$\begin{bmatrix} x_2 & 0 & 0 & 0 \\ 0 & y_2 & 0 & 0 \\ 0 & 0 & z_2 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

where $x_2, y_2, z_2$ are the scale factors along the x, y and z axes of the world coordinate system.
The rotation matrix is the product of matrices rotating about the three coordinate axes x, y, z of the world coordinate system.
The matrix for a rotation by an angle A about the x axis is:

$$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos A & \sin A & 0 \\ 0 & -\sin A & \cos A & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

The matrix for a rotation by an angle A about the y axis is:

$$\begin{bmatrix} \cos A & 0 & -\sin A & 0 \\ 0 & 1 & 0 & 0 \\ \sin A & 0 & \cos A & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

The matrix for a rotation by an angle A about the z axis is:

$$\begin{bmatrix} \cos A & \sin A & 0 & 0 \\ -\sin A & \cos A & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

Homogeneous coordinates are one of the important tools of computer graphics: they make it possible to distinguish vectors from points, and they make affine (linear) geometric transformations easier to apply.
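As an illustration of the composition just described, here is a minimal runnable sketch in C++ (column-vector convention; all names are illustrative, and the multiplication order T · R · S is an assumption, since the text only states that the model matrix is the product of the three):

    // Compose a model matrix from translation, rotation about z, and scaling,
    // then transform a model-space center point into world space.
    #include <array>
    #include <cmath>
    #include <cstdio>

    using Vec4 = std::array<double, 4>;
    using Mat4 = std::array<std::array<double, 4>, 4>; // column-vector convention

    Mat4 matmul(const Mat4& a, const Mat4& b) {
        Mat4 r{};
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                for (int k = 0; k < 4; ++k) r[i][j] += a[i][k] * b[k][j];
        return r;
    }

    Vec4 apply(const Mat4& m, const Vec4& v) {
        Vec4 r{};
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j) r[i] += m[i][j] * v[j];
        return r;
    }

    Mat4 translate(double x, double y, double z) {
        return {{{1,0,0,x}, {0,1,0,y}, {0,0,1,z}, {0,0,0,1}}};
    }
    Mat4 scale(double x, double y, double z) {
        return {{{x,0,0,0}, {0,y,0,0}, {0,0,z,0}, {0,0,0,1}}};
    }
    Mat4 rotateZ(double a) { // angle in radians
        return {{{std::cos(a), -std::sin(a), 0, 0},
                 {std::sin(a),  std::cos(a), 0, 0},
                 {0, 0, 1, 0}, {0, 0, 0, 1}}};
    }

    int main() {
        const double kPi = 3.14159265358979323846;
        // Assumed order T * R * S; the text only says "product".
        Mat4 model = matmul(translate(1, 2, 3), matmul(rotateZ(kPi / 2), scale(2, 2, 2)));
        // W = 1 here so the translation applies; with W = 0 the point would be
        // treated as a direction vector and stay untranslated.
        Vec4 center{1.0, 0.0, 0.0, 1.0};
        Vec4 world = apply(model, center);
        std::printf("world = (%.1f, %.1f, %.1f)\n", world[0], world[1], world[2]); // (1.0, 4.0, 3.0)
    }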
203: Apply the view transformation to the coordinate vector of each virtual object model in the world coordinate system and the view matrix, obtaining the coordinate vector in the camera coordinate system.
The local coordinate system is converted to the world coordinate system by the model transformation, and the world coordinate system is converted to the camera coordinate system by the view transformation.
In three-dimensional space the camera is represented by the camera position, the camera orientation vector and the camera up vector, so the view matrix can be obtained from these three quantities.
The view matrix can also be understood by treating the user (the camera) as a model: it is the inverse of the model matrix describing the user's transformation in the world coordinate system.
Applying the view transformation to each model's world coordinate vector and the view matrix to obtain the camera coordinate vector may include applying the following view transformation formula:

$$\begin{bmatrix} X_{eye} \\ Y_{eye} \\ Z_{eye} \\ W_{eye} \end{bmatrix} = ViewMatrix.transport() \times \begin{bmatrix} X_{world} \\ Y_{world} \\ Z_{world} \\ W_{world} \end{bmatrix}$$

where $(X_{eye}, Y_{eye}, Z_{eye}, W_{eye})^{T}$ is the model's coordinate vector in the camera coordinate system, $W_{eye}$ is the model's homogeneous coordinate in the camera coordinate system, and ViewMatrix.transport() denotes the transpose of the view matrix.
The view matrix can be obtained as follows. Suppose the camera position is Vector3 eye, the camera orientation vector is Vector3 at, and the camera up vector is Vector3 up:

    Vector3 forward, side;
    forward = at - eye;
    normalize(forward);
    side = cross(forward, up);
    normalize(side);
    up = cross(side, forward);

The finally computed view matrix (in the standard look-at form, stored row-major) is

$$\begin{bmatrix} side_x & up_x & -forward_x & 0 \\ side_y & up_y & -forward_y & 0 \\ side_z & up_z & -forward_z & 0 \\ -side \cdot eye & -up \cdot eye & forward \cdot eye & 1 \end{bmatrix}$$

In the code above, cross denotes the cross product and normalize denotes normalization.
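A minimal runnable version of that pseudo-code might look as follows (column-vector convention, so the assembled matrix is the transpose of the row-major ViewMatrix above; names are illustrative):

    // Look-at construction: forward/side/up via cross products, then the view
    // matrix whose last column removes the camera translation, matching the
    // "inverse of the camera's model matrix" description in the text.
    #include <array>
    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };

    Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3 cross(Vec3 a, Vec3 b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }
    Vec3 normalize(Vec3 v) {
        double n = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return {v.x / n, v.y / n, v.z / n};
    }
    double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    std::array<std::array<double, 4>, 4> lookAt(Vec3 eye, Vec3 at, Vec3 up) {
        Vec3 f = normalize(sub(at, eye));   // forward = at - eye
        Vec3 s = normalize(cross(f, up));   // side   = forward x up
        Vec3 u = cross(s, f);               // up     = side x forward
        return {{{ s.x,  s.y,  s.z, -dot(s, eye)},
                 { u.x,  u.y,  u.z, -dot(u, eye)},
                 {-f.x, -f.y, -f.z,  dot(f, eye)},
                 { 0,    0,    0,    1}}};
    }

    int main() {
        auto v = lookAt({0, 0, 5}, {0, 0, 0}, {0, 1, 0}); // camera at z = 5 looking at the origin
        std::printf("row2 = (%g %g %g %g)\n", v[2][0], v[2][1], v[2][2], v[2][3]); // (0 0 1 -5)
    }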
204: Create the view frustum of the virtual reality scene and obtain the frustum's projection matrix.
left, right, bottom and top define the size of the clipping plane, and zNear (near) and zFar (far) define the distances from the camera to the near and far clipping planes. These six parameters define a solid bounded by six clipping planes: the view frustum, also called the view volume.
From the six parameters of the frustum, the frustum can be represented by a projection matrix.
205: Apply the projection transformation to the coordinate vector of each virtual object model in the camera coordinate system and the projection matrix, obtaining the model's clip coordinate vector.
Specifically, the model's camera coordinate vector and the projection matrix may be put through the projection transformation according to the following projection transformation formula:

$$\begin{bmatrix} X_{clip} \\ Y_{clip} \\ Z_{clip} \\ W_{clip} \end{bmatrix} = ProjectionMatrix.transport() \times \begin{bmatrix} X_{eye} \\ Y_{eye} \\ Z_{eye} \\ W_{eye} \end{bmatrix}$$

where $(X_{clip}, Y_{clip}, Z_{clip}, W_{clip})^{T}$ is the clip coordinate vector, ProjectionMatrix.transport() denotes the transpose of the projection matrix, $(X_{clip}, Y_{clip}, Z_{clip}, W_{clip})$ are the clip coordinates, and $W_{clip}$ is the homogeneous coordinate among them.
Assume top = t, bottom = b, left = l, right = r, near = n and far = f.
Then the projection matrix (in the standard perspective frustum form, stored row-major) is:

$$\begin{bmatrix} \frac{2n}{r-l} & 0 & 0 & 0 \\ 0 & \frac{2n}{t-b} & 0 & 0 \\ \frac{r+l}{r-l} & \frac{t+b}{t-b} & -\frac{f+n}{f-n} & -1 \\ 0 & 0 & -\frac{2fn}{f-n} & 0 \end{bmatrix}$$
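A sketch of how such a projection matrix can be built from the six parameters, assuming the standard OpenGL-style frustum form in the column-vector convention (i.e. the transpose of the row-major matrix above; names are illustrative):

    #include <array>
    #include <cstdio>

    using Mat4 = std::array<std::array<double, 4>, 4>; // column-vector convention

    // Perspective frustum from (l, r, b, t, n, f).
    Mat4 frustum(double l, double r, double b, double t, double n, double f) {
        return {{{2*n/(r-l), 0,          (r+l)/(r-l),   0},
                 {0,         2*n/(t-b),  (t+b)/(t-b),   0},
                 {0,         0,         -(f+n)/(f-n),  -2*f*n/(f-n)},
                 {0,         0,         -1,             0}}};
    }

    int main() {
        Mat4 p = frustum(-1, 1, -1, 1, 1, 100);
        // The last row (0, 0, -1, 0) is what makes W_clip = -Z_eye,
        // the depth value used for culling and sorting in steps 206-208.
        std::printf("p[3][2] = %g\n", p[3][2]); // -1
    }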
206: Obtain, according to the clip coordinate vector, the virtual object models located inside the frustum.
$W_{clip}$ expresses how far the virtual object model is with respect to the frustum.
If $W_{clip}$ is 0, the corresponding model is not inside the frustum.
Therefore, according to the homogeneous coordinate in the clip coordinate vector, the models whose homogeneous coordinate is non-zero are determined to be located inside the frustum.
207: Obtain, according to the clip coordinate vectors, the far-to-near order of the in-frustum models' distances from the camera position.
The magnitude of $W_{clip}$ expresses how far each in-frustum model is from the camera position: the larger the value of $W_{clip}$, the farther the model is from the camera position.
Specifically, arranging the in-frustum models in descending order of their homogeneous coordinate values yields the far-to-near order of their distances from the camera position.
208: Render the virtual object models located inside the frustum in order of decreasing distance from the camera position, far to near, to display the virtual reality scene.
That is, the in-frustum models are rendered one by one in descending order of homogeneous coordinate value to display the virtual reality scene.
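Steps 206 to 208 can be sketched as follows (illustrative names, with data borrowed from the beach-scene example; the inside/outside test on $W_{clip} = 0$ and the descending-$W_{clip}$ sort follow the text above):

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct Model {
        const char* name;
        double wClip; // homogeneous coordinate of the model's clip-space center
    };

    void renderFarToNear(std::vector<Model> models) {
        // Step 206: discard models outside the frustum (W_clip == 0 per the text).
        models.erase(std::remove_if(models.begin(), models.end(),
                                    [](const Model& m) { return m.wClip == 0.0; }),
                     models.end());
        // Step 207: larger W_clip means farther from the camera -> sort descending.
        std::sort(models.begin(), models.end(),
                  [](const Model& a, const Model& b) { return a.wClip > b.wClip; });
        // Step 208: draw far to near, so nearer models overdraw farther ones.
        for (const Model& m : models) std::printf("render %s\n", m.name);
    }

    int main() {
        renderFarToNear({{"parasol", 3.0}, {"yacht", 8.0}, {"sand", 0.0}, {"water", 5.0}});
        // prints: render yacht, render water, render parasol
    }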
In this embodiment, the model-view transformation gives each virtual object model's coordinate vector in the camera coordinate system; creating a view frustum gives the models located inside it; and combining the frustum's projection matrix with each model's camera coordinate vector gives the model's clip coordinate vector. From the clip coordinate vector one can determine which models lie inside the frustum and how far each of them is from the camera position, so the in-frustum models can be rendered in order of decreasing distance from the camera position. This improves rendering efficiency, and a model close to the camera is rendered later and is not occluded, improving the displayed result.
FIG. 3 is a schematic structural diagram of an embodiment of the model rendering apparatus provided by the present invention. The apparatus is applied in a display terminal and may include:
a model acquisition module 301 configured to acquire a virtual object model of each virtual object created for the virtual reality scene;
a model transformation module 302 configured to convert the coordinate vector of each virtual object model in its local coordinate system into a coordinate vector in the camera coordinate system.
The camera coordinate system, also called the eye coordinate system, is the visual space in which the camera lens (or the human eye) observes objects.
Since each virtual object model is created in its local coordinate system, the model must be converted into the camera coordinate system before the virtual object can be displayed; module 302 therefore converts each model's local coordinate vector into a camera coordinate vector.
The coordinate vector may correspond to the coordinates of any point of the model; for computational accuracy it is typically the model's center point, and the conversion can be performed by matrix transformation.
a view determining module 303 configured to create a view frustum of the virtual reality scene and, according to the coordinate vectors of the models in the camera coordinate system and the frustum, obtain the virtual object models that will be located inside the frustum.
Since the camera's field of view is not infinite, a view frustum must be created: objects inside the frustum can be projected onto the view plane, while objects outside it are discarded without processing. The frustum can be represented by a matrix, namely a projection matrix, so the in-frustum models can be obtained from each model's camera coordinate vector and the frustum's projection matrix.
a model rendering module 304 configured to render the virtual object models located inside the frustum in order of decreasing distance from the camera position, far to near, to display the virtual reality scene.
Once the models inside the frustum are determined, they are rendered far to near, i.e., projected from the camera coordinate system onto the two-dimensional screen and drawn there, realizing the display of the virtual reality scene.
In this embodiment, each virtual object model of the virtual reality scene is converted into the camera coordinate system by coordinate transformation, a view frustum is created, and only the models located inside the frustum are rendered, projected from the camera coordinate system onto the two-dimensional screen; models outside the frustum are discarded without rendering, which improves rendering efficiency, and the virtual objects are rendered in order of decreasing distance from the camera, so a model close to the camera is rendered later and is not occluded, improving the displayed result.
As still another embodiment, as shown in FIG. 4, the model transformation module 302 may include:
a model transformation unit 401 configured to apply the model transformation to each virtual object model's coordinate vector in the local coordinate system and its model matrix in the world coordinate system, obtaining the coordinate vector in the world coordinate system;
a view transformation unit 402 configured to apply the view transformation to each model's coordinate vector in the world coordinate system and the view matrix, obtaining the coordinate vector in the camera coordinate system.
Since the virtual object models of a virtual reality scene are created in local coordinate systems, they can be transferred to the world coordinate system by matrix transformation and then converted to the camera coordinate system by the view transformation.
As yet another embodiment, the model transformation unit may be specifically configured to:
express the rotation, translation and scaling of each virtual object model in the world coordinate system as a model matrix in the world coordinate system; and apply the model transformation formula given above to each model's local coordinate vector and the model matrix ($[X_{world}, Y_{world}, Z_{world}, W_{world}]^{T} = M_{model}.transport() \times [X_{obj}, Y_{obj}, Z_{obj}, W_{obj}]^{T}$), where Mmodel.transport() denotes the transpose of the model matrix, $W_{obj}$ is the model's homogeneous coordinate in the local coordinate system and $W_{world}$ its homogeneous coordinate in the world coordinate system.
The view transformation unit may be specifically configured to:
obtain the view matrix from the camera position, the camera orientation vector and the camera up vector; and apply the view transformation formula given above to each model's world coordinate vector and the view matrix ($[X_{eye}, Y_{eye}, Z_{eye}, W_{eye}]^{T} = ViewMatrix.transport() \times [X_{world}, Y_{world}, Z_{world}, W_{world}]^{T}$), where $(X_{eye}, Y_{eye}, Z_{eye}, W_{eye})^{T}$ is the model's coordinate vector in the camera coordinate system, $W_{eye}$ is the model's homogeneous coordinate in the camera coordinate system, and ViewMatrix.transport() denotes the transpose of the view matrix.
Before the projection, because the camera's field of view is limited, a view frustum representing that field of view must be created; the frustum can be represented by a projection matrix, and applying the projection transformation to a model's camera coordinate vector and the projection matrix yields the model's clip coordinate vector, from which one can determine whether the model is located inside the frustum and how far it is from the camera position. Therefore, as still another embodiment, as shown in FIG. 4, the view determining module 303 may include:
a creating unit 403 configured to create the view frustum of the virtual reality scene and obtain the frustum's projection matrix.
As before, left, right, bottom and top define the size of the clipping plane, and zNear and zFar define the distances from the camera to the near and far clipping planes; the six parameters define a solid bounded by six clipping planes, the view frustum, also called the view volume, which can be represented by a projection matrix.
a projection transformation unit 404 configured to apply the projection transformation to each model's coordinate vector in the camera coordinate system and the projection matrix, obtaining the model's clip coordinate vector.
Specifically, the projection transformation formula given above may be used ($[X_{clip}, Y_{clip}, Z_{clip}, W_{clip}]^{T} = ProjectionMatrix.transport() \times [X_{eye}, Y_{eye}, Z_{eye}, W_{eye}]^{T}$), where ProjectionMatrix.transport() denotes the transpose of the projection matrix, $(X_{clip}, Y_{clip}, Z_{clip}, W_{clip})$ are the clip coordinates and $W_{clip}$ is the homogeneous coordinate among them.
a model determining unit 405 configured to obtain, according to the clip coordinate vector, the virtual object models located inside the frustum.
$W_{clip}$ expresses how far the model is with respect to the frustum; if $W_{clip}$ is 0, the corresponding model is not inside the frustum. Therefore, according to the homogeneous coordinate in the clip coordinate vector, the models whose homogeneous coordinate is non-zero are determined to be located inside the frustum.
The model rendering module 304 may include:
a sequence determining unit 406 configured to obtain, according to the clip coordinate vectors, the far-to-near order of the in-frustum models' distances from the camera position.
The magnitude of $W_{clip}$ expresses how far each in-frustum model is from the camera position: the larger the value of $W_{clip}$, the farther the model. Specifically, arranging the in-frustum models in descending order of homogeneous coordinate value yields the far-to-near order of their distances from the camera position.
a model rendering unit 407 configured to render the in-frustum models in order of decreasing distance from the camera position, far to near, to display the virtual reality scene.
That is, the in-frustum models are rendered one by one in descending order of homogeneous coordinate value to display the virtual reality scene.
In this embodiment, the model-view transformation gives each model's camera coordinate vector; creating a view frustum gives the models located inside it; the frustum's projection matrix and the camera coordinate vectors give the clip coordinate vectors, from which the in-frustum models and their distances from the camera position are determined, so that the in-frustum models are rendered in order of decreasing distance from the camera position. This improves rendering efficiency, and a model close to the camera is rendered later and is not occluded, improving the displayed result.
The apparatus embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
FIG. 5 is a schematic connection diagram of an embodiment of the model rendering device provided by the present invention. The device includes a memory 501 and a processor 502, where
the memory 501 is configured to store one or more instructions for the processor 502 to invoke and execute;
the processor 502 is configured to acquire a virtual object model of each virtual object created for the virtual reality scene;
to convert the coordinate vector of each virtual object model in its local coordinate system into a coordinate vector in the camera coordinate system;
to create a view frustum of the virtual reality scene and, according to the coordinate vectors of the models in the camera coordinate system and the frustum, obtain the virtual object models that will be located inside the frustum;
and to render the in-frustum models in order of decreasing distance from the camera position, far to near, to display the virtual reality scene.
The processor 502 is further configured to:
apply the model transformation to each model's local coordinate vector and the model matrix, obtaining the coordinate vector in the world coordinate system;
apply the view transformation to each model's world coordinate vector and the view matrix, obtaining the coordinate vector in the camera coordinate system.
The processor 502 is further configured to:
create the view frustum of the virtual reality scene and obtain the frustum's projection matrix;
apply the projection transformation to each model's camera coordinate vector and the projection matrix, obtaining the model's clip coordinate vector;
obtain, according to the clip coordinate vector, the models located inside the frustum;
further, obtain, according to the clip coordinate vectors, the far-to-near order of the in-frustum models' distances from the camera position;
and render the in-frustum models in order of decreasing distance from the camera position, far to near, to display the virtual reality scene.
The processor 502 is further configured to:
express the rotation, translation and scaling of each model in the world coordinate system as a model matrix in the world coordinate system;
apply the model transformation formula given above to each model's local coordinate vector and the model matrix ($[X_{world}, Y_{world}, Z_{world}, W_{world}]^{T} = M_{model}.transport() \times [X_{obj}, Y_{obj}, Z_{obj}, W_{obj}]^{T}$), where Mmodel.transport() denotes the transpose of the model matrix, $W_{obj}$ is the model's homogeneous coordinate in the local coordinate system and $W_{world}$ its homogeneous coordinate in the world coordinate system;
obtain the view matrix from the camera position, the camera orientation vector and the camera up vector, and apply the view transformation formula given above to each model's world coordinate vector and the view matrix ($[X_{eye}, Y_{eye}, Z_{eye}, W_{eye}]^{T} = ViewMatrix.transport() \times [X_{world}, Y_{world}, Z_{world}, W_{world}]^{T}$), where $(X_{eye}, Y_{eye}, Z_{eye}, W_{eye})^{T}$ is the model's camera coordinate vector, $W_{eye}$ its homogeneous coordinate there, and ViewMatrix.transport() denotes the transpose of the view matrix.
The processor 502 is further configured to:
apply the projection transformation formula given above to each model's camera coordinate vector and the projection matrix ($[X_{clip}, Y_{clip}, Z_{clip}, W_{clip}]^{T} = ProjectionMatrix.transport() \times [X_{eye}, Y_{eye}, Z_{eye}, W_{eye}]^{T}$), where ProjectionMatrix.transport() denotes the transpose of the projection matrix and $W_{clip}$ is the homogeneous coordinate in the clip coordinate vector;
determine, according to the homogeneous coordinates in the clip coordinate vectors, that the models whose homogeneous coordinate is non-zero are located inside the frustum;
arrange the in-frustum models in descending order of homogeneous coordinate value to obtain the far-to-near order of their distances from the camera position;
and render the in-frustum models in descending order of homogeneous coordinate value to display the virtual reality scene.
The processor 502 in this device is a specific implementation of the apparatus of FIG. 4; its specific functions and effects can likewise be found in the description of that apparatus.
From the description of the above embodiments, those skilled in the art can clearly understand that the embodiments can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware. Based on this understanding, the technical solution, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product can be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk or an optical disc, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform the methods described in the embodiments or in parts of the embodiments.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or substitute equivalents for some of the technical features; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Industrial applicability
In the model rendering method and apparatus provided by the embodiments of the present invention, each acquired virtual object model of the virtual reality scene is converted into the camera coordinate system, and by creating a view frustum only the models located inside the frustum are rendered, which improves rendering efficiency; the virtual objects are rendered in order of decreasing distance from the camera, so a model close to the camera is rendered later and is not occluded, which improves the displayed result.

Claims (11)

  1. A model rendering method, comprising:
    acquiring a virtual object model of each virtual object created for a virtual reality scene;
    converting the coordinate vector of each virtual object model in its local coordinate system into a coordinate vector in the camera coordinate system;
    creating a view frustum of the virtual reality scene and, according to the coordinate vectors of the virtual object models in the camera coordinate system and the frustum, obtaining the virtual object models that will be located inside the frustum; and
    rendering the virtual object models located inside the frustum in order of decreasing distance from the camera position, far to near, to display the virtual reality scene.
  2. The method according to claim 1, wherein converting the coordinate vector of each virtual object model in its local coordinate system into a coordinate vector in the camera coordinate system comprises:
    applying the model transformation to each model's coordinate vector in the local coordinate system and the model matrix, obtaining the coordinate vector in the world coordinate system; and
    applying the view transformation to each model's coordinate vector in the world coordinate system and the view matrix, obtaining the coordinate vector in the camera coordinate system.
  3. The method according to claim 1 or 2, wherein creating the view frustum of the virtual reality scene and obtaining the virtual object models that will be located inside the frustum comprises:
    creating the view frustum of the virtual reality scene and obtaining the frustum's projection matrix;
    applying the projection transformation to each model's coordinate vector in the camera coordinate system and the projection matrix, obtaining the model's clip coordinate vector; and
    obtaining, according to the clip coordinate vector, the virtual object models located inside the frustum;
    and wherein rendering the in-frustum models in order of decreasing distance from the camera position to display the virtual reality scene comprises:
    obtaining, according to the clip coordinate vectors, the far-to-near order of the in-frustum models' distances from the camera position; and
    rendering the in-frustum models in order of decreasing distance from the camera position, far to near, to display the virtual reality scene.
  4. The method according to claim 3, wherein applying the model transformation to each model's coordinate vector in the local coordinate system and the model matrix to obtain the coordinate vector in the world coordinate system comprises:
    expressing the rotation, translation and scaling of each virtual object model in the world coordinate system as a model matrix in the world coordinate system; and
    applying the following model transformation formula to each model's coordinate vector in the local coordinate system and the model matrix:
    $$\begin{bmatrix} X_{world} \\ Y_{world} \\ Z_{world} \\ W_{world} \end{bmatrix} = M_{model}.transport() \times \begin{bmatrix} X_{obj} \\ Y_{obj} \\ Z_{obj} \\ W_{obj} \end{bmatrix}$$
    where $(X_{obj}, Y_{obj}, Z_{obj}, W_{obj})^{T}$ is the model's coordinate vector in the local coordinate system, $(X_{world}, Y_{world}, Z_{world}, W_{world})^{T}$ is its coordinate vector in the world coordinate system, Mmodel.transport() denotes the transpose of the model matrix, $W_{obj}$ is the model's homogeneous coordinate in the local coordinate system and $W_{world}$ its homogeneous coordinate in the world coordinate system;
    and wherein applying the view transformation to each model's coordinate vector in the world coordinate system and the view matrix of the camera coordinate system to obtain the coordinate vector in the camera coordinate system comprises:
    obtaining the view matrix from the camera position, the camera orientation vector and the camera up vector; and
    applying the following view transformation formula to each model's coordinate vector in the world coordinate system and the view matrix:
    $$\begin{bmatrix} X_{eye} \\ Y_{eye} \\ Z_{eye} \\ W_{eye} \end{bmatrix} = ViewMatrix.transport() \times \begin{bmatrix} X_{world} \\ Y_{world} \\ Z_{world} \\ W_{world} \end{bmatrix}$$
    where $(X_{eye}, Y_{eye}, Z_{eye}, W_{eye})^{T}$ is the model's coordinate vector in the camera coordinate system, $W_{eye}$ is the model's homogeneous coordinate in the camera coordinate system, and ViewMatrix.transport() denotes the transpose of the view matrix.
  5. The method according to claim 3, wherein applying the projection transformation to the model's coordinate vector in the camera coordinate system and the projection matrix to obtain the model's clip coordinate vector comprises:
    applying the following projection transformation formula:
    $$\begin{bmatrix} X_{clip} \\ Y_{clip} \\ Z_{clip} \\ W_{clip} \end{bmatrix} = ProjectionMatrix.transport() \times \begin{bmatrix} X_{eye} \\ Y_{eye} \\ Z_{eye} \\ W_{eye} \end{bmatrix}$$
    where $(X_{clip}, Y_{clip}, Z_{clip}, W_{clip})^{T}$ is the model's clip coordinate vector, ProjectionMatrix.transport() denotes the transpose of the projection matrix, and $W_{clip}$ is the homogeneous coordinate in the clip coordinate vector;
    obtaining, according to the clip coordinate vector, the virtual object models located inside the frustum comprises:
    determining, according to the homogeneous coordinate in the clip coordinate vector, that the models whose homogeneous coordinate is non-zero are located inside the frustum;
    obtaining, according to the clip coordinate vectors, the far-to-near order of the in-frustum models' distances from the camera position comprises:
    arranging the in-frustum models in descending order of homogeneous coordinate value according to the homogeneous coordinates in the clip coordinate vectors, obtaining the far-to-near order of their distances from the camera position;
    and rendering the in-frustum models in order of decreasing distance from the camera position to display the virtual reality scene comprises:
    rendering the in-frustum models in descending order of homogeneous coordinate value to display the virtual reality scene.
  6. A model rendering apparatus, comprising:
    a model acquisition module configured to acquire a virtual object model of each virtual object created for a virtual reality scene;
    a model transformation module configured to convert the coordinate vector of each virtual object model in its local coordinate system into a coordinate vector in the camera coordinate system;
    a view determining module configured to create a view frustum of the virtual reality scene and, according to the coordinate vectors of the virtual object models in the camera coordinate system and the frustum, obtain the virtual object models that will be located inside the frustum; and
    a model rendering module configured to render the virtual object models located inside the frustum in order of decreasing distance from the camera position, far to near, to display the virtual reality scene.
  7. The apparatus according to claim 6, wherein the model transformation module comprises:
    a model transformation unit configured to apply the model transformation to each model's coordinate vector in the local coordinate system and the model matrix, obtaining the coordinate vector in the world coordinate system; and
    a view transformation unit configured to apply the view transformation to each model's coordinate vector in the world coordinate system and the view matrix, obtaining the coordinate vector in the camera coordinate system.
  8. The apparatus according to claim 6 or 7, wherein the view determining module comprises:
    a creating unit configured to create the view frustum of the virtual reality scene and obtain the frustum's projection matrix;
    a projection transformation unit configured to apply the projection transformation to each model's coordinate vector in the camera coordinate system and the projection matrix, obtaining the model's clip coordinate vector; and
    a model determining unit configured to obtain, according to the clip coordinate vector, the virtual object models located inside the frustum;
    and the model rendering module comprises:
    a sequence determining unit configured to obtain, according to the clip coordinate vectors, the far-to-near order of the in-frustum models' distances from the camera position; and
    a model rendering unit configured to render the in-frustum models in order of decreasing distance from the camera position, far to near, to display the virtual reality scene.
  9. The apparatus according to claim 8, wherein the model transformation unit is specifically configured to:
    express the rotation, translation and scaling of each virtual object model in the world coordinate system as a model matrix in the world coordinate system; and apply the following model transformation formula to each model's coordinate vector in the local coordinate system and the model matrix:
    $$\begin{bmatrix} X_{world} \\ Y_{world} \\ Z_{world} \\ W_{world} \end{bmatrix} = M_{model}.transport() \times \begin{bmatrix} X_{obj} \\ Y_{obj} \\ Z_{obj} \\ W_{obj} \end{bmatrix}$$
    where $(X_{obj}, Y_{obj}, Z_{obj}, W_{obj})^{T}$ is the model's coordinate vector in the local coordinate system, $(X_{world}, Y_{world}, Z_{world}, W_{world})^{T}$ is its coordinate vector in the world coordinate system, Mmodel.transport() denotes the transpose of the model matrix, $W_{obj}$ is the model's homogeneous coordinate in the local coordinate system and $W_{world}$ its homogeneous coordinate in the world coordinate system;
    and the view transformation unit is specifically configured to:
    obtain the view matrix from the camera position, the camera orientation vector and the camera up vector; and apply the following view transformation formula to each model's coordinate vector in the world coordinate system and the view matrix:
    $$\begin{bmatrix} X_{eye} \\ Y_{eye} \\ Z_{eye} \\ W_{eye} \end{bmatrix} = ViewMatrix.transport() \times \begin{bmatrix} X_{world} \\ Y_{world} \\ Z_{world} \\ W_{world} \end{bmatrix}$$
    where $(X_{eye}, Y_{eye}, Z_{eye}, W_{eye})^{T}$ is the model's coordinate vector in the camera coordinate system, $W_{eye}$ is the model's homogeneous coordinate in the camera coordinate system, and ViewMatrix.transport() denotes the transpose of the view matrix.
  10. The apparatus according to claim 8, wherein the projection transformation unit is specifically configured to:
    apply the following projection transformation formula to each model's coordinate vector in the camera coordinate system and the projection matrix, obtaining the model's clip coordinate vector:
    $$\begin{bmatrix} X_{clip} \\ Y_{clip} \\ Z_{clip} \\ W_{clip} \end{bmatrix} = ProjectionMatrix.transport() \times \begin{bmatrix} X_{eye} \\ Y_{eye} \\ Z_{eye} \\ W_{eye} \end{bmatrix}$$
    where $(X_{clip}, Y_{clip}, Z_{clip}, W_{clip})^{T}$ is the model's clip coordinate vector, ProjectionMatrix.transport() denotes the transpose of the projection matrix, and $W_{clip}$ is the homogeneous coordinate in the clip coordinate vector;
    the model determining unit is specifically configured to:
    determine, according to the homogeneous coordinate in the clip coordinate vector, that the models whose homogeneous coordinate is non-zero are located inside the frustum;
    the sequence determining unit is specifically configured to:
    arrange the in-frustum models in descending order of homogeneous coordinate value according to the homogeneous coordinates in the clip coordinate vectors, obtaining the far-to-near order of their distances from the camera position; and
    the model rendering unit is specifically configured to:
    render the in-frustum models in descending order of homogeneous coordinate value to display the virtual reality scene.
  11. A model rendering device, comprising a memory and a processor, wherein
    the memory is configured to store one or more instructions for the processor to invoke and execute;
    the processor is configured to acquire a virtual object model of each virtual object created for a virtual reality scene;
    to convert the coordinate vector of each virtual object model in its local coordinate system into a coordinate vector in the camera coordinate system;
    to create a view frustum of the virtual reality scene and, according to the coordinate vectors of the virtual object models in the camera coordinate system and the frustum, obtain the virtual object models that will be located inside the frustum;
    and to render the virtual object models located inside the frustum in order of decreasing distance from the camera position, far to near, to display the virtual reality scene.
PCT/CN2016/088716 2015-12-01 2016-07-05 Model rendering method and device WO2017092307A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/247,509 US20170154469A1 (en) 2015-12-01 2016-08-25 Method and Device for Model Rendering

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510870852.0 2015-12-01
CN201510870852.0A CN105894566A (zh) 2015-12-01 Model rendering method and device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/247,509 Continuation US20170154469A1 (en) 2015-12-01 2016-08-25 Method and Device for Model Rendering

Publications (1)

Publication Number Publication Date
WO2017092307A1 true WO2017092307A1 (zh) 2017-06-08

Family

ID=57002586

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/088716 WO2017092307A1 (zh) 2015-12-01 2017-06-08 Model rendering method and device

Country Status (2)

Country Link
CN (1) CN105894566A (zh)
WO (1) WO2017092307A1 (zh)


Also Published As

Publication number Publication date
CN105894566A (zh) 2016-08-24


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 16869620; Country of ref document: EP; Kind code of ref document: A1)
NENP: Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 16869620; Country of ref document: EP; Kind code of ref document: A1)