WO2015196791A1 - Binocular three-dimensional graphic rendering method and related system - Google Patents

Binocular three-dimensional graphic rendering method and related system Download PDF

Info

Publication number
WO2015196791A1
Authority
WO
WIPO (PCT)
Prior art keywords
rendering
stereoscopic
plane
view
view frame
Prior art date
Application number
PCT/CN2015/070601
Other languages
French (fr)
Chinese (zh)
Inventor
王文敏
张建龙
王荣刚
董胜富
王振宇
李英
高文
Original Assignee
北京大学深圳研究生院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京大学深圳研究生院 filed Critical 北京大学深圳研究生院
Publication of WO2015196791A1 publication Critical patent/WO2015196791A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/04 - Texture mapping

Definitions

  • the present invention relates to the field of stereoscopic vision processing technologies, and in particular, to a binocular three-dimensional (Stereoscopic 3D) graphics rendering method and related system.
  • the real world is three-dimensional; when the human eyes observe it, the two eyes are horizontally separated at two different positions, so the images of the objects seen by each eye are different.
  • the images seen by the left and right eyes differ in visual angle and are called the left view and the right view, respectively.
  • the 3D display is designed based on the principle of binocular parallax.
  • the 3D graphics rendering pipeline is responsible for performing a series of necessary steps to convert the 3D scene into a 2D image that can be displayed on the display.
  • the 3D graphics rendering pipeline typically includes the following steps: conversion from a local coordinate system to a world coordinate system; conversion from a world coordinate system to a view coordinate system; projection transformation; and viewport transformation.
  • a popular graphics API (Application Programming Interface), OpenGL (Open Graphic Library), has its own rendering pipeline.
  • OpenGL can be used for monocular rendering; it is a cross-platform, cross-programming-language API suitable for rendering stereoscopic 3D graphics on traditional 2D displays. OpenGL is usually supported on GPU hardware platforms, and the GPU rendering pipeline is a hardware-accelerated, efficient process for converting 3D information into 2D images. OpenGL also provides an application programming interface for binocular 3D rendering, but it requires corresponding support in the GPU hardware; otherwise OpenGL cannot present binocular 3D effects on 3D displays.
  • the stereoscopic effect mainly includes two types: into-the-screen and out-of-the-screen.
  • into-the-screen means that the object appears to be behind the screen, while out-of-the-screen means that the object appears to be in front of the screen.
  • for example, when rendering a flame-jet effect, the out-of-screen effect can give the viewer the sensation that the flames reach out toward them.
  • in the projection transformation stage of conventional GPU rendering, the viewpoint is at the origin and looks along the -z direction, forming a pyramid-shaped frustum: a cone truncated by two mutually parallel planes, one far and one near (called the far plane and the near plane).
  • any primitives outside the frustum are clipped; the primitives left inside undergo a perspective transformation and are projected onto the near plane, and the pseudo-depth obtained from the perspective transformation is used to judge whether a pixel is visible. Simply using viewpoint displacement and depth information projects only the primitives behind the near plane onto the near plane, which can achieve only the into-the-screen stereo effect.
  • the present invention provides a binocular three-dimensional graphics rendering method including a projection transformation step, the projection transformation step comprising: adding a middle plane between the near plane and the far plane as the projection plane, and projecting the primitives between the near plane and the far plane onto the middle plane.
  • the present invention provides a stereoscopic image reproduction method, including:
  • a creating step: creating at least two view frame buffers for respectively storing image data of different viewpoints;
  • a rendering step: receiving data containing three-dimensional graphics for at least two viewpoints and rendering the data of each viewpoint, the rendering comprising using the binocular three-dimensional graphics rendering method described above and storing the rendering result into the corresponding view frame buffer;
  • a synthesizing step: synthesizing the rendering results in the at least two view frame buffers to obtain a stereoscopic frame, and outputting the stereoscopic frame.
  • the present invention provides a stereoscopic image reproduction system, including:
  • a rendering module, configured to receive data containing three-dimensional graphics for at least two viewpoints and render the data of each viewpoint, the rendering comprising using the binocular three-dimensional graphics rendering method described above and storing the rendering result into the corresponding view frame buffer;
  • a synthesizing module, configured to synthesize the rendering results in the at least two view frame buffers to obtain a stereoscopic frame, and output the stereoscopic frame.
  • the present invention provides a binocular three-dimensional graphics rendering and display system, comprising:
  • a storage device for saving a data file containing three-dimensional graphics;
  • a processor configured to parse and process the data file in the storage device;
  • processor memory for providing at least two view frame buffers that respectively store data of different viewpoints;
  • a graphics processor configured to perform three-dimensional graphics rendering on the data file processed by the processor, the rendering including generating view frames of different viewpoints by using the binocular three-dimensional graphics rendering method described above;
  • the processor memory is further configured to store the view frames of different viewpoints generated by the graphics processor;
  • the processor is further configured to synthesize the view frames of the different viewpoints to obtain a stereoscopic frame;
  • a three-dimensional display for displaying the stereoscopic frame.
  • in the binocular three-dimensional graphics rendering method and related system according to the present invention, because the primitives between the near plane and the far plane are projected onto the added middle plane, the primitives between the near plane and the middle plane have an out-of-screen stereo effect, while the primitives between the middle plane and the far plane have an into-the-screen stereo effect; thus, the "out-of-screen" and "into-the-screen" effects can be rendered when the existing rendering pipeline is used in a 3D display device, without requiring special hardware.
  • FIG. 1 is a three-dimensional schematic diagram of the projection transformation stage extension in a GPU rendering pipeline;
  • FIG. 2 is a schematic diagram of the principle of binocular parallax;
  • FIG. 3 is a schematic structural diagram of a binocular three-dimensional graphics rendering and display system according to an embodiment of the present invention;
  • FIG. 4 is the graphics processing pipeline of a GPU in an embodiment of the present invention;
  • FIG. 5 is a schematic flow chart of a stereoscopic image reproduction method according to an embodiment of the present invention.
  • the left-view frame refers to the two-dimensional view presented to the viewer's left eye in three-dimensional image display; similarly, the right-view frame refers to the two-dimensional view presented to the viewer's right eye.
  • a stereoscopic frame refers to a 3D view frame obtained by combining a rendered left view frame and a right view frame according to the type of the 3D display.
  • the OpenGL rendering pipeline is known to convert 3D information into 2D images in real time and efficiently.
  • however, the OpenGL rendering pipeline used by existing GPUs cannot handle both the into-screen and out-of-screen effects when rendering, nor can it realize 3D rendering based on binocular parallax without GPU hardware support.
  • the binocular 3D graphics rendering method in this embodiment adds a middle plane, as shown in FIG. 1; the middle plane lies between the near plane and the far plane and is treated as the projection plane, and the primitives between the near plane and the far plane are projected onto the middle plane, so the primitives between the near plane and the middle plane have an out-of-screen stereo effect, and the primitives between the middle plane and the far plane have an into-the-screen stereo effect.
  • to ensure that only the primitives between the near plane and the far plane are kept in the clipping stage, the pseudo-depth range must remain unchanged, i.e. within [-1, 1].
  • the correction is related to the displacement; denoting the displacement by s, the correction matrix M3 is as follows:
  • the matrix of the projection transformation stage can be represented by the matrix MPT as follows:
  • the buffers for storing the left-eye and right-eye views are the left view frame and the right view frame, respectively, and the positions of objects in the left and right view frames are related to the depth of field.
  • the present embodiment provides a binocular three-dimensional graphics rendering and display system based on the binocular parallax principle and the GPU rendering pipeline.
  • FIG. 3 is a schematic structural diagram of the system, which includes five modules:
  • the external storage device 200 is configured to store scene data, such as 3D mesh data, image data, configuration data, and the like;
  • a processor (CPU) 201 is used for file parsing, scene data processing, and the synthesizing operation of stereoscopic frames;
  • the GPU 202 implements the main components of the graphics rendering pipeline to generate the left and right views;
  • the processor memory 204 stores the left and right view frame buffers and program data;
  • the 3D display 203 is used for displaying the stereoscopic frames.
  • the CPU parses the scene data from the file in the external memory, stores it in the processor memory, and selectively sends the scene data to the GPU hardware, by the CPU or through DMA (Direct Memory Access), according to the rendering commands.
  • the 3D rendering of the scene is done through the GPU's rendering pipeline.
  • the GPU needs to perform left-view and right-view frame rendering on the scene data; after each rendering pass, the data in the frame buffer needs to be transferred from the GPU to the processor memory.
  • the left and right view frames in the processor memory are then combined into a binocular 3D view frame, the 3D view frame is transmitted to the frame buffer of the GPU, and it is finally displayed on the 3D display.
  • the vertex data 300 stored in memory is expressed in what are called object coordinates.
  • the object coordinates are transformed into viewpoint coordinates through the model-view matrix 301.
  • the origin of the viewpoint coordinates is the position of the camera.
  • the viewpoint coordinates then pass through the transformation matrix 302 stage, which includes the three steps of perspective, displacement, and scaling transformation, transforming the coordinates into a cube within [-1, 1].
  • the coordinates obtained at this point are called the clipping coordinates, and the z component of these coordinates is called the pseudo-depth.
  • flat shading or smooth shading is also performed; if lighting and texture are added, the calculation of the pixel values between the vertices is also implemented.
  • the GPU provides a dedicated data channel.
  • the 3D information can thus be converted into a 2D image in real time and efficiently, and stored in a frame buffer in the GPU for display on the screen.
  • the projection transformation process in the rendering of this embodiment may adopt the method of Embodiment 1; other processes involved in the rendering, such as the model-view transformation, and the subsequent synthesis, display, and the like may be implemented with reference to commonly used related technologies and are not detailed herein.
  • a stereoscopic image reproduction method including:
  • a creating step: creating at least two view frame buffers for respectively storing image data of different viewpoints;
  • a rendering step: receiving data containing three-dimensional graphics for at least two viewpoints and rendering the data of each viewpoint, the rendering using the binocular three-dimensional graphics rendering method of Embodiment 1 and storing the rendering result in the corresponding view frame buffer;
  • a synthesizing step: synthesizing the rendering results in the at least two view frame buffers to obtain a stereoscopic frame, and outputting the stereoscopic frame.
  • the stereoscopic image reproduction method of this embodiment first reads the attribute information of the 3D display, including the length, width, resolution, type (such as side-by-side, etc.), and other information; initializes the left-viewpoint and right-viewpoint buffers in the processor memory and clears both buffers; sets the projection matrix and model-view matrix of the left viewpoint in the GPU rendering pipeline; renders the entire scene and stores the result in the left buffer; sets the projection matrix and model-view matrix of the right viewpoint in the GPU rendering pipeline; renders the entire scene and stores the rendering result in the right buffer; according to the 3D display type, formats or synthesizes the left and right frames and copies them to the frame buffer; finally, the binocular 3D view frame is presented on the 3D display.
  • the projection transformation process in the rendering of this embodiment may adopt the method of Embodiment 1; other processes involved in the rendering, such as the model-view transformation, and the subsequent synthesis, display, and the like may be implemented with reference to commonly used related technologies and are not detailed herein.
  • the present invention further provides a stereoscopic image reproduction system, including:
  • a rendering module, configured to receive the data containing the three-dimensional graphics of the at least two viewpoints and render the data of each viewpoint, where the rendering uses the binocular three-dimensional graphics rendering method of the foregoing embodiment and stores the rendering result in the corresponding view frame buffer;
  • a synthesizing module, configured to synthesize the rendering results in the at least two view frame buffers to obtain a stereoscopic frame, and output the stereoscopic frame.
  • the present invention relates to a binocular 3D graphics rendering method and system based on a GPU rendering pipeline and the binocular parallax principle; it provides an OpenGL-compatible binocular 3D rendering method and system capable of showing stereoscopic 3D effects on shutter, polarized, naked-eye, and similar 3D displays, and it is compatible with the special-effects rendering algorithms in traditional graphics, such as particle systems, texture shading, and shadows.
  • the binocular 3D graphics obtained by the method can present a stereoscopic 3D world with depth of field and layering, just as 3D video can.
  • the invention presents a stereoscopic 3D effect through the 3D display, and solves the problem of how to render the stereoscopic 3D effect when the GPU hardware does not support the OpenGL binocular 3D rendering API.
  • the method is based on the existing GPU rendering pipeline, fully utilizing the hardware-acceleration characteristics of the pipeline to improve processing efficiency; it is compatible with most OpenGL application programming interfaces and can render stereoscopic 3D scenes in real time using the existing OpenGL application programming interface.
  • based on the binocular parallax principle, different left and right view frames are generated according to the parallax of the left and right eyes and the depth of the object, and finally a binocular 3D view frame is synthesized according to the type of the 3D display.
  • the binocular 3D rendering method and system provided by the present invention require two view frames, left and right (multi-view situations require multiple view frames); two view frame buffers can be created in the processor memory, one for storing the left view and the other for storing the right view, and the frames are then synthesized according to the stereoscopic 3D view frame format of the 3D display. The method is also compatible with most of the special-effects rendering algorithms in traditional graphics.
  • the model-view matrix and the projection matrix are adjusted for each rendering pass, and the results of the two passes are saved into the left and right view frame buffers respectively; that is, the data in the GPU frame buffer is copied to the view frame buffers in the processor memory.
  • the left and right view frames are combined into a binocular 3D view frame according to the type of the 3D display; finally, the 3D view frame is copied to the frame buffer on the GPU and presented on the 3D display.
  • the rendering process involved includes: setting the model-view matrix and projection matrix of the left viewpoint, rendering the scene data, and copying the data in the frame buffer into the left view frame buffer of the processor memory; then setting the model-view matrix and projection matrix of the right viewpoint, rendering the scene data, and copying the data in the frame buffer into the right view frame buffer of the processor memory;
  • according to the type of the 3D display, the two frames of data are combined into a binocular 3D view frame, which is sent to the frame buffer on the GPU and finally presented on the 3D display.
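The two-pass flow described in the bullets above can be sketched as follows. NumPy stands in for the GPU stages, and the frame size, the dummy renderer, and the side-by-side packing are illustrative assumptions, not the patent's literal implementation:

```python
import numpy as np

WIDTH, HEIGHT = 8, 4  # illustrative frame size

def render_view(viewpoint_shift):
    """Stand-in for one GPU rendering pass; returns one RGB view frame."""
    frame = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)
    frame[..., 0] = 255 if viewpoint_shift < 0 else 0  # dummy scene content
    return frame

# creating step: one view frame buffer per viewpoint
left_buf = render_view(-0.5)   # viewpoint displaced by -e/2
right_buf = render_view(+0.5)  # viewpoint displaced by +e/2

# synthesizing step: pack according to the 3D display type (side-by-side here)
stereo_frame = np.concatenate([left_buf, right_buf], axis=1)
assert stereo_frame.shape == (HEIGHT, 2 * WIDTH, 3)
```

Other display types would only change the final packing step (e.g. interleaved rows instead of horizontal concatenation); the two rendering passes stay the same.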

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a binocular three-dimensional graphic rendering method and a related system. The method comprises a projection transformation step, which comprises: adding a middle plane between a near plane and a far plane as the projection plane, and projecting the primitives between the near plane and the far plane onto the middle plane. Because the primitives between the near plane and the far plane are projected onto the added middle plane, the primitives between the near plane and the middle plane achieve an out-of-screen stereo effect, and the primitives between the middle plane and the far plane achieve an into-the-screen stereo effect. Therefore, no special hardware is required to render the "out of the screen" and "into the screen" effects when the existing rendering pipelines are used in a 3D display device.

Description

Binocular 3D graphics rendering method and related system

Technical field
The present invention relates to the field of stereoscopic vision processing technologies, and in particular to a binocular three-dimensional (Stereoscopic 3D) graphics rendering method and related system.
Background art
It is well known that the real world is three-dimensional. When the human eyes observe the three-dimensional world, the two eyes are horizontally separated at two different positions, so the images of the objects they see are different. The images seen by the left and right eyes differ in visual angle and are called the left view and the right view, respectively. Because of this parallax, through the convergence performed by the human brain, a person can perceive a stereoscopic 3D world with depth of field and layering; this is the principle of binocular parallax. According to this principle, if the two eyes can each be shown a view from a different visual angle, a person can perceive a stereoscopic 3D view with depth of field and layering.
The 3D display is designed based on the principle of binocular parallax. To produce real-time stereoscopic 3D images, the images for the left and right eyes must be generated in real time so that the viewer continuously perceives the stereoscopic effect. This process usually requires modifying the rendering pipeline. The 3D graphics rendering pipeline is responsible for performing the series of steps necessary to convert a 3D scene into a 2D image that can be shown on a display. It typically includes the following steps: conversion from the local coordinate system to the world coordinate system; conversion from the world coordinate system to the view coordinate system; projection transformation; and viewport transformation. A currently popular graphics API (Application Programming Interface), OpenGL (Open Graphic Library), has its own rendering pipeline.
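As a minimal sketch of the coordinate chain just listed, the following applies placeholder 4x4 matrices in homogeneous coordinates in the order the text gives; the identity matrices are stand-ins for real model, view, and projection transforms, not values from the patent:

```python
import numpy as np

def transform(matrix, point3):
    """Apply a 4x4 homogeneous transform to a 3D point and dehomogenize."""
    p = matrix @ np.append(point3, 1.0)
    return p[:3] / p[3]

local_to_world = np.eye(4)   # model transform (placeholder)
world_to_view  = np.eye(4)   # camera/view transform (placeholder)
projection     = np.eye(4)   # projection transform (placeholder)

vertex_local = np.array([1.0, 2.0, -3.0])
v_world = transform(local_to_world, vertex_local)
v_view  = transform(world_to_view, v_world)
v_clip  = transform(projection, v_view)
# the viewport transform then maps the clip-space result to pixel coordinates
```

With real matrices, only the three `np.eye(4)` placeholders change; the chaining order stays as above.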
OpenGL can be used for monocular rendering; it is a cross-platform, cross-programming-language API suitable for rendering stereoscopic 3D graphics on traditional 2D displays. OpenGL is usually supported on GPU hardware platforms, and the GPU rendering pipeline is a hardware-accelerated, efficient process for converting 3D information into 2D images. OpenGL also provides an application programming interface for binocular 3D rendering, but it requires corresponding support in the GPU hardware; otherwise OpenGL cannot present binocular 3D effects on 3D displays.
In addition, based on the existing GPU (Graphics Processing Unit) rendering pipeline, the stereoscopic effect mainly includes two types: into-the-screen and out-of-the-screen. Into-the-screen means that the object appears to be behind the screen; out-of-the-screen means that the object appears to be in front of the screen. For example, when rendering a flame-jet effect, the out-of-screen effect can give the viewer the sensation that the flames reach out toward them. In the projection transformation stage of conventional GPU rendering, for example rendering with OpenGL, the viewpoint is at the origin and looks along the -z direction, forming a pyramid-shaped frustum: a cone truncated by two mutually parallel planes, one far and one near (called the far plane and the near plane). Any primitives outside the frustum are clipped; the primitives left inside undergo a perspective transformation and are projected onto the near plane, and the pseudo-depth obtained from the perspective transformation serves as the basis for judging whether a pixel is visible. Simply using viewpoint displacement and depth information projects only the primitives behind the near plane onto the near plane, which can achieve only the into-the-screen stereo effect.
Summary of the invention
According to a first aspect, the present invention provides a binocular three-dimensional graphics rendering method including a projection transformation step, the projection transformation step comprising: adding a middle plane between the near plane and the far plane as the projection plane, and projecting the primitives between the near plane and the far plane onto the middle plane.
According to a second aspect, the present invention provides a stereoscopic image reproduction method, including:
a creating step: creating at least two view frame buffers for respectively storing image data of different viewpoints;
a rendering step: receiving data containing three-dimensional graphics for at least two viewpoints and rendering the data of each viewpoint, the rendering comprising using the binocular three-dimensional graphics rendering method described above and storing the rendering result into the corresponding view frame buffer;
a synthesizing step: synthesizing the rendering results in the at least two view frame buffers to obtain a stereoscopic frame, and outputting the stereoscopic frame.
According to a third aspect, the present invention provides a stereoscopic image reproduction system, including:
a creating module, configured to create at least two view frame buffers for respectively storing image data of different viewpoints;
a rendering module, configured to receive data containing three-dimensional graphics for at least two viewpoints and render the data of each viewpoint, the rendering comprising using the binocular three-dimensional graphics rendering method described above and storing the rendering result into the corresponding view frame buffer;
a synthesizing module, configured to synthesize the rendering results in the at least two view frame buffers to obtain a stereoscopic frame, and output the stereoscopic frame.
According to a fourth aspect, the present invention provides a binocular three-dimensional graphics rendering and display system, comprising:
a storage device, for saving a data file containing three-dimensional graphics;
a processor, configured to parse and process the data file in the storage device;
processor memory, for providing at least two view frame buffers that respectively store data of different viewpoints;
a graphics processor, configured to perform three-dimensional graphics rendering on the data file processed by the processor, the rendering including generating view frames of different viewpoints by using the binocular three-dimensional graphics rendering method described above;
the processor memory is further configured to store the view frames of the different viewpoints generated by the graphics processor;
the processor is further configured to synthesize the view frames of the different viewpoints to obtain a stereoscopic frame; and
a three-dimensional display, for displaying the stereoscopic frame.
In the binocular three-dimensional graphics rendering method and related system according to the present invention, because the primitives between the near plane and the far plane are projected onto the added middle plane, the primitives between the near plane and the middle plane have an out-of-screen stereo effect, while the primitives between the middle plane and the far plane have an into-the-screen stereo effect. Thus, the "out-of-screen" and "into-the-screen" effects can be rendered when the existing rendering pipeline is used in a 3D display device, without requiring special hardware.
Brief description of the drawings
FIG. 1 is a three-dimensional schematic diagram of the projection transformation stage extension in a GPU rendering pipeline;
FIG. 2 is a schematic diagram of the principle of binocular parallax;
FIG. 3 is a schematic structural diagram of a binocular three-dimensional graphics rendering and display system according to an embodiment of the present invention;
FIG. 4 is the graphics processing pipeline of a GPU in an embodiment of the present invention;
FIG. 5 is a schematic flow chart of a stereoscopic image reproduction method according to an embodiment of the present invention.
Detailed description
The present invention is further described in detail below through specific embodiments with reference to the accompanying drawings.
First, some terms and concepts used below are explained. The left-view frame refers to the two-dimensional view presented to the viewer's left eye in three-dimensional image display; similarly, the right-view frame refers to the two-dimensional view presented to the viewer's right eye. A stereoscopic frame refers to the 3D view frame obtained by synthesizing the rendered left-view frame and right-view frame according to the type of the 3D display.
The OpenGL rendering pipeline is known to convert three-dimensional information into two-dimensional images in real time and efficiently. However, as described in the background above, the OpenGL rendering pipeline used by existing GPUs cannot handle both the into-screen and out-of-screen effects when rendering, nor can it realize 3D rendering based on binocular parallax without GPU hardware support.
Embodiment 1:
To enable the existing GPU rendering pipeline to render both the into-screen and out-of-screen effects without requiring corresponding support in the GPU hardware, the binocular three-dimensional graphics rendering method of this embodiment adds a middle plane, as shown in FIG. 1. The middle plane lies between the near plane and the far plane and is treated as the projection plane; the primitives between the near plane and the far plane are projected onto the middle plane, so the primitives between the near plane and the middle plane have an out-of-screen stereo effect, and the primitives between the middle plane and the far plane have an into-the-screen stereo effect. To ensure that only the primitives between the near plane and the far plane are kept in the clipping stage, the pseudo-depth range must remain unchanged, i.e. within [-1, 1].
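The role of the middle plane can be illustrated with a small helper: points between the near plane and the middle plane should read as out-of-screen, and points between the middle plane and the far plane as into-the-screen (distances measured along -z). The function is an illustrative sketch, not code from the patent:

```python
def screen_effect(depth, near, middle, far):
    """Classify a primitive's stereo effect by its distance along -z.

    near < middle < far are the distances of the three planes from the origin.
    """
    if not (near <= depth <= far):
        return "clipped"          # outside the frustum, discarded
    if depth < middle:
        return "out-of-screen"    # in front of the projection (middle) plane
    return "into-the-screen"      # behind the projection (middle) plane

assert screen_effect(1.5, near=1.0, middle=2.0, far=10.0) == "out-of-screen"
assert screen_effect(5.0, near=1.0, middle=2.0, far=10.0) == "into-the-screen"
assert screen_effect(0.5, near=1.0, middle=2.0, far=10.0) == "clipped"
```

This is exactly the contrast with the conventional pipeline: with the near plane as the projection plane, every visible point falls behind it and only the into-the-screen case can occur.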
Let the distances from the origin to the near, middle and far planes be N, M and F, respectively, and let the pseudo-depth between the near plane and the far plane span [-1, 1]. The perspective transformation matrix M1 is then as follows:
Figure PCTCN2015070601-appb-000001
After the perspective transformation, the result must also be translated and scaled into the cube spanning [-1, 1]. Let the top, bottom, left and right coordinates of the middle plane be top, bottom, left and right, respectively. The matrix M2 for the translation and scaling operations is as follows:
Figure PCTCN2015070601-appb-000002
Let θ denote the size of the observer's viewing angle. With the middle plane as the projection plane, top, bottom, left and right can be computed by the following equations:
Figure PCTCN2015070601-appb-000003
bottom = -top
Figure PCTCN2015070601-appb-000004
left = -right
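The formula images for top and right (appb-000003 and appb-000004 above) are not reproduced in this text. Under the usual symmetric-frustum convention, with the middle plane at distance M serving as the projection plane and a denoting the width-to-height (aspect) ratio, the bounds would plausibly be as follows; this is a hedged reconstruction, not the patent's own figures:

```latex
\text{top} = M\,\tan\frac{\theta}{2}, \qquad
\text{bottom} = -\text{top}, \qquad
\text{right} = a \, M\,\tan\frac{\theta}{2}, \qquad
\text{left} = -\text{right}
```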
Because the model-view transformation stage shifts the viewpoint horizontally from the origin by e/2 and -e/2, a corresponding correction must be applied to the projection transformation. The correction depends on the shift; denoting the shift by s, the correction matrix M3 is as follows:
Figure PCTCN2015070601-appb-000005
In summary, the matrix of the projection transformation stage can be expressed as the matrix MPT as follows:
Figure PCTCN2015070601-appb-000006
In the left-eye projection matrix MPT, s = -e/2; in the right-eye projection matrix MPT, s = e/2.
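The matrices M1, M2 and M3 above survive only as figure images, so the exact MPT cannot be reproduced here. As an illustration of the properties the text claims for it — pseudo-depth fixed to [-1, 1] between the near and far planes, zero parallax on the middle plane, and a horizontal correction s = -e/2 or +e/2 per eye — the following is a hedged sketch of one off-axis projection matrix with those properties; the names `stereo_projection` and `project` are ours, not the patent's:

```python
import math

def stereo_projection(theta, aspect, N, M, F, s):
    """Off-axis projection with the middle plane (distance M) as the
    zero-parallax projection plane.

    theta: vertical viewing angle in radians; s: horizontal viewpoint shift
    (s = -e/2 for the left eye, s = +e/2 for the right eye). A hedged
    sketch of one matrix with the properties the text describes, not the
    patent's exact MPT (whose figure images are not reproduced here).
    """
    half_h = M * math.tan(theta / 2.0)        # half-height of the middle plane
    half_w = aspect * half_h                  # half-width of the middle plane
    # Frustum bounds at the near plane, shifted opposite to the eye offset s
    top = half_h * N / M
    bottom = -top
    right = (half_w - s) * N / M
    left = (-half_w - s) * N / M
    return [
        [2*N/(right-left), 0.0, (right+left)/(right-left), 0.0],
        [0.0, 2*N/(top-bottom), (top+bottom)/(top-bottom), 0.0],
        [0.0, 0.0, -(F+N)/(F-N), -2.0*F*N/(F-N)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def project(m, p):
    """Apply a 4x4 matrix to homogeneous eye coordinates, then divide by w."""
    x, y, z, w = (sum(m[i][j] * p[j] for j in range(4)) for i in range(4))
    return (x / w, y / w, z / w)
```

With such a matrix, a point on the middle plane projects to the same normalized x in both eyes (zero parallax); points beyond the middle plane acquire uncrossed disparity (behind the screen) and points in front of it crossed disparity (pop-out); and eye-space depths -N and -F still map to pseudo-depths -1 and +1, so clipping behaves exactly as in the ordinary pipeline.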
Embodiment 2:
FIG. 2 shows views whose angles differ between the left and right eyes owing to binocular parallax. The buffers storing the left-eye and right-eye views are the left-view frame and the right-view frame, respectively, and the apparent position of objects in the two frames depends on the depth of field.
Based on the binocular parallax principle and the GPU rendering pipeline, this embodiment provides a binocular three-dimensional graphics rendering and display system. FIG. 3 is a schematic structural diagram of the system, which comprises five modules:
(1) External storage device 200: stores scene data, such as 3D mesh data, image data and configuration data;
(2) Processor (CPU) 201: parses files, processes scene data, and composes stereoscopic frames;
(3) GPU 202: the main component implementing the graphics rendering pipeline; generates the left and right views;
(4) Processor memory 204: stores the left-view and right-view frame buffers as well as program data;
(5) 3D display 203: displays stereoscopic view frames.
The CPU parses the scene data from files on the external storage device, stores it in processor memory and, according to the rendering commands, selectively sends it to the GPU either through the CPU or via DMA (Direct Memory Access); the 3D rendering of the scene is then completed by the GPU's rendering pipeline. The GPU must render the scene data separately for the left-view and right-view frames, and after each rendering pass the data in the frame buffer must be transferred from the GPU to processor memory. Once both view frames have been rendered, the left and right view frames in processor memory are composed into a binocular 3D view frame according to the type of 3D display; the 3D view frame is then transferred to the GPU's frame buffer and finally shown on the 3D display.
FIG. 4 shows the GPU rendering pipeline. The vertex data 300 stored in memory are called object coordinates. They first pass through the model-view matrix 301 stage, which transforms object coordinates into viewpoint (eye) coordinates whose origin is the camera position. The viewpoint coordinates then pass through the projection matrix 302 stage, which comprises three steps — perspective, translation and scaling — and transforms the coordinate space into the cube spanning [-1, 1]; the resulting coordinates are called clip coordinates, and their z component is called the pseudo-depth. Flat or smooth shading is also performed at the projection matrix stage, and if lighting and textures are added, pixel values between vertices are computed as well. Next comes the clipping 303 stage, which discards primitives outside the [-1, 1] cube; only primitives inside that region are visible to the user. The perspective division 304 operation is applied to the four-component homogeneous coordinates of the visible fragments, yielding three-component normalized device coordinates, which the window transformation 305 then converts into coordinates on the screen. To improve the efficiency of texture processing — for example, converting texture storage formats between memory and the GPU — the GPU provides a dedicated data channel. Through the GPU rendering pipeline, three-dimensional information can be converted into two-dimensional images efficiently and in real time and stored in the GPU's frame buffer for display.
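The last two stages just described, perspective division 304 and window transformation 305, can be sketched as follows. This is a minimal illustration; the mapping of pseudo-depth to the [0, 1] depth range is the common OpenGL default, an assumption rather than something the text specifies:

```python
def clip_to_window(clip, width, height):
    """Perspective division 304 followed by window transformation 305.

    clip: four-component homogeneous clip coordinates (x, y, z, w).
    Returns window-space (x, y) plus a depth value in [0, 1].
    """
    x, y, z, w = clip
    ndc = (x / w, y / w, z / w)          # normalized device coordinates
    win_x = (ndc[0] + 1.0) * 0.5 * width
    win_y = (ndc[1] + 1.0) * 0.5 * height
    depth = (ndc[2] + 1.0) * 0.5         # pseudo-depth [-1,1] -> [0,1]
    return win_x, win_y, depth
```

For example, the clip-space point (0, 0, 0, 2) lies at the center of the view volume and maps to the center of the window at mid depth.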
The projection transformation in the rendering of this embodiment may use the method of Embodiment 1. The other processes involved in rendering, such as the model-view transformation and the subsequent composition and display, may be implemented with commonly used related techniques and are not detailed here.
Embodiment 3:
According to an embodiment of the present invention, a stereoscopic image reproduction method is provided, comprising:
A creation step: creating at least two view frame buffers for respectively storing image data of different viewpoints;
A rendering step: receiving data containing three-dimensional graphics for at least two viewpoints and rendering the data of each viewpoint using the binocular three-dimensional graphics rendering method of Embodiment 1, and storing the rendering result in the corresponding view frame buffer;
A composition step: composing the rendering results in the at least two view frame buffers into a stereoscopic frame, and outputting the stereoscopic frame.
Taking the left and right viewpoints as an example, FIG. 5 shows the stereoscopic image reproduction method of this embodiment. First, the attribute information of the 3D display is read, including its width, height, resolution and type (for example, side-by-side or top-and-bottom); the left-viewpoint and right-viewpoint buffers in processor memory are initialized and both buffers cleared. The projection matrix and model-view matrix of the left viewpoint are then set in the GPU rendering pipeline, the whole scene is rendered, and the result is stored in the left buffer; next, the projection matrix and model-view matrix of the right viewpoint are set, the whole scene is rendered again, and the result is stored in the right buffer. According to the type of 3D display, the left and right view frames are format-converted or composed and copied into the frame buffer; finally, the binocular 3D view frame is presented on the 3D display.
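The composition step above depends on the display type read at the start. A minimal sketch of that step for the two formats named above, side-by-side and top-and-bottom, might look like this; the function name and the image-as-list-of-rows representation are illustrative assumptions, not the patent's data structures:

```python
def compose_stereo(left, right, mode):
    """Compose rendered left/right view frames into one binocular 3D frame.

    left, right: images of equal dimensions, each a list of pixel rows.
    mode: 'side-by-side' or 'top-bottom' -- two common stereoscopic frame
    formats; a real system would select the mode from the 3D display's
    reported type.
    """
    if mode == 'side-by-side':
        # Place the right view to the right of the left view, row by row.
        return [lrow + rrow for lrow, rrow in zip(left, right)]
    if mode == 'top-bottom':
        # Stack the right view below the left view.
        return left + right
    raise ValueError('unsupported 3D display type: %r' % (mode,))
```

The composed frame would then be copied into the GPU frame buffer for presentation, as described above.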
The projection transformation in the rendering of this embodiment may likewise use the method of Embodiment 1. The other processes involved in rendering, such as the model-view transformation and the subsequent composition and display, may be implemented with commonly used related techniques and are not detailed here.
Based on the foregoing method embodiment, the present invention further provides a stereoscopic image reproduction system, comprising:
a creation module, configured to create at least two view frame buffers for respectively storing image data of different viewpoints;
a rendering module, configured to receive data containing three-dimensional graphics for at least two viewpoints and to render the data of each viewpoint using the binocular three-dimensional graphics rendering method of the foregoing embodiment, storing the rendering result in the corresponding view frame buffer;
a composition module, configured to compose the rendering results in the at least two view frame buffers into a stereoscopic frame and to output the stereoscopic frame.
For the implementation of each module, refer to the foregoing embodiments; it is not repeated here.
In summary, the present invention relates to a binocular 3D graphics rendering method and system. Based on the GPU rendering pipeline and the binocular parallax principle, it provides an OpenGL-compatible binocular 3D rendering method and system that can display stereoscopic 3D effects on shutter, polarized, autostereoscopic (glasses-free) and similar 3D displays, while remaining compatible with special-effect rendering algorithms in traditional graphics, such as particle systems, texture shading and shadows. Like 3D video, the binocular 3D graphics obtained by this method can present a stereoscopic 3D world with depth of field and a sense of layering.
The present invention presents stereoscopic 3D effects on a 3D display and solves the problem of how to render such effects when the GPU hardware does not support an OpenGL binocular 3D rendering API. The approach is to build on the existing GPU rendering pipeline, fully exploiting its hardware acceleration to improve processing efficiency, remaining compatible with most OpenGL application programming interfaces, and rendering stereoscopic 3D scenes in real time with the existing OpenGL APIs. Based on the binocular parallax principle, different left and right view frames are generated according to the parallax of the two eyes and the depth of the objects, and finally a binocular 3D view frame is composed according to the type of 3D display.
For GPU hardware platforms that do not support binocular 3D rendering (i.e., that provide no stereoscopic left and right view frame buffers), the binocular 3D rendering method and system of the present invention require left and right view frames (multiple view frames in the multi-viewpoint case). Two view frame buffers can be created in processor memory, one for storing the left view and the other for the right view; the frames are then composed according to the stereoscopic 3D frame format of the 3D display. The method is also compatible with most special-effect rendering algorithms in traditional graphics.
There is only one scene, constructed from vertices, textures and the like; the same scene data can be rendered twice, adjusting the model-view matrix and projection matrix for each pass and saving the two rendering results into the left and right view frame buffers respectively, i.e., copying the data in the GPU frame buffer into the view frame buffers in processor memory. After the image data of the left and right view frames is obtained, the two frames are composed into a binocular 3D view frame according to the type of 3D display; finally, the 3D view frame is copied into the frame buffer on the GPU and presented on the 3D display. The rendering process involved comprises: setting the model-view matrix and projection matrix of the left viewpoint, rendering the scene data, and copying the data in the frame buffer into the left-view frame buffer in processor memory; setting the model-view matrix and projection matrix of the right viewpoint, rendering the scene data, and copying the data in the frame buffer into the right-view frame buffer in processor memory; composing the two frames of data into a binocular 3D view frame according to the type of 3D display and sending it to the frame buffer on the GPU; and finally rendering it on the 3D display.
Those skilled in the art will understand that all or part of the steps of the methods in the above embodiments may be completed by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, which may include a read-only memory, a random access memory, a magnetic disk, an optical disc, and the like.
The invention has been described above with specific examples, which are intended only to aid understanding of the invention and do not limit its scope of application. Those of ordinary skill in the art may, based on the idea of the present invention, make variations on the basis of the specific embodiments described above.

Claims (8)

  1. A binocular three-dimensional graphics rendering method, comprising a projection transformation step, characterized in that a middle plane is added between the near plane and the far plane as the projection plane, and primitives between the near plane and the far plane are projected onto the middle plane.
  2. The binocular three-dimensional graphics rendering method according to claim 1, characterized in that the projection matrix used in the projection transformation step is:
    Figure PCTCN2015070601-appb-100001
    where MPT denotes the projection matrix, M the distance from the middle plane to the origin, N the distance from the near plane to the origin, and F the distance from the far plane to the origin,
    Figure PCTCN2015070601-appb-100002
    bottom = -top,
    Figure PCTCN2015070601-appb-100003
    left = -right, θ denotes the size of the viewing angle, and s denotes the horizontal displacement of the viewpoint from the origin.
  3. A stereoscopic image reproduction method, characterized by comprising:
    a creation step: creating at least two view frame buffers for respectively storing image data of different viewpoints;
    a rendering step: receiving data containing three-dimensional graphics for at least two viewpoints and rendering the data of each viewpoint, the rendering comprising using the binocular three-dimensional graphics rendering method according to claim 1 or 2 and storing the rendering result in the corresponding view frame buffer;
    a composition step: composing the rendering results in the at least two view frame buffers into a stereoscopic frame, and outputting the stereoscopic frame.
  4. The stereoscopic image reproduction method according to claim 3, characterized in that:
    the creation step comprises: creating and initializing a left-view frame buffer and a right-view frame buffer in memory;
    the rendering step comprises: receiving data of a left-view frame containing three-dimensional graphics, rendering the data of the left-view frame, and storing the rendering result in the left-view frame buffer; receiving data of a right-view frame containing three-dimensional graphics, rendering the data of the right-view frame, and storing the rendering result in the right-view frame buffer;
    the composition step comprises: stitching the rendering result in the left-view frame buffer and the rendering result in the right-view frame buffer into a stereoscopic frame, and storing the stereoscopic frame in the frame buffer of the graphics processor.
  5. The stereoscopic image reproduction method according to claim 3, characterized by further comprising a display step of displaying the stereoscopic frame.
  6. A stereoscopic image reproduction system, characterized by comprising:
    a creation module, configured to create at least two view frame buffers for respectively storing image data of different viewpoints;
    a rendering module, configured to receive data containing three-dimensional graphics for at least two viewpoints and to render the data of each viewpoint, the rendering comprising using the binocular three-dimensional graphics rendering method according to claim 1 or 2 and storing the rendering result in the corresponding view frame buffer;
    a composition module, configured to compose the rendering results in the at least two view frame buffers into a stereoscopic frame and to output the stereoscopic frame.
  7. The stereoscopic image reproduction system according to claim 6, characterized by further comprising a display module configured to display the stereoscopic frame.
  8. A binocular three-dimensional graphics rendering and display system, characterized by comprising:
    a storage device, configured to store data files containing three-dimensional graphics;
    a processor, configured to parse the data files in the storage device;
    processor memory, configured to provide at least two view frame buffers for respectively storing data of different viewpoints;
    a graphics processor, configured to perform three-dimensional graphics rendering on the data files processed by the processor, the rendering comprising using the binocular three-dimensional graphics rendering method according to claim 1 or 2 to generate view frames of different viewpoints;
    the processor memory being further configured to store the view frames of different viewpoints generated by the graphics processor;
    the processor being further configured to compose the view frames of the different viewpoints into a stereoscopic frame;
    a three-dimensional display, configured to display the stereoscopic frame.
PCT/CN2015/070601 2014-06-27 2015-01-13 Binocular three-dimensional graphic rendering method and related system WO2015196791A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410302221.4A CN105224288B (en) 2014-06-27 2014-06-27 Binocular three-dimensional method for rendering graph and related system
CN201410302221.4 2014-06-27

Publications (1)

Publication Number Publication Date
WO2015196791A1 true WO2015196791A1 (en) 2015-12-30

Family

ID=54936686

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/070601 WO2015196791A1 (en) 2014-06-27 2015-01-13 Binocular three-dimensional graphic rendering method and related system

Country Status (2)

Country Link
CN (1) CN105224288B (en)
WO (1) WO2015196791A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111354062A (en) * 2020-01-17 2020-06-30 中国人民解放军战略支援部队信息工程大学 Multi-dimensional spatial data rendering method and device
CN111953956A (en) * 2020-08-04 2020-11-17 山东金东数字创意股份有限公司 Naked eye three-dimensional special-shaped image three-dimensional camera generation system and method thereof
CN113538648A (en) * 2021-07-27 2021-10-22 歌尔光学科技有限公司 Image rendering method, device, equipment and computer readable storage medium
CN115170740A (en) * 2022-07-22 2022-10-11 北京字跳网络技术有限公司 Special effect processing method and device, electronic equipment and storage medium

Families Citing this family (7)

Publication number Priority date Publication date Assignee Title
US10360721B2 (en) * 2016-05-26 2019-07-23 Mediatek Inc. Method and apparatus for signaling region of interests
CN106204703A (en) * 2016-06-29 2016-12-07 乐视控股(北京)有限公司 Three-dimensional scene models rendering intent and device
CN106548500A (en) * 2016-09-26 2017-03-29 中国电子科技集团公司第二十九研究所 A kind of two-dimension situation image processing method and device based on GPU
CN107103626A (en) * 2017-02-17 2017-08-29 杭州电子科技大学 A kind of scene reconstruction method based on smart mobile phone
CN107016704A (en) * 2017-03-09 2017-08-04 杭州电子科技大学 A kind of virtual reality implementation method based on augmented reality
CN107330846B (en) * 2017-06-16 2019-07-30 浙江大学 A kind of binocular rendering pipeline process and method based on screen block pair
CN108144292A (en) * 2018-01-30 2018-06-12 河南三阳光电有限公司 Bore hole 3D interactive game making apparatus

Citations (3)

Publication number Priority date Publication date Assignee Title
JP2009075739A (en) * 2007-09-19 2009-04-09 Namco Bandai Games Inc Program, information storage medium and image generation system
CN101477700A (en) * 2009-02-06 2009-07-08 南京师范大学 Real tri-dimension display method oriented to Google Earth and Sketch Up
CN101593357A (en) * 2008-05-28 2009-12-02 中国科学院自动化研究所 A kind of interactive body cutting method based on the three-dimensional planar control

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US6144384A (en) * 1996-02-20 2000-11-07 Yugen Kashia Aloalo International Voxel data processing using attributes thereof

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
JP2009075739A (en) * 2007-09-19 2009-04-09 Namco Bandai Games Inc Program, information storage medium and image generation system
CN101593357A (en) * 2008-05-28 2009-12-02 中国科学院自动化研究所 A kind of interactive body cutting method based on the three-dimensional planar control
CN101477700A (en) * 2009-02-06 2009-07-08 南京师范大学 Real tri-dimension display method oriented to Google Earth and Sketch Up

Cited By (6)

Publication number Priority date Publication date Assignee Title
CN111354062A (en) * 2020-01-17 2020-06-30 中国人民解放军战略支援部队信息工程大学 Multi-dimensional spatial data rendering method and device
CN111953956A (en) * 2020-08-04 2020-11-17 山东金东数字创意股份有限公司 Naked eye three-dimensional special-shaped image three-dimensional camera generation system and method thereof
CN111953956B (en) * 2020-08-04 2022-04-12 山东金东数字创意股份有限公司 Naked eye three-dimensional special-shaped image three-dimensional camera generation system and method thereof
CN113538648A (en) * 2021-07-27 2021-10-22 歌尔光学科技有限公司 Image rendering method, device, equipment and computer readable storage medium
CN113538648B (en) * 2021-07-27 2024-04-30 歌尔科技有限公司 Image rendering method, device, equipment and computer readable storage medium
CN115170740A (en) * 2022-07-22 2022-10-11 北京字跳网络技术有限公司 Special effect processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN105224288A (en) 2016-01-06
CN105224288B (en) 2018-01-23

Similar Documents

Publication Publication Date Title
WO2015196791A1 (en) Binocular three-dimensional graphic rendering method and related system
US10839591B2 (en) Stereoscopic rendering using raymarching and a virtual view broadcaster for such rendering
KR101697184B1 (en) Apparatus and Method for generating mesh, and apparatus and method for processing image
JP4489610B2 (en) Stereoscopic display device and method
US7675513B2 (en) System and method for displaying stereo images
JP2005151534A (en) Pseudo three-dimensional image creation device and method, and pseudo three-dimensional image display system
US8866887B2 (en) Computer graphics video synthesizing device and method, and display device
AU2018249563B2 (en) System, method and software for producing virtual three dimensional images that appear to project forward of or above an electronic display
US11417060B2 (en) Stereoscopic rendering of virtual 3D objects
JP2009163724A (en) Graphics interface, method for rasterizing graphics data and computer readable recording medium
EP3057316B1 (en) Generation of three-dimensional imagery to supplement existing content
US20130321409A1 (en) Method and system for rendering a stereoscopic view
JP4996922B2 (en) 3D visualization
KR101208767B1 (en) Stereoscopic image generation method, device and system using circular projection and recording medium for the same
US10110876B1 (en) System and method for displaying images in 3-D stereo
US10931927B2 (en) Method and system for re-projection for multiple-view displays
KR101567002B1 (en) Computer graphics based stereo floting integral imaging creation system
Harish et al. A view-dependent, polyhedral 3D display
KR101003060B1 (en) Method for producing motion picture frames for stereoscopic video
KR100556830B1 (en) 3D graphical model rendering apparatus and method for displaying stereoscopic image
Jeong et al. 60.2: Efficient Light‐field Rendering using Depth Maps
JP6025519B2 (en) Image processing apparatus and image processing method
WO2023049087A1 (en) Portal view for content items
TW201926256A (en) Building VR environment from movie
Chuchvara Real-time video-plus-depth content creation utilizing time-of-flight sensor-from capture to display

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15811509

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15811509

Country of ref document: EP

Kind code of ref document: A1