WO2015196791A1 - Binocular three-dimensional graphics rendering method and related system - Google Patents

Binocular three-dimensional graphics rendering method and related system

Info

Publication number
WO2015196791A1
WO2015196791A1 (PCT/CN2015/070601)
Authority
WO
WIPO (PCT)
Prior art keywords
rendering
stereoscopic
plane
view
view frame
Prior art date
Application number
PCT/CN2015/070601
Other languages
English (en)
Chinese (zh)
Inventor
王文敏
张建龙
王荣刚
董胜富
王振宇
李英
高文
Original Assignee
Peking University Shenzhen Graduate School (北京大学深圳研究生院)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University Shenzhen Graduate School (北京大学深圳研究生院)
Publication of WO2015196791A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping

Definitions

  • The present invention relates to the field of stereoscopic vision processing technologies, and in particular to a binocular three-dimensional (stereoscopic 3D) graphics rendering method and a related system.
  • The real world is three-dimensional.
  • When the human eyes watch the three-dimensional world, the two eyes are horizontally separated at two different positions, so the images of the objects they see differ.
  • The images seen by the left and right eyes therefore have different visual angles and are called the left view and the right view, respectively.
  • the 3D display is designed based on the principle of binocular parallax.
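The parallax geometry behind this principle can be sketched numerically. The following fragment is purely illustrative; the function name and the eye-separation, screen-distance, and object-distance values are assumptions, not taken from the patent:

```python
def screen_disparity(eye_sep, screen_dist, obj_dist):
    """On-screen horizontal disparity, by similar triangles, for a point
    at obj_dist viewed by two eyes eye_sep apart, with the screen plane
    at screen_dist. Positive (uncrossed) disparity makes the point appear
    behind the screen; negative (crossed) disparity makes it pop out."""
    return eye_sep * (obj_dist - screen_dist) / obj_dist

# An object beyond the screen plane yields positive disparity...
print(screen_disparity(0.065, 0.6, 1.2) > 0)   # True
# ...and an object in front of it yields negative disparity.
print(screen_disparity(0.065, 0.6, 0.4) < 0)   # True
```

The 3D display exploits exactly this sign difference: it delivers the two views separately to the two eyes, and the brain fuses the disparity into depth.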
  • the 3D graphics rendering pipeline is responsible for performing a series of necessary steps to convert the 3D scene into a 2D image that can be displayed on the display.
  • the 3D graphics rendering pipeline typically includes the following steps: conversion from a local coordinate system to a world coordinate system; conversion from a world coordinate system to a view coordinate system; projection transformation; and viewport transformation.
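The first two conversions amount to multiplying each vertex by 4×4 matrices; the sketch below uses hypothetical model and view matrices (the helper names and values are illustrative, not code from the patent):

```python
def mat_vec(m, v):
    # multiply a 4x4 matrix by a 4-component column vector
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def mat_mul(a, b):
    # multiply two 4x4 matrices
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

# Model matrix: local -> world (here: translate by +2 in x)
model = [[1, 0, 0, 2], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
# View matrix: world -> view (here: world origin ends up at z = -5)
view = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, -5], [0, 0, 0, 1]]

v_local = [0, 0, 0, 1]                       # a vertex at the local origin
v_view = mat_vec(mat_mul(view, model), v_local)
print(v_view)   # [2, 0, -5, 1]: in front of the camera, along -z
```

The projection and viewport transformations that follow are further matrix (and divide) stages applied to this result.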
  • OpenGL (Open Graphics Library) is a popular graphics API (Application Programming Interface).
  • OpenGL can be used for monocular rendering. It is a cross-platform, cross-language API suitable for rendering 3D graphics onto traditional 2D displays. OpenGL is usually supported on the GPU hardware platform, and the GPU rendering pipeline is a hardware-accelerated, efficient process for converting 3D information into 2D images. OpenGL also provides an application programming interface for binocular 3D rendering, but it requires corresponding support in the GPU hardware; otherwise OpenGL cannot present binocular 3D effects on 3D displays.
  • The stereoscopic effect mainly includes two types: screen-entry and screen-exit (pop-out).
  • Screen-entry means that the object seen appears to lie behind the screen.
  • Screen-exit means that the object appears to lie in front of the screen.
  • The screen-exit effect gives the viewer the striking sensation of objects coming right at them.
  • The viewpoint is at the origin, observing along the -z direction, which forms a pyramid-shaped view frustum: a cone truncated by two parallel planes, one far and one near (called the far plane and the near plane).
  • Any primitives outside the frustum are clipped, and the primitives left inside the frustum undergo a perspective projection onto the near plane; the pseudo depth obtained from the perspective transformation is used as the basis for judging whether a pixel is visible. If only the viewpoint displacement and depth information are used, only the primitives behind the near plane are projected onto the near plane, which can achieve only the screen-entry stereo effect.
  • In one aspect, the present invention provides a binocular three-dimensional graphics rendering method including a projection transformation step. The projection transformation step comprises: adding a midplane between the near plane and the far plane as the projection plane, and projecting the primitives between the near plane and the far plane onto the midplane.
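The idea can be illustrated with a small similar-triangles sketch. The function names and the pop-out / screen-entry classification rule below are assumptions consistent with the description, not the patent's actual implementation:

```python
def project_to_midplane(p, mid):
    """Project point p = (x, y, z), seen from a camera at the origin looking
    along -z (so z < 0), onto the midplane z = -mid by similar triangles."""
    x, y, z = p
    t = -mid / z                     # scale factor toward the projection plane
    return (x * t, y * t, -mid)

def stereo_effect(z, mid):
    """Primitives nearer than the midplane pop out of the screen;
    primitives beyond it appear behind the screen (screen-entry)."""
    return "pop-out" if -z < mid else "screen-entry"

print(project_to_midplane((1.0, 0.5, -4.0), 2.0))  # (0.5, 0.25, -2.0)
print(stereo_effect(-1.0, 2.0))                    # pop-out
print(stereo_effect(-3.0, 2.0))                    # screen-entry
```

Because the projection plane now sits inside the depth range, points on either side of it acquire disparities of opposite sign, which is what yields the two stereo effects simultaneously.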
  • the present invention provides a stereoscopic image reproduction method, including:
  • a creating step: creating at least two view frame buffers for respectively storing image data of different viewpoints;
  • a rendering step: receiving data of three-dimensional graphics of at least two viewpoints and rendering the data of each viewpoint using the binocular three-dimensional graphics rendering method described above, the rendering including storing the rendering result into the corresponding view frame buffer;
  • a synthesizing step: synthesizing the rendering results in the at least two view frame buffers to obtain a stereoscopic frame, and outputting the stereoscopic frame.
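The three steps above can be sketched as follows; `render_view` is a stand-in for the real per-viewpoint GPU rendering, and row interleaving is shown as just one possible synthesis format (an assumption, since the actual format depends on the 3D display type):

```python
def render_view(scene, viewpoint):
    # Placeholder for the per-viewpoint rendering step; here each "pixel"
    # simply records the viewpoint it was rendered from.
    return [[viewpoint] * 4 for _ in range(4)]   # a 4x4 view frame

# Creating step: one view frame buffer per viewpoint.
buffers = {"left": None, "right": None}

# Rendering step: render each viewpoint into its own buffer.
for vp in buffers:
    buffers[vp] = render_view("scene", vp)

# Synthesizing step: interleave rows into one stereoscopic frame
# (one possible format, e.g. for line-interleaved polarized displays).
stereo = []
for l_row, r_row in zip(buffers["left"], buffers["right"]):
    stereo.append(l_row)
    stereo.append(r_row)

print(len(stereo), len(stereo[0]))   # 8 4
```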
  • the present invention provides a stereoscopic image reproduction system, including:
  • a rendering module configured to receive data of three-dimensional graphics of at least two viewpoints and to render the data of each viewpoint using the binocular three-dimensional graphics rendering method described above, the rendering including storing the rendering result into the corresponding view frame buffer;
  • a synthesizing module configured to synthesize the rendering results in the at least two view frame buffers to obtain a stereoscopic frame, and to output the stereoscopic frame.
  • the present invention provides a binocular three-dimensional graphics rendering and display system, comprising:
  • a storage device for saving a data file containing three-dimensional graphics;
  • a processor configured to parse the data file in the storage device;
  • processor memory for providing at least two view frame buffers for respectively storing data of different viewpoints;
  • a graphics processor configured to perform three-dimensional graphics rendering on the data file parsed by the processor, the rendering including generating view frames of different viewpoints by using the binocular three-dimensional graphics rendering method described above;
  • the processor memory is further configured to store the view frames of different viewpoints generated by the graphics processor;
  • the processor is further configured to synthesize the view frames of the different viewpoints to obtain a stereoscopic frame;
  • a three-dimensional display for displaying the stereoscopic frame.
  • In the binocular three-dimensional graphics rendering method and related system according to the present invention, a midplane is added and the primitives between the near plane and the far plane are projected onto it, so that the primitives between the near plane and the midplane produce the screen-exit (pop-out) stereo effect while the primitives between the midplane and the far plane produce the screen-entry stereo effect; thus, both the "screen-exit" and "screen-entry" effects can be rendered with the existing rendering pipeline in a 3D display device, without special hardware.
  • FIG. 1 is a three-dimensional schematic diagram of the extended projection transformation stage in a GPU rendering pipeline;
  • FIG. 2 is a schematic diagram of the principle of binocular parallax;
  • FIG. 3 is a schematic structural diagram of a binocular three-dimensional graphics rendering and display system according to an embodiment of the present invention.
  • FIG. 4 is a graphics processing pipeline of a GPU in an embodiment of the present invention.
  • FIG. 5 is a schematic flow chart of a method for reproducing a stereo image according to an embodiment of the present invention.
  • The left-view frame refers to the two-dimensional view intended for the viewer's left eye in three-dimensional image display; similarly, the right-view frame refers to the two-dimensional view intended for the viewer's right eye.
  • a stereoscopic frame refers to a 3D view frame obtained by combining a rendered left view frame and a right view frame according to the type of the 3D display.
  • The OpenGL rendering pipeline is known to convert 3D information into 2D images in real time and efficiently.
  • However, the OpenGL rendering pipeline used by existing GPUs cannot account for both screen-entry and screen-exit effects when rendering; the present embodiment realizes 3D rendering based on binocular parallax without dedicated GPU hardware support.
  • The binocular 3D graphics rendering method in this embodiment adds a midplane (middle plane). As shown in FIG. 1, the midplane lies between the near plane and the far plane and is taken as the projection plane; the primitives between the near plane and the far plane are projected onto the midplane, so that the primitives between the near plane and the midplane produce the screen-exit (pop-out) stereo effect, and the primitives between the midplane and the far plane produce the screen-entry stereo effect.
  • The pseudo depth must remain unchanged, i.e. stay within [-1, 1].
  • The correction is related to the displacement; with the displacement amount denoted s, the correction matrix M3 is defined accordingly.
  • The matrix of the projection transformation stage can then be represented by the matrix MPT.
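The exact entries of M3 and MPT are given in the original specification; as an illustration only, if the correction matrix M3 were a pure horizontal translation by the displacement s, it would shift x while leaving the pseudo depth inside [-1, 1]:

```python
def correction_matrix(s):
    """Hypothetical correction matrix: translate x by s and leave y and the
    pseudo depth z unchanged (an assumed form, not the patent's exact M3)."""
    return [[1, 0, 0, s],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1]]

def apply(m, v):
    # multiply a 4x4 matrix by a 4-component vector
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

v = [0.2, -0.1, 0.5, 1.0]                 # clip-space point, pseudo depth 0.5
out = apply(correction_matrix(0.03), v)
print(out[0] != v[0], out[2] == v[2])     # True True: x shifted, depth kept
```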
  • The buffers storing the left- and right-eye views are the left view frame and the right view frame, respectively; the positions of the objects in the left and right view frames are related to the depth of field.
  • the present embodiment provides a binocular three-dimensional graphics rendering and display system based on the binocular parallax principle and the GPU rendering pipeline.
  • FIG. 3 is a schematic structural diagram of the system, which includes five modules:
  • an external storage device 200 configured to store scene data, such as 3D mesh data, image data, configuration data, and the like;
  • a processor (CPU) 201 for parsing files, processing scene data, and synthesizing stereoscopic frames;
  • a GPU 202 implementing the main stages of the graphics rendering pipeline to generate the left and right views;
  • processor memory 204 storing the left and right view frame buffers and program data;
  • a 3D display 203 for displaying stereoscopic frames.
  • The CPU parses the scene data from the file in external storage, stores it in the processor memory, and, according to the rendering commands, sends the scene data to the GPU hardware either by the CPU itself or via DMA (Direct Memory Access).
  • the 3D rendering of the scene is done through the GPU's rendering pipeline.
  • The GPU needs to render both left and right view frames from the scene data; after each rendering pass, the data in the frame buffer is transferred from the GPU to the processor memory.
  • The left and right view frames in the processor memory are then combined into a binocular 3D view frame, the 3D view frame is transmitted to the frame buffer of the GPU, and it is finally shown on the 3D display.
  • The vertex data 300 stored in memory is expressed in object coordinates.
  • The transformation from object coordinates into viewpoint coordinates is performed by the model-view matrix 301.
  • the origin of the viewpoint coordinates is the position of the camera.
  • The viewpoint coordinates are then transformed in the transformation matrix 302 stage, which includes three steps of perspective, displacement, and scaling transformation; the coordinates are mapped into the cube between [-1, 1].
  • The coordinates obtained at this point are called the clip coordinates, and the z component of these coordinates is called the pseudo depth.
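With the common OpenGL-style perspective convention (an assumption; the patent only requires the pseudo depth to stay within [-1, 1]), the pseudo depth after the perspective divide maps the near plane to -1 and the far plane to +1:

```python
def pseudo_depth(z_eye, near, far):
    """Pseudo depth after the perspective transform and perspective divide,
    using the standard OpenGL-style projection (camera looks along -z)."""
    z_clip = -(far + near) / (far - near) * z_eye - 2.0 * far * near / (far - near)
    w_clip = -z_eye
    return z_clip / w_clip

print(round(pseudo_depth(-1.0, 1.0, 10.0), 6))    # -1.0 (near plane)
print(round(pseudo_depth(-10.0, 1.0, 10.0), 6))   # 1.0  (far plane)
```

Because the mapping is monotonic in depth, comparing pseudo depths suffices for the visibility test mentioned above.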
  • Flat shading or smooth shading is also performed; if lighting and textures are added, the pixel values between the vertices are also computed.
  • GPU provides a dedicated data channel.
  • The 3D information can thus be converted into a 2D image in real time and efficiently, and stored in a frame buffer in the GPU for output to the display.
  • The projection transformation in the rendering of this embodiment may adopt the method of Embodiment 1; other processes involved in the rendering, such as the model-view transformation, and the subsequent synthesis and display, may be implemented with reference to commonly used related technologies and are not detailed herein.
  • a stereoscopic image reproduction method including:
  • a creating step: creating at least two view frame buffers for respectively storing image data of different viewpoints;
  • a rendering step: receiving data of three-dimensional graphics of at least two viewpoints and rendering the data of each viewpoint using the binocular three-dimensional graphics rendering method of Embodiment 1, storing the rendering result into the corresponding view frame buffer;
  • a synthesizing step: synthesizing the rendering results in the at least two view frame buffers to obtain a stereoscopic frame, and outputting the stereoscopic frame.
  • The stereoscopic image reproduction method of this embodiment proceeds as follows: first, read the attribute information of the 3D display, including its length, width, resolution, and type (such as side-by-side or top-and-bottom); initialize the left- and right-viewpoint buffers in the processor memory and clear both buffers; set the projection matrix and model-view matrix of the left viewpoint in the GPU rendering pipeline; render the entire scene and store the result in the left buffer; set the projection matrix and model-view matrix of the right viewpoint in the GPU rendering pipeline; render the entire scene and store the result in the right buffer; according to the 3D display type, format or synthesize the left and right view frames and copy the result to the frame buffer; finally, the binocular 3D view frame is presented on the 3D display.
  • The projection transformation in the rendering of this embodiment may adopt the method of Embodiment 1; other processes involved in the rendering, such as the model-view transformation, and the subsequent synthesis and display, may be implemented with reference to commonly used related technologies and are not detailed herein.
  • the present invention further provides a stereoscopic image reproduction system, including:
  • a rendering module configured to receive the data of the three-dimensional graphics of the at least two viewpoints and to render the data of each viewpoint using the binocular three-dimensional graphics rendering method of the foregoing embodiment, storing the rendering result into the corresponding view frame buffer;
  • the synthesizing module is configured to synthesize the rendering results in the at least two view frame buffers to obtain a stereoscopic frame, and output the stereoscopic frame.
  • The present invention relates to a binocular 3D graphics rendering method and system that, based on the GPU rendering pipeline and the binocular parallax principle, provides OpenGL-compatible binocular 3D rendering capable of presenting stereoscopic 3D effects on shutter, polarized, autostereoscopic (glasses-free), and similar 3D displays, while remaining compatible with special-effects rendering algorithms in traditional graphics, such as particle systems, texture shading, and shadows.
  • The binocular 3D graphics obtained by the method can present a stereoscopic 3D world with depth of field and layering, just like 3D video.
  • The invention presents a stereoscopic 3D effect through the 3D display and solves the problem of rendering stereoscopic 3D effects when the GPU hardware does not support the OpenGL binocular 3D rendering API.
  • The method is based on the existing GPU rendering pipeline, making full use of its hardware-acceleration characteristics to improve processing efficiency; it is compatible with most OpenGL application programming interfaces and renders stereoscopic 3D scenes in real time using the existing OpenGL API.
  • Based on the binocular parallax principle, different left and right view frames are generated according to the parallax of the left and right eyes and the depth of the objects, and finally a binocular 3D view frame is synthesized according to the type of the 3D display.
  • The binocular 3D rendering method and system provided by the present invention require two view frames, left and right (multi-view situations require more view frames); two view frame buffers can be created in the processor memory, one storing the left view and the other storing the right view, which are then synthesized according to the stereoscopic 3D view-frame format of the 3D display. The method is also compatible with most special-effects rendering algorithms in traditional graphics.
  • The model-view matrix and the projection matrix are adjusted for each rendering pass, and the results of the two passes are saved into the left and right frame buffers respectively, that is, the data in the GPU frame buffer is copied to the corresponding view frame buffer in the processor memory.
  • The left and right view frames are combined into a binocular 3D view frame according to the type of the 3D display; finally, the 3D view frame is copied to the frame buffer on the GPU and presented on the 3D display.
  • The rendering process involved includes: setting the model-view matrix and projection matrix of the left viewpoint, rendering the scene data, and copying the data in the frame buffer into the left view frame buffer of the processor memory; setting the model-view matrix and projection matrix of the right viewpoint, rendering the scene data, and copying the data in the frame buffer into the right view frame buffer of the processor memory; according to the type of the 3D display, combining the two frames of data into a binocular 3D view frame, which is sent to the frame buffer on the GPU; and finally presenting it on the 3D display.
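The final packing step depends on the 3D display type. The sketch below illustrates two common stereo-frame formats with toy 2×2 view frames (the function and type names are illustrative, not from the patent):

```python
def synthesize(left, right, display_type):
    """Combine left/right view frames into one binocular 3D view frame."""
    if display_type == "side-by-side":
        # concatenate each pair of rows horizontally
        return [l_row + r_row for l_row, r_row in zip(left, right)]
    if display_type == "top-and-bottom":
        # stack the right frame below the left frame
        return left + right
    raise ValueError("unsupported display type: " + display_type)

L = [["L"] * 2 for _ in range(2)]
R = [["R"] * 2 for _ in range(2)]
sbs = synthesize(L, R, "side-by-side")
tab = synthesize(L, R, "top-and-bottom")
print(len(sbs), len(sbs[0]))   # 2 4
print(len(tab), len(tab[0]))   # 4 2
```

A real implementation would also rescale each view so that the packed frame matches the display resolution read at initialization.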

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a binocular three-dimensional (3D) graphics rendering method and a related system. The method comprises a projection transformation step. The projection transformation step consists of adding a midplane between a near plane and a far plane as the projection plane, and projecting the primitives between the near plane and the far plane onto the midplane. Because the primitives between the near plane and the far plane are projected onto the added midplane, the primitives between the near plane and the midplane yield a screen-exit (pop-out) stereo effect, and the primitives between the midplane and the far plane yield a screen-entry stereo effect. Consequently, no special hardware is needed to render the "screen-exit" and "screen-entry" effects when the existing rendering pipelines are used in a 3D display device.
PCT/CN2015/070601 2014-06-27 2015-01-13 Binocular three-dimensional graphics rendering method and related system WO2015196791A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410302221.4A CN105224288B (zh) 2014-06-27 2014-06-27 Binocular three-dimensional graphics rendering method and related system
CN201410302221.4 2014-06-27

Publications (1)

Publication Number Publication Date
WO2015196791A1 true WO2015196791A1 (fr) 2015-12-30

Family

ID=54936686

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/070601 WO2015196791A1 (fr) 2015-01-13 Binocular three-dimensional graphics rendering method and related system

Country Status (2)

Country Link
CN (1) CN105224288B (fr)
WO (1) WO2015196791A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111354062A (zh) * 2020-01-17 2020-06-30 中国人民解放军战略支援部队信息工程大学 Multi-dimensional spatial data rendering method and device
CN111953956A (zh) * 2020-08-04 2020-11-17 山东金东数字创意股份有限公司 Naked-eye stereoscopic special-shaped image three-dimensional camera generation system and method
CN113538648A (zh) * 2021-07-27 2021-10-22 歌尔光学科技有限公司 Image rendering method, apparatus, device, and computer-readable storage medium
CN115170740A (zh) * 2022-07-22 2022-10-11 北京字跳网络技术有限公司 Special-effects processing method and apparatus, electronic device, and storage medium

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10360721B2 (en) * 2016-05-26 2019-07-23 Mediatek Inc. Method and apparatus for signaling region of interests
CN106204703A (zh) * 2016-06-29 2016-12-07 乐视控股(北京)有限公司 Three-dimensional scene model rendering method and device
CN106548500A (zh) * 2016-09-26 2017-03-29 中国电子科技集团公司第二十九研究所 GPU-based two-dimensional situation image processing method and device
CN107103626A (zh) * 2017-02-17 2017-08-29 杭州电子科技大学 Smartphone-based scene reconstruction method
CN107016704A (zh) * 2017-03-09 2017-08-04 杭州电子科技大学 Augmented-reality-based virtual reality implementation method
CN107330846B (zh) 2017-06-16 2019-07-30 浙江大学 Binocular rendering pipeline process and method based on screen block pairs
CN108144292A (zh) * 2018-01-30 2018-06-12 河南三阳光电有限公司 Naked-eye 3D interactive game production device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009075739A (ja) * 2007-09-19 2009-04-09 Namco Bandai Games Inc Program, information storage medium, and image generation system
CN101477700A (zh) * 2009-02-06 2009-07-08 南京师范大学 True three-dimensional stereoscopic display method for Google Earth and SketchUp
CN101593357A (zh) * 2008-05-28 2009-12-02 中国科学院自动化研究所 Interactive volume cutting method based on three-dimensional plane widgets

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6144384A (en) * 1996-02-20 2000-11-07 Yugen Kashia Aloalo International Voxel data processing using attributes thereof


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111354062A (zh) * 2020-01-17 2020-06-30 中国人民解放军战略支援部队信息工程大学 Multi-dimensional spatial data rendering method and device
CN111953956A (zh) * 2020-08-04 2020-11-17 山东金东数字创意股份有限公司 Naked-eye stereoscopic special-shaped image three-dimensional camera generation system and method
CN111953956B (zh) * 2020-08-04 2022-04-12 山东金东数字创意股份有限公司 Naked-eye stereoscopic special-shaped image three-dimensional camera generation system and method
CN113538648A (zh) * 2021-07-27 2021-10-22 歌尔光学科技有限公司 Image rendering method, apparatus, device, and computer-readable storage medium
CN113538648B (zh) * 2021-07-27 2024-04-30 歌尔科技有限公司 Image rendering method, apparatus, device, and computer-readable storage medium
CN115170740A (zh) * 2022-07-22 2022-10-11 北京字跳网络技术有限公司 Special-effects processing method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN105224288B (zh) 2018-01-23
CN105224288A (zh) 2016-01-06

Similar Documents

Publication Publication Date Title
WO2015196791A1 (fr) Binocular three-dimensional graphics rendering method and related system
US10839591B2 (en) Stereoscopic rendering using raymarching and a virtual view broadcaster for such rendering
KR101697184B1 (ko) Mesh generation apparatus and method, and image processing apparatus and method
JP4489610B2 (ja) Stereoscopically viewable display device and method
US7675513B2 (en) System and method for displaying stereo images
JP2005151534A (ja) Pseudo-stereoscopic image creation device, pseudo-stereoscopic image creation method, and pseudo-stereoscopic image display system
US8866887B2 (en) Computer graphics video synthesizing device and method, and display device
AU2018249563B2 (en) System, method and software for producing virtual three dimensional images that appear to project forward of or above an electronic display
US11417060B2 (en) Stereoscopic rendering of virtual 3D objects
JP2009163724A (ja) Graphics interface, method for rasterizing graphics data, and computer-readable recording medium
US20130321409A1 (en) Method and system for rendering a stereoscopic view
JP4996922B2 (ja) Stereoscopic imaging
EP3057316B1 (fr) Generation of three-dimensional imagery to complement existing content
KR101208767B1 (ko) Method, apparatus, and system for generating stereoscopic images using curved-surface projection, and recording medium therefor
US10110876B1 (en) System and method for displaying images in 3-D stereo
US10931927B2 (en) Method and system for re-projection for multiple-view displays
KR101567002B1 (ko) Computer-graphics-based stereo floating integral imaging generation system
Harish et al. A view-dependent, polyhedral 3D display
KR101003060B1 (ko) Method for generating video frames for stereoscopic image production
KR100556830B1 (ko) 3D graphics model rendering apparatus and method for stereoscopic image display
Jeong et al. 60.2: Efficient Light‐field Rendering using Depth Maps
JP6025519B2 (ja) Image processing apparatus and image processing method
WO2023049087A1 (fr) Portal view for content items
TW201926256A (zh) Constructing VR scenes for movie scenes
Chuchvara Real-time video-plus-depth content creation utilizing time-of-flight sensor-from capture to display

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15811509

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15811509

Country of ref document: EP

Kind code of ref document: A1