WO2017197951A1 - Rendering method in augmented reality scene, processing module and augmented reality glasses - Google Patents
Rendering method in augmented reality scene, processing module and augmented reality glasses
- Publication number
- WO2017197951A1 (PCT/CN2017/075090)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- occlusion
- real
- virtual
- augmented reality
- scene
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/40—Hidden part removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/506—Illumination models
Definitions
- The present invention relates to the field of virtual reality display, and in particular to a rendering method for combining virtual and real content in an augmented reality scene, a processing module for executing the rendering method, and augmented reality glasses including the processing module.
- In augmented reality, depth detectors are generally used to superimpose information on real scenes. If the front-back positional relationship between the virtual object and the real object is not taken into account when superimposing the virtual information, the positional (occlusion) relationship conveyed in the resulting augmented image may be inconsistent with the user's everyday experience, causing a visual mismatch and degrading the user experience.
- In view of this, the present invention provides a rendering method for combining virtual and real content in an augmented reality scene, a processing module for performing the rendering method, and augmented reality glasses including the processing module.
- With this rendering method, the occlusion of the virtual scene by a real occlusion can be simulated.
- At least one embodiment of the present invention provides a rendering method in an augmented reality scene, where the rendering method includes:
- the step of establishing a virtual model of the real occlusion in the virtual scene comprises:
- the step of obtaining a grid map of the real occlusion comprises:
- a processing module for augmented reality glasses includes:
- a grid map generation unit configured to generate a grid map according to depth information of the real occlusion object
- a virtual scene establishing unit configured to establish a virtual scene, and add a virtual object model to the virtual scene
- An occlusion virtual model establishing unit configured to establish an occlusion virtual model of the real occlusion in the virtual scene, the attribute of the occlusion virtual model being set to absorb light;
- a rendering unit configured to render the virtual object model and the occlusion virtual model according to an attribute of the virtual object model and an attribute of the occlusion virtual model, respectively.
- the occlusion virtual model establishing unit includes:
- a coordinate unification subunit configured to unify the depth information of the real occlusion into viewpoint coordinates; and
- a model establishing subunit configured to establish the occlusion virtual model of the real occlusion in the virtual scene according to the viewpoint coordinates of the real occlusion.
- the grid map generating unit is configured to obtain a scatter plot of the real occlusion according to the depth information of the real occlusion, and is capable of generating the grid map from that scatter plot.
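The units and subunits described above can be sketched as cooperating components. This is an illustrative sketch only, under assumed names (`GridMapGenerationUnit`, `RenderingUnit.render`, and so on); the patent does not give an implementation.

```python
import numpy as np

class GridMapGenerationUnit:
    """Generates a grid map (mesh vertices on the pixel grid) from depth data."""
    def generate(self, depth_map):
        ys, xs = np.nonzero(depth_map > 0)   # pixels where the occlusion was detected
        return {"vertices": np.stack([xs, ys, depth_map[ys, xs]], axis=1)}

class VirtualSceneEstablishingUnit:
    """Creates a virtual scene containing the named virtual object models."""
    def create(self, *names):
        return {"objects": [{"name": n, "absorbs_light": False} for n in names]}

class OcclusionModelEstablishingUnit:
    """Adds an occlusion virtual model whose attribute is set to absorb light."""
    def build(self, scene, grid_map):
        # Inserted first so a depth-only pass can run before ordinary models.
        scene["objects"].insert(0, {"name": "occlusion", "absorbs_light": True,
                                    "mesh": grid_map})

class RenderingUnit:
    """Renders each model according to its attribute: light-absorbing models
    write depth only; ordinary models write depth and colour."""
    def render(self, scene):
        return [(o["name"], "depth_only" if o["absorbs_light"] else "depth+color")
                for o in scene["objects"]]

depth = np.array([[0.0, 0.8],
                  [0.0, 0.8]])               # toy 2x2 depth map of the real occlusion
scene = VirtualSceneEstablishingUnit().create("teapot")
OcclusionModelEstablishingUnit().build(scene, GridMapGenerationUnit().generate(depth))
print(RenderingUnit().render(scene))
# → [('occlusion', 'depth_only'), ('teapot', 'depth+color')]
```

The occluder is placed first in the render list so that its depth-only pass precedes the colour passes of ordinary virtual objects.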
- At least one embodiment of the present invention provides augmented reality glasses including a processing module, wherein the processing module is the above-described processing module provided by the present invention.
- the augmented reality glasses further include a depth camera for acquiring a scatter plot of the real occlusion, the depth camera being coupled to an input of the grid map generation unit.
- In the rendering method, a virtual model is constructed for the real occlusion and this occlusion virtual model is imported into the virtual scene; after the occlusion virtual model is rendered with its light-absorbing property, the virtual scene contains an occlusion virtual model corresponding in shape and position to the real occlusion.
- The occlusion virtual model here provides a visual occlusion effect, i.e., any scenes or objects behind the occlusion virtual model are not rendered.
- Since the real occlusion overlaps the occlusion virtual model in the virtual scene, when viewing through the augmented reality glasses the user sees the appearance of the real occlusion and experiences an occlusion effect consistent with everyday perception, i.e., cannot see any scene and/or object behind the real occlusion, and thus intuitively understands the front-back positional relationship between the real occlusion and the virtual object.
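The visual occlusion effect described above can be illustrated with a minimal software depth buffer. This is a hedged sketch, not the patent's implementation: the 4x4 "screen", the depth values, and the hand/teapot masks are invented for illustration. The light-absorbing occluder writes depth but no colour.

```python
import numpy as np

W = H = 4
color = np.zeros((H, W))           # 0.0 = nothing rendered (see-through region)
zbuf = np.full((H, W), np.inf)     # depth buffer; smaller = closer to the viewer

def draw(mask, depth, shade, absorbs_light):
    """Depth-tested draw: light-absorbing surfaces update depth only, no colour."""
    visible = mask & (depth < zbuf)
    zbuf[visible] = depth
    if not absorbs_light:
        color[visible] = shade

# Occlusion virtual model (stand-in for the real hand), closer to the viewer.
hand = np.zeros((H, W), bool); hand[:, :2] = True
draw(hand, depth=1.0, shade=0.0, absorbs_light=True)

# Virtual object model (the teapot), farther away and partly behind the hand.
teapot = np.zeros((H, W), bool); teapot[:, 1:3] = True
draw(teapot, depth=2.0, shade=0.9, absorbs_light=False)

# Column 1 is covered by the hand, so the teapot is not rendered there;
# through the glasses the user sees the real hand in that region instead.
print(color[0])
```

Because a depth-only model leaves colour untouched, rendering order matters in this sketch: drawing the occluder first guarantees that later colour writes behind it fail the depth test.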
- FIG. 1 is a flowchart of a rendering method in an augmented reality scene provided by the present invention
- FIG. 2 is a flow chart when rendering a virtual teapot using the rendering method provided by the present invention
- FIG. 3 is a schematic diagram of a processing module provided by the present invention.
- a rendering method in an augmented reality scene wherein, as shown in FIG. 1, the rendering method includes:
- step S1 and step S2 may be performed at the same time, or step S1 may be performed first and then step S2 may be performed, or step S2 may be performed first and then step S1 may be performed.
- the virtual object model is an object that is virtually displayed in an augmented reality scene.
- the virtual object model is a teapot.
- the real occlusion refers to a real object located in front of the virtual object model in viewpoint coordinates.
- the real obstruction is a human hand.
- In the rendering method, a virtual model is created for the real occlusion and this occlusion virtual model is imported into the virtual scene; after the occlusion virtual model is rendered with its light-absorbing property, the virtual scene contains an occlusion virtual model corresponding in shape and position to the real occlusion.
- The occlusion virtual model here provides a visual occlusion effect, i.e., any scenes or objects behind the occlusion virtual model are not rendered.
- Since the real occlusion overlaps the occlusion virtual model in the virtual scene, when viewing through the augmented reality glasses the user sees the appearance of the real occlusion and experiences an occlusion effect consistent with everyday perception, i.e., cannot see any scene and/or object behind the real occlusion, and thus intuitively understands the front-back positional relationship between the real occlusion and the virtual object.
- There is no particular limitation on how step S1 is performed.
- For example, a specific augmented reality application may be selected to implement step S1.
- For example, the virtual object model includes a teapot: the teapot can be added to the virtual scene using tools such as OpenGL or Unity 3D, its angle and size adjusted according to the virtual scene, and three-dimensional registration performed.
- step S3 may include:
- the viewpoint coordinate refers to a coordinate system based on the position of the augmented reality glasses or the eyes of the user.
- In step S32, the occlusion virtual model of the real occlusion is established directly in the viewpoint coordinates, without further coordinate conversion; therefore, the method provided by the present invention is more efficient.
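Unifying the depth information into viewpoint coordinates can be sketched as a standard pinhole back-projection. The patent gives no formulas, so the intrinsics `fx`, `fy`, `cx`, `cy` below are assumed example values, not parameters from the source.

```python
import numpy as np

# Assumed pinhole intrinsics of the depth camera (illustrative values only).
fx = fy = 500.0
cx, cy = 320.0, 240.0

def pixel_to_viewpoint(u, v, depth):
    """Back-project pixel (u, v) with metric depth into X, Y, Z in the
    viewpoint (camera) coordinate frame of the glasses."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# A depth sample 0.5 m in front of the camera, 100 px right of the centre.
p = pixel_to_viewpoint(420.0, 240.0, 0.5)
print(p)   # X offset of 0.1 m, on the optical axis vertically
```

Applying this to every valid depth pixel of the hand yields the viewpoint-coordinate vertices from which the occlusion virtual model is built.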
- step S2 may include:
- a depth camera is provided in the augmented reality glasses, and therefore, a grid map of the real occlusion can be obtained quickly and easily by using the method provided by at least one embodiment of the present invention.
- the virtual object model is a teapot
- the real occlusion is a human hand
- step S21 the real scene is captured by the depth camera, and the human hand is recognized;
- step S22 a scatter plot (not shown) of the human hand is obtained according to the depth information of the human hand;
- step S23 the scatter plot of the human hand is converted into a grid map of the real occlusion
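One plausible way to realize step S23 (the patent does not specify the algorithm): since the depth samples come from the camera's pixel grid, adjacent samples can be joined into triangles, skipping missing or discontinuous depths at the hand's silhouette. The function name and the `max_jump` threshold are assumptions.

```python
import numpy as np

def scatter_to_grid(depth, max_jump=0.1):
    """Triangulate grid-aligned depth samples into a mesh (the 'grid map').

    Each 2x2 block of valid, continuous depth pixels becomes two triangles;
    blocks with missing depth (0) or a jump larger than max_jump (e.g. at the
    hand's edge) are skipped so the mesh follows the occlusion's silhouette."""
    h, w = depth.shape
    triangles = []
    for r in range(h - 1):
        for c in range(w - 1):
            block = depth[r:r + 2, c:c + 2]
            if (block > 0).all() and np.ptp(block) <= max_jump:
                i = r * w + c                   # vertex index of the top-left pixel
                triangles.append((i, i + 1, i + w))
                triangles.append((i + w, i + 1, i + w + 1))
    return triangles

depth = np.array([[1.0, 1.00, 0.0],
                  [1.0, 1.05, 0.0]])   # right column: no hand detected there
print(len(scatter_to_grid(depth)))     # one valid 2x2 block -> two triangles
```
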
- step S1 a virtual scene model is established, and a virtual object model (ie, a teapot) is added to the virtual scene;
- step S31 the depth information of the human hand is unified into the viewpoint coordinates
- step S32 the virtual model of the occlusion of the real occlusion is established according to the viewpoint coordinates of the human hand;
- step S4 the occlusion virtual model is rendered according to the properties of the absorbed light.
- a processing module for augmented reality glasses wherein, as shown in FIG. 3, the processing module includes:
- a grid map generating unit 10 configured to generate a grid map according to the depth information of the real occlusion object
- a virtual scene establishing unit 20 configured to create a virtual scene including a virtual object
- An occlusion virtual model establishing unit 30 configured to establish an occlusion virtual model of the real occlusion in the virtual scene
- a rendering unit 40 is configured to set an attribute of the occlusion virtual model to absorb light and render the occlusion virtual model.
- the grid map generating unit 10 is configured to perform step S2, the virtual scene establishing unit 20 is configured to perform step S1, the occlusion virtual model establishing unit 30 is configured to perform step S3, and the rendering unit 40 is configured to perform step S4.
- the occlusion virtual model establishing unit 30 includes:
- a coordinate unification sub-unit 31 configured to obtain the viewpoint coordinates of the real occlusion based on the depth information of the real occlusion; and
- a model establishing sub-unit 32 configured to establish the occlusion virtual model of the real occlusion in the virtual scene according to the viewpoint coordinates of the real occlusion.
- the grid map generating unit 10 is configured to obtain a scatter plot of the real occlusion according to the depth information of the real occlusion, and to convert the scatter plot into the grid map.
- At least one embodiment of the present invention provides augmented reality glasses including a processing module, wherein the processing module is the above-described processing module provided by the present invention.
- the augmented reality glasses further include a depth camera for acquiring a scatter plot of the real occlusion, the depth camera being coupled to an input of the grid map generation unit.
Abstract
Description
Claims (8)
- A rendering method in an augmented reality scene, wherein the rendering method comprises: creating a virtual scene including a virtual object; acquiring depth information of a real occlusion and generating a grid map; establishing an occlusion virtual model of the real occlusion in the virtual scene; and setting an attribute of the occlusion virtual model to absorb light, and rendering the occlusion virtual model.
- The rendering method according to claim 1, wherein the step of establishing the virtual model of the real occlusion in the virtual scene comprises: obtaining viewpoint coordinates of the real occlusion based on the depth information of the real occlusion; and establishing the occlusion virtual model of the real occlusion according to the viewpoint coordinates of the real occlusion.
- The rendering method according to claim 1 or 2, wherein the step of obtaining the grid map of the real occlusion comprises: photographing a real scene with a depth camera and recognizing the real occlusion; obtaining a scatter plot of the real occlusion according to the depth information of the real occlusion; and converting the scatter plot of the real occlusion into the grid map of the real occlusion.
- A processing module for augmented reality glasses, wherein the processing module comprises: a grid map generation unit configured to generate a grid map according to depth information of a real occlusion; a virtual scene establishing unit configured to create a virtual scene including a virtual object; an occlusion virtual model establishing unit configured to establish an occlusion virtual model of the real occlusion in the virtual scene; and a rendering unit configured to set an attribute of the occlusion virtual model to absorb light and to render the occlusion virtual model.
- The processing module according to claim 4, wherein the occlusion virtual model establishing unit comprises: a coordinate unification subunit configured to obtain viewpoint coordinates of the real occlusion based on the depth information of the real occlusion; and a model establishing subunit configured to establish the occlusion virtual model of the real occlusion in the virtual scene according to the viewpoint coordinates of the real occlusion.
- The processing module according to claim 4 or 5, wherein the grid map generation unit is configured to obtain a scatter plot of the real occlusion according to the depth information of the real occlusion and to convert the scatter plot into the grid map.
- Augmented reality glasses comprising a processing module, wherein the processing module is the processing module according to any one of claims 4 to 6.
- The augmented reality glasses according to claim 7, wherein the augmented reality glasses further comprise a depth camera for acquiring a scatter plot of the real occlusion, the depth camera being connected to an input of the grid map generation unit.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/544,908 US10573075B2 (en) | 2016-05-19 | 2017-02-28 | Rendering method in AR scene, processor and AR glasses |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610339329.XA CN106056663B (zh) | 2016-05-19 | 2016-05-19 | Rendering method in augmented reality scene, processing module and augmented reality glasses |
CN201610339329.X | 2016-05-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017197951A1 true WO2017197951A1 (zh) | 2017-11-23 |
Family
ID=57176625
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/075090 WO2017197951A1 (zh) | 2016-05-19 | 2017-02-28 | Rendering method in augmented reality scene, processing module and augmented reality glasses |
Country Status (3)
Country | Link |
---|---|
US (1) | US10573075B2 (zh) |
CN (1) | CN106056663B (zh) |
WO (1) | WO2017197951A1 (zh) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106056663B (zh) | 2016-05-19 | 2019-05-24 | 京东方科技集团股份有限公司 | Rendering method in augmented reality scene, processing module and augmented reality glasses |
WO2018119794A1 (zh) * | 2016-12-28 | 2018-07-05 | 深圳前海达闼云端智能科技有限公司 | Display data processing method and device |
CN106910240B (zh) * | 2017-01-24 | 2020-04-28 | 成都通甲优博科技有限责任公司 | Real-time shadow generation method and device |
CN108694190A (zh) * | 2017-04-08 | 2018-10-23 | 大连万达集团股份有限公司 | Operation method for removing front occlusions of an observed object when browsing a BIM model |
CN109840947B (zh) | 2017-11-28 | 2023-05-09 | 广州腾讯科技有限公司 | Implementation method, apparatus, device and storage medium for an augmented reality scene |
CN108805985B (zh) * | 2018-03-23 | 2022-02-15 | 福建数博讯信息科技有限公司 | Virtual space method and device |
CN108479067B (zh) * | 2018-04-12 | 2019-09-20 | 网易(杭州)网络有限公司 | Rendering method and device for game pictures |
CN112513969A (zh) * | 2018-06-18 | 2021-03-16 | 奇跃公司 | Centralized rendering |
CN108830940A (zh) * | 2018-06-19 | 2018-11-16 | 广东虚拟现实科技有限公司 | Occlusion relationship processing method, apparatus, terminal device and storage medium |
CN110515463B (zh) * | 2019-08-29 | 2023-02-28 | 南京泛在地理信息产业研究院有限公司 | Monocular-vision-based 3D model embedding method for gesture-interactive video scenes |
WO2021184303A1 (zh) * | 2020-03-19 | 2021-09-23 | 深圳市创梦天地科技有限公司 | Video processing method and device |
CN111141217A (zh) * | 2020-04-03 | 2020-05-12 | 广东博智林机器人有限公司 | Object measurement method, apparatus, terminal device and computer storage medium |
CN111443814B (zh) * | 2020-04-09 | 2023-05-05 | 深圳市瑞云科技有限公司 | AR glasses system based on cloud rendering |
CN112040596B (zh) * | 2020-08-18 | 2022-11-08 | 张雪媛 | Virtual space lighting control method, computer-readable storage medium and system |
CN111951407A (zh) * | 2020-08-31 | 2020-11-17 | 福州大学 | Augmented reality model superposition method with real positional relationships |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102156810A (zh) * | 2011-03-30 | 2011-08-17 | 北京触角科技有限公司 | Augmented reality real-time virtual fitting system and method |
US20120113140A1 (en) * | 2010-11-05 | 2012-05-10 | Microsoft Corporation | Augmented Reality with Direct User Interaction |
CN103871106A (zh) * | 2012-12-14 | 2014-06-18 | 韩国电子通信研究院 | Virtual object fitting method using a human body model and virtual object fitting service system |
US20140240354A1 (en) * | 2013-02-28 | 2014-08-28 | Samsung Electronics Co., Ltd. | Augmented reality apparatus and method |
US20150130790A1 (en) * | 2013-11-14 | 2015-05-14 | Nintendo Of America Inc. | Visually Convincing Depiction of Object Interactions in Augmented Reality Images |
CN106056663A (zh) * | 2016-05-19 | 2016-10-26 | 京东方科技集团股份有限公司 | Rendering method in augmented reality scene, processing module and augmented reality glasses |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4901539B2 (ja) * | 2007-03-07 | 2012-03-21 | 株式会社東芝 | Stereoscopic video display system |
CN101727182B (zh) * | 2010-01-28 | 2011-08-10 | 南京航空航天大学 | Method and system for visualizing a participant's real hand in a helmet-mounted virtual reality environment |
CN102129708A (zh) | 2010-12-10 | 2011-07-20 | 北京邮电大学 | Fast multi-level virtual-real occlusion handling method in augmented reality environments |
CN102306088A (zh) * | 2011-06-23 | 2012-01-04 | 北京北方卓立科技有限公司 | Device and method for virtual-real registration of physical projection |
JP5927966B2 (ja) * | 2012-02-14 | 2016-06-01 | ソニー株式会社 | Display control device, display control method, and program |
CN102722249B (zh) * | 2012-06-05 | 2016-03-30 | 上海鼎为电子科技(集团)有限公司 | Control method, control device and electronic device |
US20140192164A1 (en) * | 2013-01-07 | 2014-07-10 | Industrial Technology Research Institute | System and method for determining depth information in augmented reality scene |
CN103489214A (zh) | 2013-09-10 | 2014-01-01 | 北京邮电大学 | Virtual-real occlusion handling method based on virtual model preprocessing in augmented reality systems |
CN103955267B (zh) | 2013-11-13 | 2017-03-15 | 上海大学 | Two-hand human-computer interaction method in optical see-through augmented reality systems |
CN104504671B (zh) | 2014-12-12 | 2017-04-19 | 浙江大学 | Virtual-real fusion image generation method for stereoscopic display |
JP6317854B2 (ja) * | 2015-03-30 | 2018-04-25 | 株式会社カプコン | Virtual three-dimensional space generation method, video system, control method therefor, and computer-readable recording medium |
- 2016
- 2016-05-19 CN CN201610339329.XA patent/CN106056663B/zh active Active
- 2017
- 2017-02-28 US US15/544,908 patent/US10573075B2/en active Active
- 2017-02-28 WO PCT/CN2017/075090 patent/WO2017197951A1/zh active Application Filing
Non-Patent Citations (1)
Title |
---|
WANG, YUTAO: "Research on Collaborate Assembly System Design based on Augmented Reality", SCIENCE -ENGINEERING (A), CHINA MASTER'S THESES FULL-TEXT DATABASE, 15 January 2011 (2011-01-15), pages 15 - 16 and 26-32, ISSN: 1674-0246 * |
Also Published As
Publication number | Publication date |
---|---|
US20180218539A1 (en) | 2018-08-02 |
CN106056663A (zh) | 2016-10-26 |
CN106056663B (zh) | 2019-05-24 |
US10573075B2 (en) | 2020-02-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017197951A1 (zh) | Rendering method in augmented reality scene, processing module and augmented reality glasses | |
KR102214827B1 (ko) | Apparatus and method for providing augmented reality | |
WO2018058601A1 (zh) | Virtual and reality fusion method and system, and virtual reality device | |
CN105303557B (zh) | See-through smart glasses and see-through method thereof | |
CN111880644A (zh) | Multi-user simultaneous localization and mapping (SLAM) | |
CN106600709A (zh) | VR virtual decoration method based on a decoration information model | |
WO2019041351A1 (zh) | Method for real-time blended rendering of 3D VR video and a virtual three-dimensional scene | |
US20130063560A1 (en) | Combined stereo camera and stereo display interaction | |
CN109598796A (zh) | Method and device for 3D fused display of a real scene and virtual objects | |
US20130127827A1 (en) | Multiview Face Content Creation | |
AU2018249563B2 (en) | System, method and software for producing virtual three dimensional images that appear to project forward of or above an electronic display | |
JP2006325165A (ja) | Telop generation device, telop generation program, and telop generation method | |
CN106162137A (zh) | Virtual viewpoint synthesis method and device | |
WO2015196791A1 (zh) | Binocular three-dimensional graphics rendering method and related system | |
JP2017016577A (ja) | Information processing apparatus, control method therefor, and program | |
CN104599317A (zh) | Mobile terminal and method implementing 3D scanning and modeling | |
CN104516492A (zh) | Human-computer interaction technique based on 3D holographic projection | |
CN103269430A (zh) | BIM-based three-dimensional scene generation method | |
CN102695070B (zh) | Depth-consistency fusion processing method for stereoscopic images | |
CN207603822U (zh) | Naked-eye 3D display system | |
US10110876B1 (en) | System and method for displaying images in 3-D stereo | |
Kasahara | Headlight: egocentric visual augmentation by wearable wide projector | |
TW201019265A (en) | Auxiliary design system and method for drawing and real-time displaying 3D objects | |
CN107635119B (zh) | Projection method and device | |
CN107038625A (zh) | Augmented-reality-based house-viewing guidance system for property sales |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 15544908 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17798518 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13.03.2019) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17798518 Country of ref document: EP Kind code of ref document: A1 |