WO2018119794A1 - Display data processing method and apparatus - Google Patents

Display data processing method and apparatus

Info

Publication number
WO2018119794A1
WO2018119794A1 (PCT/CN2016/112765; CN2016112765W)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
environment model
target
real environment
real
Prior art date
Application number
PCT/CN2016/112765
Other languages
English (en)
French (fr)
Inventor
王恺
廉士国
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司 filed Critical 深圳前海达闼云端智能科技有限公司
Priority to CN201680006930.5A priority Critical patent/CN107223271B/zh
Priority to PCT/CN2016/112765 priority patent/WO2018119794A1/zh
Publication of WO2018119794A1 publication Critical patent/WO2018119794A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G06T15/00 3D [Three Dimensional] image rendering

Definitions

  • Embodiments of the present invention relate to the field of image processing technologies, and in particular, to a display data processing method and apparatus.
  • In many VR applications, users can wear VR display devices and, by walking around in real space, change their position and posture in the virtual space built by the device, gaining better immersion and a better interactive experience.
  • However, the virtual space constructed by the VR display device generally covers the user's entire field of view, so the user cannot see the real scene. This can cause trouble for the user.
  • For example, in the virtual scene the user may face an open area in which he can move freely while, in the real scene, an unseen obstacle stands in front of him. If the user moves forward according to the virtual scene he sees, the obstacle will block his movement and may even cause dangerous situations, resulting in a poor user experience of the VR display device.
  • Embodiments of the present invention provide a display data processing method and apparatus that can match features of the real environment within the virtual space acquired by a device, optimizing the user experience.
  • In a first aspect, a display data processing method is provided, including: acquiring a virtual environment model displayed by a VR device; collecting data of the environment in which the user wearing the VR device is located, and generating a real environment model; and matching the real environment model against the virtual environment model and, when the matching result is that a specified target in the real environment model has no matching virtual object in the virtual environment, adding a virtual target corresponding to the specified target to the virtual environment model.
  • In a second aspect, a display data processing apparatus is provided, including:
  • a processing unit configured to acquire a virtual environment model displayed by the VR device;
  • a collecting unit configured to collect data of the environment in which the user wearing the VR device is located, the real environment model being generated by the processing unit;
  • the processing unit being further configured to match the real environment model against the virtual environment model and, when the matching result is that a specified target in the real environment model has no matching virtual object in the virtual environment, add a virtual target corresponding to the specified target to the virtual environment model.
  • In a third aspect, a display data processing apparatus is provided, comprising a memory, a communication interface, and a processor, the memory and the communication interface being coupled to the processor; the memory stores computer-executable code, the processor executes that code to carry out any of the above display data processing methods, and the communication interface handles data transmission between the display data processing apparatus and external devices.
  • In a fourth aspect, a computer storage medium is provided for storing the computer software instructions used by the display data processing apparatus, comprising program code designed for any of the above display data processing methods.
  • In a fifth aspect, a computer program product is provided that can be loaded directly into the internal memory of a computer and contains software code; after being loaded and executed by the computer, the program implements any of the above display data processing methods.
  • In a sixth aspect, a VR device is provided that includes the above display data processing apparatus.
  • In the above solutions, the display data processing apparatus acquires the virtual environment model displayed by the VR device; collects data of the environment in which the user wearing the VR device is located and generates a real environment model; and matches the real environment model against the virtual environment model, adding, when the matching result is that a specified target in the real environment model has no matching virtual object in the virtual environment, a virtual target corresponding to the specified target to the virtual environment model.
  • Because virtual targets indicating specified targets in the real environment model are added to the virtual environment model, for example virtual targets indicating obstacles, people, or traffic facilities in the real environment, the user can avoid objects in the real environment while moving according to the virtual scene he sees, avoiding dangerous situations. Features of the real environment are thus matched within the virtual space acquired by the device, optimizing the user experience.
  • FIG. 1 is a flowchart of a method for processing display data according to an embodiment of the present application
  • FIG. 2 to FIG. 4 are schematic diagrams of the process of a display data processing method provided by an embodiment of the present application.
  • FIG. 5 is a structural diagram of a display data processing apparatus according to an embodiment of the present application.
  • FIG. 6A is a structural diagram of a display data processing apparatus according to another embodiment of the present application.
  • FIG. 6B is a structural diagram of a display data processing apparatus according to still another embodiment of the present application.
  • The solutions provided by the embodiments of the present application can be applied to VR devices such as smart helmets, eye-mask-style headsets, and the like.
  • The display data processing apparatus provided by the embodiments of the present application may be the VR device itself or a functional entity within the VR device. Based on the above product forms, as shown in FIG. 1, an embodiment of the present application provides a display data processing method, including the following steps:
  • 101. Acquire the virtual environment model displayed by the VR device. The virtual environment model may be pre-built by a developer for viewing by the user wearing the VR device.
  • 102. Collect data of the environment in which the user wearing the VR device is located, and generate a real environment model. The order of steps 101 and 102 is not limited; in step 102, the data may be collected by an image sensor in an offline or online manner.
  • Regarding sensor placement, the first way is to fix the image sensor (such as a depth camera or a binocular camera) directly to the VR device or to a VR positioning device; the other way is to fix the image sensor anywhere in the real environment.
  • 103. Match the real environment model against the virtual environment model and, when the matching result is that a specified target in the real environment model has no matching virtual object in the virtual environment, add a virtual target corresponding to the specified target to the virtual environment model. Specifically, step 103 includes the following steps:
  • S1. Bind the coordinate system of the virtual environment model to the coordinate system of the real environment model.
  • After the image sensor collects the data of the environment in which the user is located, the real environment model is constructed according to a preset algorithm. To match the real environment model against the virtual environment model in step 103, the two coordinate systems must be bound so that positions in the two models correspond.
  • If the image sensor is set up in the first way, it is fixed to the VR device or the VR positioning device, so during matching the coordinate system of the real environment model is calibrated directly to the coordinate system of the virtual environment model, establishing the positional correspondence.
  • If the image sensor is set up in the second way, the real environment model must be generated in step 102 from the collected data of the user's environment according to the user's position and viewing direction.
  • Since the virtual environment model is normally a virtual rendering of the environment from the user's position and viewing direction, the positional correspondence between the two can then be established.
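  • The coordinate binding of step S1 can be illustrated with a minimal sketch. The patent does not specify the calibration math, so the planar rigid transform (rotation plus translation) and the `sensor_pose` parameters below are assumptions for illustration only:

```python
import math

def bind_coordinates(point_real, sensor_pose):
    """Map a 2D point from the real-environment model's frame into the
    virtual-environment model's frame.

    sensor_pose = (tx, ty, theta) is a hypothetical calibration: the
    translation and rotation relating the two coordinate systems."""
    x, y = point_real
    tx, ty, theta = sensor_pose
    # Planar rigid transform: rotate by theta, then translate.
    xv = x * math.cos(theta) - y * math.sin(theta) + tx
    yv = x * math.sin(theta) + y * math.cos(theta) + ty
    return (xv, yv)
```

Once the two coordinate systems are bound this way, a position detected in the real environment model can be looked up directly in the virtual environment model, which is what steps S2 to S4 rely on.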
  • S2. Detect the specified target in the real environment model.
  • S3. When it is confirmed that the position in the virtual environment model corresponding to the specified target's position in the real environment model contains no virtual object, retrieve a virtual target whose similarity to the specified target satisfies a predetermined condition. In step S3, the predetermined condition may be that the difference measure is below a predetermined threshold.
  • The generated real environment model may contain material information of the specified target, so the retrieval may be performed by material information. In addition, when the local server cannot retrieve a sufficiently similar virtual target, a cloud database may be searched. If no virtual target whose similarity to the specified target satisfies the predetermined condition is retrieved, the specified target itself is used as the virtual target.
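  • A retrieval step of this kind can be sketched as follows. The similarity metric is not specified in the patent, so the difference score below (material mismatch plus relative size gap), the field names, and the weights are illustrative assumptions:

```python
def retrieve_virtual_target(target, library, threshold=0.5):
    """Return the library object most similar to the detected real target,
    or the target itself when nothing is close enough (as the text
    prescribes for a failed retrieval).

    `material` and `size` are hypothetical attributes of a model entry;
    the 0.5 weights and the threshold are likewise placeholders."""
    def difference(obj):
        material_gap = 0.0 if obj["material"] == target["material"] else 1.0
        size_gap = abs(obj["size"] - target["size"]) / max(target["size"], 1e-9)
        return 0.5 * material_gap + 0.5 * size_gap

    best = min(library, key=difference, default=None)
    if best is not None and difference(best) < threshold:
        return best
    return target  # fall back to the specified target itself
```

A fuller implementation would first query the local library and only fall back to a cloud database, as the text describes.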
  • S4. Copy the virtual target to the position in the virtual environment model corresponding to the specified target's position in the real environment model.
  • The size ratio of the virtual environment model expresses the scaling relationship between a virtual object in the virtual environment model and the corresponding real object. If the size ratios of the real environment model and the virtual environment model are inconsistent, or if the retrieved virtual target's size ratio is inconsistent with that of the virtual environment model, copying the virtual target into the virtual environment model will not truthfully present the real state of the specified target to the user. Therefore, before step S4, the virtual target must be scaled according to the size ratio of the virtual environment model.
  • For example, if the virtual target is the same size as the specified target, the virtual target's proportion in the virtual environment model is set according to the specified target's proportion in the real environment model; if the sizes differ, the virtual target is first adjusted to the same size ratio as the specified target, and its proportion in the virtual environment model is then set according to the specified target's proportion in the real environment model.
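  • The scaling performed before step S4 can be sketched as a unit conversion. The patent does not fix how model scale is represented; the sketch assumes each model stores a single hypothetical model-units-per-metre factor:

```python
def scale_virtual_target(size_in_real_model, real_model_scale, virtual_model_scale):
    """Rescale one dimension of the virtual target so its proportion in the
    virtual environment model matches the specified target's proportion in
    the real environment model.

    Both scale arguments are assumed model-units-per-metre factors."""
    size_in_metres = size_in_real_model / real_model_scale   # back to world units
    return size_in_metres * virtual_model_scale              # into virtual units
```

For example, an object measuring 2.0 units in a 1:1 real model becomes 6.0 units in a virtual model drawn at three units per metre.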
  • Further, to enhance the user experience, the solution also includes a process of fusing the virtual target with other virtual objects in the virtual environment model, in which case the solution further includes the following step:
  • 104. When it is determined that other virtual objects surround the position in the virtual environment model corresponding to the specified target's position in the real environment model, fuse the virtual target with those other virtual objects.
  • The fusion in step 104 may also take high-level semantic information into account, such as the category attribute of the virtual target.
  • The process may involve deforming the shape of the virtual target so that it connects to the surrounding virtual objects with geometric continuity at the joints, and blending its material with that of the surrounding objects through texture synthesis. Finally, the modified virtual environment model is output to the display of the VR device, giving the user a better VR experience.
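  • The neighbourhood test that triggers step 104 can be sketched as follows. The geometry deformation and texture synthesis themselves are beyond a short example, so the sketch only detects the surrounding objects and records what the target would be fused with; the radius and the `pos`/`name` fields are illustrative assumptions:

```python
def fuse_if_surrounded(virtual_model, new_target, radius=1.0):
    """Flag the newly inserted virtual target for fusion when other
    virtual objects lie within `radius` of its position.

    Deforming the target's shape and blending its texture with the
    neighbours, as the text describes, are left as placeholders."""
    def near(obj):
        dx = obj["pos"][0] - new_target["pos"][0]
        dy = obj["pos"][1] - new_target["pos"][1]
        return (dx * dx + dy * dy) ** 0.5 <= radius

    neighbours = [obj for obj in virtual_model if near(obj)]
    new_target["fused_with"] = [obj["name"] for obj in neighbours]
    return new_target
```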
  • As shown in FIG. 2, the user views the virtual environment model M1 through the VR device.
  • If the user is in the living room of a home as shown in FIG. 3, other family members may be moving about the room at the same time, and items such as tables, chairs, and sofas may be placed there. Because the user cannot see the people and items in the living room while viewing M1, both the other family members and the items in the room may endanger the user's movement.
  • In this application, the image sensor collects data from the living room to form a real environment model M2 of the living-room environment. For example, M2 contains a coffee table Mw; after the coordinate systems of the virtual environment model M1 and the real environment model M2 are bound, a person moving in the living room is equivalently moving in M1.
  • If, while matching M1 against M2, no virtual object is found in M1 at the position corresponding to the coffee table Mw in M2, the user may collide with the coffee table when walking to that position, so a virtual object must be placed at the corresponding position in M1 to prompt the user to avoid it.
  • A virtual item similar to the coffee table Mw, for example one of similar material or dimensions, can be retrieved and copied into M1 at the position corresponding to Mw in M2; as shown in FIG. 4, a table has been added in the middle of M1.
  • For a better user experience, the table also needs to be scaled and fused with the surrounding virtual objects in M1.
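  • The living-room example follows the overall flow of step 103: detect targets in M2 and add a virtual stand-in to M1 wherever nothing matches. A minimal orchestration sketch, with `match` and `make_virtual_target` as hypothetical callables standing in for the detection and retrieval machinery described earlier:

```python
def update_display_model(virtual_model, real_model, match, make_virtual_target):
    """One matching pass: for every specified target detected in the real
    environment model, add a corresponding virtual target to the virtual
    environment model unless a virtual object already matches there."""
    for target in real_model["targets"]:
        if not match(virtual_model, target):      # nothing at that position
            virtual_model["objects"].append(make_virtual_target(target))
    return virtual_model
```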
  • In the above solution, the display data processing apparatus acquires the virtual environment model displayed by the VR device; collects data of the environment in which the user wearing the VR device is located and generates a real environment model; and matches the real environment model against the virtual environment model, adding, when the matching result is that a specified target in the real environment model has no matching virtual object in the virtual environment, a virtual target corresponding to the specified target to the virtual environment model.
  • Because virtual targets indicating specified targets in the real environment model, for example obstacles, people, or traffic facilities, are added to the virtual environment model, the user can avoid objects in the real environment while moving according to the virtual scene he sees, avoiding dangerous situations; features of the real environment are thus matched within the virtual space acquired by the device, optimizing the user experience.
  • The display data processing apparatus implements the functions provided by the above embodiments through the hardware structures and/or software modules it contains.
  • A person skilled in the art will readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present application.
  • The embodiments of the present application may divide the display data processing apparatus into function modules according to the above method examples.
  • Each function module may correspond to one function, or two or more functions may be integrated into one processing module.
  • The integrated modules can be implemented in the form of hardware or in the form of software function modules. It should be noted that the division of modules in the embodiments of the present application is schematic and is only a logical division of functions; other divisions are possible in actual implementations.
  • When each function module corresponds to one function, FIG. 5 shows a possible structure of the display data processing apparatus involved in the foregoing embodiments.
  • The display data processing apparatus includes a processing unit 21 and a collecting unit 22.
  • The processing unit 21 is configured to perform steps 101, 103-104, and S1-S5 of the foregoing method embodiment.
  • The collecting unit 22 is configured to collect data of the environment in which the user wearing the VR device is located and to pass the data to the processing unit 21 to generate the real environment model. For all related details of the steps involved in the foregoing method embodiment, refer to the function descriptions of the corresponding function modules; they are not repeated here.
  • FIG. 6A shows a possible structure of an electronic device involved in the embodiments of the present application.
  • The electronic device includes a communication module 31 and a processing module 32.
  • The processing module 32 is configured to control and manage the actions of the display data processing apparatus; for example, the processing module 32 supports the display data processing apparatus in executing the method performed by the processing unit 21.
  • The communication module 31 is configured to support data transmission between the electronic device and other devices, implementing the method performed by the collecting unit 22.
  • The electronic device may further include a storage module 33 for storing the program code and data of the display data processing apparatus, for example the method performed by the processing unit 21.
  • The processing module 32 may be a processor or a controller, for example a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It can implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure.
  • The processor may also be a combination implementing computing functions, for example one or more microprocessors, or a combination of a DSP and a microprocessor.
  • The communication module 31 can be a transceiver, a transceiver circuit, a communication interface, or the like.
  • The storage module can be a memory.
  • When the processing module 32 is a processor, the communication module 31 is a communication interface, and the storage module 33 is a memory, the electronic device according to the embodiment of the present application may be the electronic device shown in FIG. 6B.
  • Referring to FIG. 6B, the electronic device includes a processor 41, a communication interface 42, and a memory 43, the memory 43 and the communication interface 42 being coupled to the processor 41.
  • For example, the communication interface 42, the processor 41, and the memory 43 are connected to each other by a bus 44.
  • The memory 43 stores computer-executable code, and the processor 41 executes that code to carry out any of the above display data processing methods.
  • The communication interface 42 handles data transmission between the display data processing apparatus and external devices.
  • the bus 44 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus or the like.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is shown in Figure 6B, but it does not mean that there is only one bus or one type of bus.
  • The steps of the methods or algorithms described in connection with the present disclosure may be implemented in hardware or by a processor executing software instructions.
  • The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), a register, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor to enable the processor to read information from, and write information to, the storage medium.
  • the storage medium can also be an integral part of the processor.
  • the processor and the storage medium can be located in an ASIC. Additionally, the ASIC can be located in a core network interface device.
  • the processor and the storage medium may also exist as discrete components in the core network interface device.
  • the functions described herein can be implemented in hardware, software, firmware, or any combination thereof.
  • the functions may be stored in a computer readable medium or transmitted as one or more instructions or code on a computer readable medium.
  • Computer readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one location to another.
  • a storage medium may be any available media that can be accessed by a general purpose or special purpose computer.

Abstract

A display data processing method and apparatus, relating to the field of image processing technologies, capable of matching features of the real environment within the virtual space acquired by a device, optimizing the user experience. The method includes: acquiring a virtual environment model displayed by a VR device (101); collecting data of the environment in which the user wearing the VR device is located, and generating a real environment model (102); matching the real environment model against the virtual environment model and, when the matching result is that a specified target in the real environment model has no matching virtual object in the virtual environment, adding a virtual target corresponding to the specified target to the virtual environment model (103); and, when it is determined that other virtual objects surround the position in the virtual environment model corresponding to the specified target's position in the real environment model, fusing the virtual target with those other virtual objects (104).

Description

Display data processing method and apparatus
Technical Field
Embodiments of the present invention relate to the field of image processing technologies, and in particular to a display data processing method and apparatus.
Background
In recent years, virtual reality (VR) products have been applied in many fields.
In many VR applications, a user wearing a VR display device can change his or her position and posture in the virtual space built by the device by walking around in real space, gaining better immersion and a better interactive experience. However, to strengthen that immersion, the virtual space built by the VR display device generally covers the user's entire field of view, so the user cannot see the real scene. This can cause trouble for the user: for example, the virtual scene may show an open area in which the user can move freely while, in the real scene, an unseen obstacle stands in the way. If the user moves forward according to the virtual scene, the obstacle will block the movement and may even cause dangerous situations, resulting in a poor user experience of the VR display device.
Summary of the Invention
Embodiments of the present invention provide a display data processing method and apparatus that can match features of the real environment within the virtual space acquired by a device, optimizing the user experience.
In a first aspect, a display data processing method is provided, including:
acquiring a virtual environment model displayed by a VR device;
collecting data of the environment in which the user wearing the VR device is located, and generating a real environment model;
matching the real environment model against the virtual environment model and, when the matching result is that a specified target in the real environment model has no matching virtual object in the virtual environment, adding a virtual target corresponding to the specified target to the virtual environment model.
In a second aspect, a display data processing apparatus is provided, including:
a processing unit configured to acquire a virtual environment model displayed by a VR device;
a collecting unit configured to collect data of the environment in which the user wearing the VR device is located, the real environment model being generated by the processing unit;
the processing unit being further configured to match the real environment model against the virtual environment model and, when the matching result is that a specified target in the real environment model has no matching virtual object in the virtual environment, add a virtual target corresponding to the specified target to the virtual environment model.
In a third aspect, a display data processing apparatus is provided, comprising a memory, a communication interface, and a processor, the memory and the communication interface being coupled to the processor; the memory stores computer-executable code, the processor executes that code to carry out any of the above display data processing methods, and the communication interface handles data transmission between the display data processing apparatus and external devices.
In a fourth aspect, a computer storage medium is provided for storing the computer software instructions used by the display data processing apparatus, comprising program code designed for any of the above display data processing methods.
In a fifth aspect, a computer program product is provided that can be loaded directly into the internal memory of a computer and contains software code; after being loaded and executed by the computer, the program implements any of the above display data processing methods.
In a sixth aspect, a VR device is provided that includes the above display data processing apparatus.
In the above solutions, the display data processing apparatus acquires the virtual environment model displayed by the VR device; collects data of the environment in which the user wearing the VR device is located and generates a real environment model; and matches the real environment model against the virtual environment model, adding, when the matching result is that a specified target in the real environment model has no matching virtual object in the virtual environment, a virtual target corresponding to the specified target to the virtual environment model. Because virtual targets indicating specified targets in the real environment model are added to the virtual environment model, for example virtual targets indicating obstacles, people, or traffic facilities in the real environment, the user can avoid objects in the real environment while moving according to the virtual scene he sees, avoiding dangerous situations; features of the real environment are thus matched within the virtual space acquired by the device, optimizing the user experience.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the following briefly introduces the accompanying drawings needed for describing the embodiments or the prior art. Apparently, the drawings described below show merely some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a display data processing method according to an embodiment of this application;
FIG. 2 to FIG. 4 are schematic diagrams of the process of a display data processing method according to an embodiment of this application;
FIG. 5 is a structural diagram of a display data processing apparatus according to an embodiment of this application;
FIG. 6A is a structural diagram of a display data processing apparatus according to another embodiment of this application;
FIG. 6B is a structural diagram of a display data processing apparatus according to still another embodiment of this application.
Detailed Description
The system architectures and service scenarios described in the embodiments of this application are intended to illustrate the technical solutions of the embodiments more clearly and do not limit them. A person of ordinary skill in the art will appreciate that, as system architectures evolve and new service scenarios emerge, the technical solutions provided by the embodiments of this application apply equally to similar technical problems.
It should be noted that, in the embodiments of this application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" should not be construed as preferable to or more advantageous than other embodiments or designs; rather, such words are intended to present the relevant concept in a concrete manner.
It should also be noted that, in the embodiments of this application, "of", "relevant", and "corresponding" may sometimes be used interchangeably; where the distinction is not emphasized, the intended meaning is the same.
The solutions provided by the embodiments of this application can be applied to VR devices such as smart helmets, eye-mask-style headsets, and similar products. The display data processing apparatus provided by the embodiments of this application may be the VR device itself or a functional entity within the VR device. Based on the above product forms, and referring to FIG. 1, an embodiment of this application provides a display data processing method, including the following steps:
101. Acquire the virtual environment model displayed by the VR device.
The virtual environment model may be pre-built by a developer for viewing by the user wearing the VR device.
102. Collect data of the environment in which the user wearing the VR device is located, and generate a real environment model.
This solution does not limit the order of steps 101 and 102. In step 102, the data of the user's environment may be collected by an image sensor in an offline or online manner. Regarding sensor placement, the first way is to fix the image sensor (for example a depth camera or a binocular camera) directly to the VR device or to a VR positioning device; the other way is to fix the image sensor anywhere in the real environment.
103. Match the real environment model against the virtual environment model and, when the matching result is that a specified target in the real environment model has no matching virtual object in the virtual environment, add a virtual target corresponding to the specified target to the virtual environment model.
Specifically, step 103 includes the following steps:
S1. Bind the coordinate system of the virtual environment model to the coordinate system of the real environment model.
After the image sensor collects the data of the user's environment, the real environment model is constructed according to a preset algorithm. To match the real environment model against the virtual environment model in step 103, the two coordinate systems must be bound so that positions in the two models correspond. If the image sensor is set up in the first way, it is fixed to the VR device or the VR positioning device, so during matching the coordinate system of the real environment model is calibrated directly to the coordinate system of the virtual environment model, establishing the positional correspondence. If the image sensor is set up in the second way, the real environment model must be generated in step 102 from the collected data of the user's environment according to the user's position and viewing direction; since the virtual environment model is normally a virtual rendering of the environment from the user's position and viewing direction, the positional correspondence can then be established.
S2. Detect the specified target in the real environment model.
S3. When it is confirmed that the position in the virtual environment model corresponding to the specified target's position in the real environment model contains no virtual object, retrieve a virtual target whose similarity to the specified target satisfies a predetermined condition.
In step S3, the predetermined condition may be that the difference measure is below a predetermined threshold. The generated real environment model may contain material information of the specified target, so the retrieval may be performed by material information; in addition, when the local server cannot retrieve a sufficiently similar virtual target, a cloud database may be searched. If no virtual target whose similarity to the specified target satisfies the predetermined condition is retrieved, the specified target itself is used as the virtual target.
S4. Copy the virtual target to the position in the virtual environment model corresponding to the specified target's position in the real environment model.
The size ratio of the virtual environment model expresses the scaling relationship between a virtual object in the virtual environment model and the corresponding real object. If the size ratios of the real environment model and the virtual environment model are inconsistent, or if the retrieved virtual target's size ratio is inconsistent with that of the virtual environment model, copying the virtual target into the virtual environment model will not truthfully present the real state of the specified target to the user. Therefore, before step S4, the virtual target must be scaled according to the size ratio of the virtual environment model. For example, if the virtual target is the same size as the specified target, the virtual target's proportion in the virtual environment model is set according to the specified target's proportion in the real environment model; if the sizes differ, the virtual target is first adjusted to the same size ratio as the specified target, and its proportion in the virtual environment model is then set according to the specified target's proportion in the real environment model.
Further, to enhance the user experience, the solution also includes a process of fusing the virtual target with other virtual objects in the virtual environment model, in which case the solution further includes the following step:
104. When it is determined that other virtual objects surround the position in the virtual environment model corresponding to the specified target's position in the real environment model, fuse the virtual target with those other virtual objects.
The fusion in step 104 may also take high-level semantic information into account, such as the category attribute of the virtual target. The process may involve deforming the shape of the virtual target so that it connects to the surrounding virtual objects with geometric continuity at the joints, and blending its material with that of the surrounding objects through texture synthesis. Finally, the modified virtual environment model is output to the display of the VR device, giving the user a better VR experience.
As shown in FIG. 2, the user views the virtual environment model M1 through the VR device. If the user is in the living room of a home as shown in FIG. 3, other family members may be moving about the room at the same time, and items such as tables, chairs, and sofas may be placed there; because the user cannot see the people and items in the living room while viewing M1, both may endanger the user's movement. In this application, the image sensor collects data from the living room to form a real environment model M2 of the living-room environment. For example, M2 contains a coffee table Mw; after the coordinate systems of the virtual environment model M1 and the real environment model M2 are bound, a person moving in the living room is equivalently moving in M1. If, while matching M1 against M2, no virtual object is found in M1 at the position corresponding to the coffee table Mw in M2, the user may collide with the coffee table when walking to that position, so a virtual object must be placed at the corresponding position in M1 to prompt the user to avoid it. A virtual item similar to the coffee table Mw, for example one of similar material or dimensions, can be retrieved and copied into M1 at the position corresponding to Mw in M2; as shown in FIG. 4, a table has been added in the middle of M1. For a better user experience, the table also needs to be scaled and fused with the surrounding virtual objects in M1.
In the above solution, the display data processing apparatus acquires the virtual environment model displayed by the VR device; collects data of the environment in which the user wearing the VR device is located and generates a real environment model; and matches the real environment model against the virtual environment model, adding, when the matching result is that a specified target in the real environment model has no matching virtual object in the virtual environment, a virtual target corresponding to the specified target to the virtual environment model. Because virtual targets indicating specified targets in the real environment model, for example obstacles, people, or traffic facilities, are added to the virtual environment model, the user can avoid objects in the real environment while moving according to the virtual scene he sees, avoiding dangerous situations; features of the real environment are thus matched within the virtual space acquired by the device, optimizing the user experience.
It can be understood that the display data processing apparatus implements the functions provided by the above embodiments through the hardware structures and/or software modules it contains. A person skilled in the art will readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, this application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of this application.
The embodiments of this application may divide the display data processing apparatus into function modules according to the above method examples; for example, each function module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module can be implemented in the form of hardware or in the form of a software function module. It should be noted that the division of modules in the embodiments of this application is schematic and is only a logical division of functions; other divisions are possible in actual implementations.
When each function module corresponds to one function, FIG. 5 shows a possible structure of the display data processing apparatus involved in the above embodiments: the apparatus includes a processing unit 21 and a collecting unit 22. The processing unit 21 performs steps 101, 103-104, and S1-S5 of the above method embodiment; the collecting unit 22 collects data of the environment in which the user wearing the VR device is located and passes it to the processing unit 21 to generate the real environment model. For all related details of the steps involved in the above method embodiment, refer to the function descriptions of the corresponding function modules; they are not repeated here.
FIG. 6A shows a possible structure of an electronic device involved in the embodiments of this application. The electronic device includes a communication module 31 and a processing module 32. The processing module 32 controls and manages the actions of the display data processing apparatus; for example, it supports the apparatus in executing the method performed by the processing unit 21. The communication module 31 supports data transmission between the electronic device and other devices, implementing the method performed by the collecting unit 22. The electronic device may further include a storage module 33 for storing the program code and data of the display data processing apparatus, for example the method performed by the processing unit 21.
The processing module 32 may be a processor or a controller, for example a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and can implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the disclosure of this application. The processor may also be a combination implementing computing functions, for example one or more microprocessors, or a combination of a DSP and a microprocessor. The communication module 31 can be a transceiver, a transceiver circuit, a communication interface, or the like. The storage module can be a memory.
When the processing module 32 is a processor, the communication module 31 is a communication interface, and the storage module 33 is a memory, the electronic device according to the embodiments of this application may be the electronic device shown in FIG. 6B.
Referring to FIG. 6B, the electronic device includes a processor 41, a communication interface 42, and a memory 43, the memory 43 and the communication interface 42 being coupled to the processor 41. For example, the communication interface 42, the processor 41, and the memory 43 are connected to each other by a bus 44; the memory 43 stores computer-executable code, the processor 41 executes that code to carry out any of the above display data processing methods, and the communication interface 42 handles data transmission between the display data processing apparatus and external devices. The bus 44 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is shown in FIG. 6B, but this does not mean that there is only one bus or one type of bus.
The steps of the methods or algorithms described in connection with the disclosure of this application may be implemented in hardware or by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), a register, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium; of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may be located in an ASIC, and the ASIC may be located in a core network interface device; of course, the processor and the storage medium may also exist as discrete components in the core network interface device.
A person skilled in the art should appreciate that, in one or more of the above examples, the functions described in this application can be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored in, or transmitted as one or more instructions or code on, a computer-readable medium. Computer-readable media include computer storage media and communication media, the latter including any medium that facilitates transfer of a computer program from one place to another; a storage medium may be any available medium accessible by a general-purpose or special-purpose computer.
The specific implementations described above further detail the objectives, technical solutions, and beneficial effects of this application. It should be understood that the above are merely specific implementations of this application and are not intended to limit its protection scope; any modification, equivalent replacement, or improvement made on the basis of the technical solutions of this application shall fall within its protection scope.

Claims (15)

  1. A display data processing method, characterized by comprising:
    acquiring a virtual environment model displayed by a VR device;
    collecting data of the environment in which the user wearing the VR device is located, and generating a real environment model;
    matching the real environment model against the virtual environment model and, when the matching result is that a specified target in the real environment model has no matching virtual object in the virtual environment, adding a virtual target corresponding to the specified target to the virtual environment model.
  2. The method according to claim 1, characterized in that generating the real environment model comprises: generating the real environment model from the collected data of the user's environment according to the user's position and viewing direction.
  3. The method according to claim 1, characterized in that matching the real environment model against the virtual environment model and, when the matching result is that a specified target in the real environment model has no matching virtual object in the virtual environment, adding a virtual target corresponding to the specified target to the virtual environment model comprises:
    binding the coordinate system of the virtual environment model to the coordinate system of the real environment model;
    detecting the specified target in the real environment model;
    when it is confirmed that the position in the virtual environment model corresponding to the specified target's position in the real environment model contains no virtual object, retrieving a virtual target whose similarity to the specified target satisfies a predetermined condition;
    copying the virtual target to the position in the virtual environment model corresponding to the specified target's position in the real environment model.
  4. The method according to claim 3, characterized in that, if no virtual target whose similarity to the specified target satisfies the predetermined condition is retrieved, the specified target is used as the virtual target.
  5. The method according to claim 3 or 4, characterized in that, before copying the virtual target to the position in the virtual environment model corresponding to the specified target's position in the real environment model, the method further comprises:
    scaling the virtual target according to the size ratio of the virtual environment model.
  6. The method according to claim 3 or 4, characterized in that the method further comprises:
    when it is determined that other virtual objects surround the position in the virtual environment model corresponding to the specified target's position in the real environment model, fusing the virtual target with the other virtual objects.
  7. A display data processing apparatus, characterized by comprising:
    a processing unit configured to acquire a virtual environment model displayed by a VR device;
    a collecting unit configured to collect data of the environment in which the user wearing the VR device is located, the real environment model being generated by the processing unit;
    the processing unit being further configured to match the real environment model against the virtual environment model and, when the matching result is that a specified target in the real environment model has no matching virtual object in the virtual environment, add a virtual target corresponding to the specified target to the virtual environment model.
  8. The apparatus according to claim 7, characterized in that the processing unit is specifically configured to generate the real environment model from the collected data of the user's environment according to the user's position and viewing direction.
  9. The apparatus according to claim 7, characterized in that the processing unit is specifically configured to: bind the coordinate system of the virtual environment model to the coordinate system of the real environment model; detect the specified target in the real environment model; when it is confirmed that the position in the virtual environment model corresponding to the specified target's position in the real environment model contains no virtual object, retrieve a virtual target whose similarity to the specified target satisfies a predetermined condition; and copy the virtual target to the position in the virtual environment model corresponding to the specified target's position in the real environment model.
  10. The apparatus according to claim 8, characterized in that the processing unit is further configured to use the specified target as the virtual target if no virtual target whose similarity to the specified target satisfies the predetermined condition is retrieved.
  11. The apparatus according to claim 9 or 10, characterized in that the processing unit is further configured to scale the virtual target according to the size ratio of the virtual environment model.
  12. The apparatus according to claim 9 or 10, characterized in that the processing unit is further configured to fuse the virtual target with other virtual objects when it is determined that the other virtual objects surround the position in the virtual environment model corresponding to the specified target's position in the real environment model.
  13. An electronic device, characterized by comprising a memory, a communication interface, and a processor, the memory and the communication interface being coupled to the processor; the memory is configured to store computer-executable code, the processor is configured to execute the computer-executable code to carry out the method according to any one of claims 1 to 6, and the communication interface is used for data transmission between the display data processing apparatus and external devices.
  14. A computer storage medium, characterized by storing the computer software instructions used by a display data processing apparatus, comprising program code designed for executing the display data processing method according to any one of claims 1 to 6.
  15. A computer program product, characterized in that it can be loaded directly into the internal memory of a computer and contains software code, and, after being loaded and executed by the computer, the computer program can implement the display data processing method according to any one of claims 1 to 6.
PCT/CN2016/112765 2016-12-28 2016-12-28 Display data processing method and apparatus WO2018119794A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680006930.5A CN107223271B (zh) 2016-12-28 2016-12-28 Display data processing method and apparatus
PCT/CN2016/112765 WO2018119794A1 (zh) 2016-12-28 2016-12-28 Display data processing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/112765 WO2018119794A1 (zh) 2016-12-28 2016-12-28 Display data processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2018119794A1 true WO2018119794A1 (zh) 2018-07-05

Family

ID=59928211

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/112765 WO2018119794A1 (zh) 2016-12-28 2016-12-28 Display data processing method and apparatus

Country Status (2)

Country Link
CN (1) CN107223271B (zh)
WO (1) WO2018119794A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110060355A (zh) * 2019-04-29 2019-07-26 北京小米移动软件有限公司 Interface display method, apparatus, device, and storage medium
CN112199749A (zh) * 2020-10-13 2021-01-08 珠海格力电器股份有限公司 Air conditioner selection system and method, electronic device, and storage medium
CN112416132A (zh) * 2020-11-27 2021-02-26 上海影创信息科技有限公司 Method and system for detecting launch conditions of a VR glasses application, and the VR glasses
CN113485548A (zh) * 2021-06-18 2021-10-08 青岛小鸟看看科技有限公司 Model loading method and apparatus for a head-mounted display device, and head-mounted display device
CN115272630A (zh) * 2022-09-29 2022-11-01 南方科技大学 Data processing method and apparatus, virtual reality glasses, and storage medium
CN115328316A (zh) * 2022-08-24 2022-11-11 中国科学院半导体研究所 Metaverse object material construction method and apparatus based on VR technology

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108090966B (zh) * 2017-12-13 2021-06-01 广州市和声信息技术有限公司 Virtual object reconstruction method and system suitable for virtual scenes
WO2020083944A1 (en) * 2018-10-22 2020-04-30 Unity IPR ApS Method and system for addressing and segmenting portions of the real world for visual digital authoring in a mixed reality environment
CN111199583B (zh) * 2018-11-16 2023-05-16 广东虚拟现实科技有限公司 Virtual content display method, apparatus, terminal device, and storage medium
CN109903395B (zh) * 2019-03-01 2021-07-09 Oppo广东移动通信有限公司 Model processing method and apparatus, storage medium, and electronic device
CN111766937A (zh) * 2019-04-02 2020-10-13 广东虚拟现实科技有限公司 Virtual content interaction method, apparatus, terminal device, and storage medium
CN111462340B (zh) * 2020-03-31 2023-08-29 歌尔科技有限公司 VR display method, device, and computer storage medium
CN112738498B (zh) * 2020-12-24 2023-12-08 京东方科技集团股份有限公司 Virtual tour system and method
CN113034668B (zh) * 2021-03-01 2023-04-07 中科数据(青岛)科技信息有限公司 AR-assisted mechanical simulation operation method and system
CN114385000A (zh) * 2021-11-30 2022-04-22 达闼机器人有限公司 Smart device control method and apparatus, server, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105844714A (zh) * 2016-04-12 2016-08-10 Guangzhou Frontop Digital Creative Technology Co., Ltd. Augmented-reality-based scene display method and system
CN105955455A (zh) * 2016-04-15 2016-09-21 Beijing Pico Technology Co., Ltd. Apparatus and method for adding objects in a virtual scene
CN106056663A (zh) * 2016-05-19 2016-10-26 BOE Technology Group Co., Ltd. Rendering method, processing module, and augmented reality glasses for augmented reality scenes
CN106133582A (zh) * 2013-10-03 2016-11-16 Sulon Technologies Inc. Method and system for incorporating real-scene image streams within a head-mounted display device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1168057C (zh) * 1996-08-14 2004-09-22 Nurakhmed Nurislamovich Latypov Method for tracking and displaying a user's position and orientation in space, method for presenting a virtual environment to the user, and systems implementing these methods
CN102930447A (zh) * 2012-10-22 2013-02-13 Guangzhou Xinjiezou Digital Technology Co., Ltd. Virtual try-on implementation method and device
CN203070360U (zh) * 2012-10-22 2013-07-17 Guangzhou Xinjiezou Digital Technology Co., Ltd. Virtual try-on device
CN103475893B (zh) * 2013-09-13 2016-03-23 Beijing Zhigu Ruituo Tech Co., Ltd. Object picking apparatus and object picking method in three-dimensional display
CN104599243B (zh) * 2014-12-11 2017-05-31 Beihang University Virtual-real fusion method for multiple video streams and a three-dimensional scene
CN106125938B (zh) * 2016-07-01 2021-10-22 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device
CN106249883B (zh) * 2016-07-26 2019-07-30 Nubia Technology Co., Ltd. Data processing method and electronic device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106133582A (zh) * 2013-10-03 2016-11-16 Sulon Technologies Inc. Method and system for incorporating real-scene image streams within a head-mounted display device
CN105844714A (zh) * 2016-04-12 2016-08-10 Guangzhou Frontop Digital Creative Technology Co., Ltd. Augmented-reality-based scene display method and system
CN105955455A (zh) * 2016-04-15 2016-09-21 Beijing Pico Technology Co., Ltd. Apparatus and method for adding objects in a virtual scene
CN106056663A (zh) * 2016-05-19 2016-10-26 BOE Technology Group Co., Ltd. Rendering method, processing module, and augmented reality glasses for augmented reality scenes

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110060355A (zh) * 2019-04-29 2019-07-26 Beijing Xiaomi Mobile Software Co., Ltd. Interface display method, apparatus, device, and storage medium
CN110060355B (zh) * 2019-04-29 2023-05-23 Beijing Xiaomi Mobile Software Co., Ltd. Interface display method, apparatus, device, and storage medium
CN112199749A (zh) * 2020-10-13 2021-01-08 Gree Electric Appliances, Inc. of Zhuhai Air conditioner selection system and method, electronic device, and storage medium
CN112416132A (зh) * 2020-11-27 2021-02-26 Shanghai Shadow Creator Information Technology Co., Ltd. Method and system for detecting launch conditions of a VR glasses application, and VR glasses thereof
CN113485548A (zh) * 2021-06-18 2021-10-08 Qingdao Pico Technology Co., Ltd. Model loading method and apparatus for a head-mounted display device, and head-mounted display device
CN113485548B (zh) * 2021-06-18 2024-02-23 Qingdao Pico Technology Co., Ltd. Model loading method and apparatus for a head-mounted display device, and head-mounted display device
CN115328316A (zh) * 2022-08-24 2022-11-11 Institute of Semiconductors, Chinese Academy of Sciences VR-based method and apparatus for constructing metaverse object materials
CN115272630A (zh) * 2022-09-29 2022-11-01 Southern University of Science and Technology Data processing method and apparatus, virtual reality glasses, and storage medium
CN115272630B (zh) * 2022-09-29 2022-12-23 Southern University of Science and Technology Data processing method and apparatus, virtual reality glasses, and storage medium

Also Published As

Publication number Publication date
CN107223271A (zh) 2017-09-29
CN107223271B (zh) 2021-10-15

Similar Documents

Publication Publication Date Title
WO2018119794A1 (zh) Display data processing method and apparatus
US11200617B2 (en) Efficient rendering of 3D models using model placement metadata
US11494995B2 (en) Systems and methods for virtual and augmented reality
CN104346834B (zh) Information processing device and position designation method
KR101636027B1 (ko) Methods and systems for capturing and moving 3D models and true-scale metadata of real-world objects
CN107358217B (zh) Gaze estimation method and apparatus
JP6225538B2 (ja) Information processing apparatus, system, information providing method, and information providing program
JP2021527877A (ja) Method and apparatus for detecting three-dimensional human body pose information, electronic device, and storage medium
US11615489B2 (en) System for providing removals simulation using virtual reality and augmented reality and brokering real estate therethrough
US10147240B2 (en) Product image processing method, and apparatus and system thereof
US20180374236A1 (en) Apparatus for determination of interference between virtual objects, control method of the apparatus, and storage medium
CN113610981A (zh) Face model generation method, interaction method, and related apparatus
US20190318537A1 (en) Three-dimensional model constructing method, apparatus, and system
WO2023024441A1 (zh) Model reconstruction method and related apparatus, electronic device, and storage medium
CN108628442B (zh) Information prompting method and apparatus, and electronic device
GB2559850A (en) Stroke operation prediction for three-dimensional digital content
CN115515487A (zh) Vision-based rehabilitation training system based on 3D human pose estimation using multi-view images
CN109740511B (zh) Facial expression matching method, apparatus, device, and storage medium
CN111028359B (zh) Augmented reality service configuration and request method, apparatus, device, and medium
WO2019134501A1 (zh) Method and apparatus for simulating user try-on, storage medium, and mobile terminal
Chu et al. A cloud service framework for virtual try-on of footwear in augmented reality
Lee et al. Robust multithreaded object tracker through occlusions for spatial augmented reality
US20230206530A1 (en) Importation and transformation tool for utilizing computer-aided design files in a web browser or customized client interface
CN113112613B (zh) Model display method and apparatus, electronic device, and storage medium
US20230209171A1 (en) Image processing apparatus, image processing method, and program

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 16925283

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 15.10.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 16925283

Country of ref document: EP

Kind code of ref document: A1