WO2019241925A1 - Virtual reality VR data processing method, device, and storage medium - Google Patents

Virtual reality VR data processing method, device, and storage medium

Info

Publication number
WO2019241925A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
virtual
terminal device
geographic location
network device
Prior art date
Application number
PCT/CN2018/091945
Other languages
English (en)
French (fr)
Inventor
赵其勇
贾伟杰
王娟娟
季莉
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/CN2018/091945
Publication of WO2019241925A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00: Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/40: Network security protocols
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45: Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47: End-user applications
    • H04N21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content

Definitions

  • the present application relates to the field of virtual reality technology, and in particular, to a virtual reality (Virtual Reality, VR for short) data processing method, device, and system.
  • Virtual Reality is a display technology capable of displaying virtual content through computer simulation. As the most popular display technology in recent years, it has been sought after by manufacturers and users.
  • Computer Graphics Virtual Reality (CG VR) is a further development of VR technology. CG VR can actively create the user's perspective, allowing a user wearing a terminal device to walk around in a virtual scene, making the experience highly immersive and providing six degrees of freedom.
  • In the existing CG VR technology, a computer generates virtual data such as virtual games, modeling, and virtual social content in real time according to the motion posture of the terminal device, and transmits the virtual data to the terminal device through a cable. After receiving the virtual data, the terminal device displays the virtual content corresponding to the virtual data to the user.
  • In the prior art, the materials used by the computer to render and generate virtual content are fixed. Therefore, the virtual data transmitted from the computer to the terminal device is also fixed, which results in the terminal device presenting unvarying virtual content to the user.
  • The present application provides, with reference to various embodiments, a virtual reality VR data processing method, device, and storage medium, so as to enrich the virtual content that VR presents to the user.
  • a first aspect of the present application provides a virtual reality VR data processing method, including:
  • the determining the first VR data corresponding to the first geographic location includes:
  • the method further includes:
  • the rendering the first virtual material to obtain the first VR data includes:
  • If the preference information indicates that the first virtual material is to be rendered according to a preset category, the first virtual material is rendered according to the preset category to obtain the first VR data.
  • the determining a first virtual material corresponding to the first geographic location includes:
  • the first virtual material corresponding to the first geographical location is obtained from a storage device, where the storage device stores virtual materials corresponding to N geographical locations, where N is a positive integer.
  • the acquiring the first virtual material corresponding to the first geographic location from a storage device includes:
  • If the virtual material corresponding to the first geographic location does not exist in the storage device, the second virtual material corresponding to a second geographic location is obtained from the storage device, where the range of the second geographic location is greater than the range of the first geographic location.
  • the first virtual material is a panoramic material
  • rendering the first virtual material to obtain first VR data includes:
  • the first virtual material is a specific material
  • rendering the first virtual material to obtain first virtual reality VR data includes:
  • The fixed material to which the specific material has been added is rendered to obtain the first VR data.
  • the sending the first VR data to the terminal device includes:
  • In the method, the network device acquires the first geographic position of the terminal device, and determines and sends the corresponding first VR data to the terminal device according to the first geographic position, so as to provide the terminal device with VR data corresponding to the first position where the terminal device is located.
  • In this way, the terminal device can display different VR data corresponding to its position as it moves among different geographic locations, which enriches the virtual content that VR provides to the user's terminal device.
  • a second aspect of the present application provides a virtual reality VR data processing method, including:
  • the method further includes:
  • the receiving the first VR data sent by the network device includes:
  • the displaying content corresponding to the first VR data includes:
  • In the method, the terminal device acquires the first geographic position where it is located and sends the first geographic position to the network device, receives the first VR data determined by the network device according to the first geographic position, and displays the content corresponding to the first VR data.
  • In this way, the terminal device can display different VR data corresponding to its position as it moves among different geographic locations, which enriches the virtual content that VR provides to the user's terminal device.
  • a third aspect of the present application provides a virtual reality VR data processing device, including:
  • An acquisition module configured to acquire a first geographic location of a terminal device
  • a determining module configured to determine first virtual reality VR data corresponding to the first geographic location
  • a sending module configured to send the first VR data to the terminal device, wherein the content corresponding to the first VR data is used for displaying on the terminal device.
  • the determining module is specifically configured to:
  • the obtaining module is further configured to obtain preference information sent by the terminal device, where the preference information is used to indicate whether to render the first virtual material according to a preset category;
  • the determining module is specifically configured to: if the preference information indicates that the first virtual material is rendered according to a preset category, render the first virtual material according to the preset category to obtain the first VR data.
  • the determining module is specifically configured to:
  • the first virtual material corresponding to the first geographical location is obtained from a storage device, where the storage device stores virtual materials corresponding to N geographical locations, where N is a positive integer.
  • The determining module is specifically configured to: if the virtual material corresponding to the first geographic location does not exist in the storage device, obtain from the storage device the second virtual material corresponding to a second geographic location, where the range of the second geographic location is greater than the range of the first geographic location.
  • the first virtual material is a panoramic material
  • the determining module is specifically configured to render the panoramic material to obtain the first VR data.
  • the first virtual material is a specific material
  • The determining module is specifically configured to: obtain the specific material corresponding to the first geographic location and the panoramic material requested by the terminal device; add the specific material to the fixed material; and render the fixed material to which the specific material has been added to obtain the first VR data.
  • the sending module is specifically configured to:
  • In the device, the first geographic position of the terminal device is acquired, and the corresponding first VR data is determined according to the first geographic position and sent to the terminal device, so as to provide the terminal device with VR data corresponding to the first position where the terminal device is located.
  • This enriches the virtual content that VR provides to the user's terminal device.
  • a fourth aspect of the present application provides a virtual reality VR data processing device, including:
  • An acquisition module configured to acquire a first geographic location of a terminal device
  • a sending module configured to send the first geographic location to a network device, where the first geographic location is used by the network device to determine first virtual reality VR data corresponding to the first location;
  • a receiving module configured to receive first VR data sent by the network device
  • a processing module for displaying content corresponding to the first VR data.
  • The sending module is further configured to send preference information to the network device, where the preference information is used to indicate, after the network device obtains the first virtual material corresponding to the first geographic location, whether the network device renders the first virtual material according to a preset category.
  • the receiving module is specifically configured to receive the first VR data that is encoded and compressed and sent by the network device in a wireless communication manner.
  • the processing module is specifically configured to decode and decompress the first VR data that has undergone encoding and compression processing
  • In the device, the first geographic position where the terminal device is located is acquired and sent to the network device, the first VR data determined by the network device according to the first geographic position is received, and the content corresponding to the first VR data is displayed.
  • In this way, the terminal device can display different VR data corresponding to its position as it moves among different geographic locations, which enriches the virtual content that VR provides to the user's terminal device.
  • An embodiment of the present application provides a virtual reality VR data processing device, including a processor and a memory; the memory is configured to store a program, and the processor is configured to call the program stored in the memory to execute the virtual reality VR data processing method described in any one of the first aspect of the present application.
  • An embodiment of the present application provides a virtual reality VR data processing device, including a processor and a memory; the memory is configured to store a program, and the processor is configured to call the program stored in the memory to execute the virtual reality VR data processing method described in any one of the second aspect of the present application.
  • An embodiment of the present application provides a computer-readable storage medium storing program code; when the program code is executed, the virtual reality VR data processing method described in any one of the first aspect of the present application is executed.
  • An embodiment of the present application provides a computer-readable storage medium storing program code; when the program code is executed, the virtual reality VR data processing method described in any one of the second aspect of the present application is executed.
  • FIG. 1 is a schematic structural diagram of a prior art VR system
  • FIG. 2 is a schematic structural diagram of a VR system applied in the present application
  • FIG. 3 is a schematic flowchart of a first embodiment of a VR data processing method according to the present application.
  • FIG. 4 is a schematic diagram of a geographic range of the present application.
  • FIG. 5 is a schematic flowchart of a second embodiment of a VR data processing method according to the present application.
  • FIG. 6 is a schematic flowchart of a third embodiment of a VR data processing method according to the present application.
  • FIG. 7 is a schematic diagram of a preset category in the preference information of the present application.
  • FIG. 8 is a schematic structural diagram of an embodiment of a VR data processing system according to the present application.
  • FIG. 9 is a schematic structural diagram of an embodiment of a VR data processing system according to the present application.
  • FIG. 10 is a schematic flowchart of a fourth embodiment of a VR data processing method according to the present application.
  • FIG. 11 is a schematic structural diagram of a first embodiment of a VR data processing apparatus according to the present application.
  • FIG. 12 is a schematic structural diagram of a second embodiment of a VR data processing apparatus according to the present application.
  • The simulated environment is a computer-generated, real-time, dynamic, three-dimensional realistic image.
  • Perception means that ideal VR should provide all the kinds of perception a person has.
  • Natural skills refer to the movement of a person's head, eyes, gestures, or other human behaviors.
  • The computer processes data appropriate to the participant's actions, responds to user input in real time, and feeds VR content back to the user.
  • VR content is generally divided into panoramic video and computer real-time rendering.
  • panoramic video VR content is the initial stage of the VR experience. It is generally used for 360-degree on-demand video and live broadcast.
  • the video is recorded in advance or recorded and stitched in real time.
  • The viewer can rotate the head up, down, left, and right to watch the picture from different angles, but the viewer's position in the scene is fixed: wherever the camera is located, the viewer must be there. There is only a 3-degree-of-freedom experience (three rotational degrees of freedom).
  • the real-time rendering of computer-generated VR content is a further development of VR technology.
  • Computer real-time rendering can actively create user perspectives through Computer Graphics Virtual Reality (CG VR) technology, allowing users wearing terminal devices in virtual scenes. Walking around makes the user wearing the terminal device extremely immersive, and can obtain a 6 degree of freedom experience (3 types of translational freedom and 3 types of rotational freedom).
  • CG VR technology needs to render the VR content displayed by the VR terminal device in real time according to the user's movement in reality, so that the VR content presented to the user moves as the user moves in reality; therefore, CG VR technology needs VR positioning technology to capture the user's movement in real space.
  • the VR positioning technology can determine the user's movement mode by locating the real-time relative position of VR devices such as a headset and a handle in space, so that the entire screen can be moved according to the user's movement in the real world.
  • the current VR positioning technology generally includes two methods, Inside-out and Outside-in.
  • Inside-out refers to sensing external spatial information through sensors built into the device and determining the relative position of the VR device from that information, thereby determining how the VR device moves; Outside-in determines how the VR device moves by sensing the relative position of the VR device through external sensors.
  • FIG. 1 is a schematic structural diagram of a prior art VR system.
  • FIG. 1 shows how, in the existing CG VR technology, a server, a computer, and a headset together provide VR content to a user.
  • The headset and the computer are connected through a cable; the headset acquires the user's 6-degree-of-freedom motion posture data through an IMU sensor and sends the data to the computer through a USB cable.
  • the computer renders virtual data such as virtual games, modeling, and virtual social networks in real time based on the motion posture of the user wearing the headset, and transmits the virtual data to the headset through an HDMI cable.
  • After the headset receives the virtual data, it displays the virtual content corresponding to the virtual data to the user.
  • For resources stored locally, the computer directly renders them to generate virtual data; for online applications or games not stored on the computer, the computer needs to access the server through the Internet, request and download the resources, and then render the downloaded resources to generate virtual data.
  • the existing VR headset needs to be connected to the computer through a cable.
  • the computer can send high-fidelity images to the headset through the cable.
  • However, the hanging cable not only negatively affects immersion, but also restricts the user's form of activity and the size of the activity space.
  • In view of this, the present application provides, with reference to various embodiments, a virtual reality VR data processing method, device, and storage medium, so as to enrich the virtual content that VR presents to users.
  • FIG. 2 is a schematic structural diagram of a VR system applied in the present application.
  • The terminal device is connected to the network device through a Wireless-Fidelity (WiFi) or 5th-Generation (5G) network, and the terminal device can request VR content from the network device directly over the WiFi or 5G network.
  • the network device may be a cloud server.
  • The network device can provide computing, adaptive resource scheduling, rendering, and data processing functions, and adopts a cloud scheduling mechanism so that multiple users share server hardware resources (GPU, codec, etc.) in the cloud, thereby reducing user terminal investment and simplifying the structure of the VR system.
  • the terminal device may be any terminal having data processing and VR content display functions.
  • The terminal device may also be referred to as a terminal, user equipment (UE), mobile station (MS), mobile terminal (MT), or the like.
  • The terminal can be a VR headset, a mobile phone, a tablet, a computer with a wireless transceiver function, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, and the like.
  • the system can also include multiple terminal devices, and multiple terminal devices can be connected to network devices through WiFi or 5G networks at the same time.
  • Network devices can provide VR content to multiple terminal devices simultaneously.
  • The VR system provided in this embodiment can rely on a wireless network to place complex rendering and calculation in the cloud, greatly reducing the performance requirements on the terminal, lowering the cost of VR terminals, and accelerating the adoption of VR.
  • The terminal device only needs basic capabilities such as network connectivity, data processing, and screen display to obtain a good experience, which greatly lowers the threshold for user consumption.
  • the VR system provided by this embodiment can also implement real-time cloud rendering of interactive VR content, so that users can use VR services at any time and any place in streaming media, and multiple users share hardware resources, saving user investment.
  • the powerful cloud server has greatly improved computing and image processing capabilities.
  • the ubiquitous mobile broadband network provides a more natural business model.
  • FIG. 3 is a schematic flowchart of a first embodiment of a VR data processing method according to the present application. The method shown in FIG. 3 is implemented in the system shown in FIG. 2.
  • the VR data processing method shown in FIG. 3 includes:
  • S101: The terminal device sends a first geographic position to the network device, and the network device receives the first geographic position sent by the terminal device.
  • the terminal device needs to first obtain the first geographical position where the terminal device is located, and then send the first geographical position to the network device.
  • The geographic location in this application refers to a wide-area geographic location, which is different from the relative position of a VR terminal device obtained in the prior art.
  • the terminal device may obtain the wide-area geographic location coordinates where the terminal device is located in real-time by means of a built-in positioning module.
  • The positioning module may preferably be a Global Positioning System (GPS) chip based on a real-time kinematic (RTK) algorithm.
  • the terminal device is connected to the network device through the WiFi or 5G network in the system shown in FIG. 2.
  • The terminal device and the network device can communicate wirelessly with each other through the WiFi or 5G network, so the terminal device can send the first geographic position to the network device by wireless communication over the WiFi or 5G network, realizing IP transmission between the terminal device and the network device.
  • the terminal device may encrypt the first geographic location information and send it to the network device.
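As an illustrative sketch only (not part of the original disclosure), the terminal-side steps above can be pictured in Python: read the wide-area position from a positioning module, encrypt it, and hand it to the wireless link. The coordinate values, the shared key, and the toy XOR cipher are all hypothetical placeholders; the patent does not specify an encryption scheme.

```python
import json
from itertools import cycle

def read_gps_position():
    """Placeholder for the built-in RTK-GPS positioning module."""
    return {"lat": 39.9087, "lon": 116.3975}  # hypothetical wide-area coordinates

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher; stands in for whatever encryption the terminal uses."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def build_location_message(key: bytes) -> bytes:
    """Serialize the first geographic position and encrypt it before sending."""
    payload = json.dumps(read_gps_position()).encode("utf-8")
    return xor_encrypt(payload, key)

key = b"shared-secret"
msg = build_location_message(key)
# XOR is its own inverse, so the same key recovers the position on the network side
recovered = json.loads(xor_encrypt(msg, key).decode("utf-8"))
print(recovered)
```

Because the toy cipher is its own inverse, the snippet also shows the network device recovering the position; a real system would use a proper cipher and the WiFi/5G transport instead.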
  • S102: The network device determines first VR data corresponding to the first geographic location.
  • In the prior art, the geographic location of the terminal device is not considered, and users in different geographic locations see the same content through the terminal device.
  • For example, suppose that outside the window of the virtual cabin in the VR content is Tiananmen.
  • No matter where users are located, the virtual cabin window they see shows Tiananmen.
  • However, users in Shanghai may want to see Shanghai's Bund outside the window of the virtual cabin.
  • Because the VR content provided by the network device to the terminal device is limited to Tiananmen, the geographic-location-based content display desired by the user cannot be achieved.
  • In this embodiment, when the network device receives the first geographic position of the terminal device in S101, it determines the corresponding first VR data according to the first geographic position.
  • the first VR data corresponds uniquely to the first geographic location, and different geographic locations correspond to different VR data.
  • For example, when a user uses a terminal device in Beijing, the terminal device reports to the network device that its geographic location is Beijing, and the network device determines, according to the geographic location, that the virtual cabin window in the VR content displayed to the terminal device in Beijing shows Tiananmen; when a user uses a terminal device in Shanghai, the network device determines, according to the geographic location, that the virtual cabin window in the VR content displayed to the terminal device in Shanghai shows the Bund.
  • A possible implementation manner for the network device to determine the first VR data corresponding to the first geographic location in S102 may include: obtaining a first virtual material corresponding to the first geographic location, and rendering the first virtual material to obtain the first VR data.
  • the first VR data refers to the rendered video stream.
  • the network device sends the video stream obtained by rendering the virtual material to the terminal device, and the terminal device displays the received video stream.
  • If the first virtual material is a panoramic material, rendering the first virtual material in this embodiment to obtain the first VR data includes: rendering the panoramic material to obtain the first VR data.
  • For example, the first virtual material may be a virtual scene showing an ancient building as it was in ancient times, or a virtual scene of the underwater world.
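To make the panoramic path of S102 concrete, here is a minimal sketch, assuming a hypothetical table that maps wide-area locations to panoramic materials and a stub in place of the real GPU renderer (none of these names appear in the original text):

```python
# Hypothetical material table: wide-area location -> panoramic virtual material
PANORAMIC_MATERIALS = {
    "ancient building site": "virtual scene of the building in ancient times",
    "seaside city": "virtual scene of the underwater world",
}

def render(material: str) -> str:
    """Stand-in for the GPU render step that produces the first VR data (a video stream)."""
    return f"video-stream({material})"

def determine_first_vr_data(first_location: str) -> str:
    """S102: look up the first virtual material for the location and render it."""
    material = PANORAMIC_MATERIALS[first_location]
    return render(material)

print(determine_first_vr_data("ancient building site"))
```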
  • If the first virtual material is a specific material, rendering the first virtual material in this embodiment to obtain the first virtual reality VR data includes: obtaining the specific material corresponding to the first geographic location and the panoramic material requested by the terminal device; adding the specific material to the fixed material; and rendering the fixed material to which the specific material has been added to obtain the first VR data.
  • the VR content provided by the network device to the terminal device is a virtual cabin, and the virtual cabin is a fixed material in the VR content that will not be changed.
  • When the terminal device is in Beijing, the network device determines that the first virtual material is Tiananmen, and adds Tiananmen to the fixed material of the virtual cabin, so that outside the window of the virtual cabin is Tiananmen.
  • the network device renders the indoor scene added to Tiananmen to obtain the first VR data according to the posture data of the terminal device, and sends the first VR data to the terminal device, and the terminal device can display the content of the first VR data.
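The specific-material path can be sketched the same way: the fixed material (the virtual cabin) stays unchanged, and the location-specific material is inserted into its window before rendering. The dictionary layout and function names are illustrative assumptions, not the patented implementation:

```python
# Fixed material (the virtual cabin) never changes; only the window slot does.
FIXED_MATERIAL = {"scene": "virtual cabin", "window": "sea"}  # sea is the default view

def add_specific_material(fixed: dict, specific: str) -> dict:
    """Insert the location-specific material into the fixed material's window."""
    composed = dict(fixed)  # copy so the fixed material itself is not changed
    composed["window"] = specific
    return composed

def render(material: dict) -> str:
    """Stand-in for rendering the composed scene into the first VR data."""
    return f"video-stream({material['scene']}, window={material['window']})"

# Terminal is in Beijing, so the specific material is Tiananmen
first_vr_data = render(add_specific_material(FIXED_MATERIAL, "Tiananmen"))
print(first_vr_data)
```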
  • In a possible implementation, S102 may specifically include: obtaining the first virtual material corresponding to the first geographic location from a storage device, where the storage device stores virtual materials corresponding to N geographic locations, and N is a positive integer.
  • The network device provided in this embodiment may preferably be composed of two parts: one is a computing cloud for analyzing and calculating data, and the other is a storage cloud specifically for storing virtual materials corresponding to different geographic locations, so that the network device can obtain virtual materials corresponding to different geographic locations locally.
  • FIG. 4 is a schematic diagram of the geographic scope of the present application.
  • As shown in FIG. 4, when the terminal device is at location A, the network device adaptively renders the virtual material corresponding to location A; if location A has no corresponding virtual material, a certain threshold range can be set, and the virtual material of a second geographic location within that range is rendered.
  • The range of the second geographic location is greater than the range of the first geographic location, and the second geographic location includes the first geographic location.
  • If no virtual material for any geographic location exists within the threshold range, the regular fixed material is rendered.
  • Similarly, when the terminal device is at location B, rendering is adaptively based on the virtual material corresponding to location B; if location B has no corresponding virtual material, a certain threshold range can be set, and the virtual material of location B1 or location B2 within that range is rendered. If no geographically located material exists within the threshold range, the regular fixed material is rendered.
  • For example, assume that the VR content provided by the network device to the terminal device is a virtual cabin.
  • By default, the picture outside the window of the virtual cabin is the sea.
  • The virtual cabin is the fixed material in the VR content and will not be changed.
  • When the terminal device is in Beijing, the network device determines that the first virtual material is Tiananmen and adds Tiananmen to the fixed material of the virtual cabin, so that outside the window of the virtual cabin is Tiananmen. If the first virtual material corresponding to Beijing is not stored in the network device, the selection range of the geographic location is expanded, and the first virtual material is determined to be the Great Wall corresponding to China; the network device then adds the Great Wall to the fixed material of the virtual cabin, so that outside the window of the virtual cabin is the Great Wall. If the first virtual material corresponding to the geographic location China is not stored in the network device either, the first virtual material is determined to be the content in the fixed material, that is, the picture outside the window of the virtual cabin is the sea.
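The widening-range fallback described above (Beijing, then China, then the default sea view) amounts to trying successively larger geographic ranges against the storage cloud. A hedged sketch, with a hypothetical material store:

```python
# Hypothetical storage cloud: virtual materials keyed by geographic range,
# listed from narrowest to widest (Beijing lies inside China).
STORED_MATERIALS = {"Beijing": "Tiananmen", "China": "Great Wall"}
DEFAULT_MATERIAL = "sea"  # the content already in the fixed material

def lookup_material(location_ranges, store=STORED_MATERIALS):
    """Try each successively wider geographic range; fall back to the fixed default."""
    for loc in location_ranges:
        if loc in store:
            return store[loc]
    return DEFAULT_MATERIAL

# Material for Beijing exists -> Tiananmen
assert lookup_material(["Beijing", "China"]) == "Tiananmen"
# Beijing missing -> widen to China -> Great Wall
assert lookup_material(["Beijing", "China"], store={"China": "Great Wall"}) == "Great Wall"
# Nothing stored at all -> default sea view from the fixed material
assert lookup_material(["Beijing", "China"], store={}) == "sea"
```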
  • S103 The network device sends the first VR data to the terminal device, and the terminal device receives the first VR data sent by the network device.
  • S104 The terminal device displays content corresponding to the first VR data.
  • the network device and the terminal device are connected through a WiFi or 5G network.
  • the terminal device may send the movement posture data of the terminal device and the first geographic position to the network device through a WiFi or 5G network.
  • the first VR data may also be sent directly to the terminal device through a WiFi or 5G network.
  • the first VR data refers to a rendered video stream such as YUV / VGA data.
  • Specifically, the network device encodes and compresses the VR data and sends it to the terminal device via the WiFi or 5G network.
  • The terminal device decodes and decompresses the received data to display the VR content.
  • Decoding and decompression of the received first VR data that has undergone encoding and compression may be performed by a codec/compression/decompression module provided in the terminal device.
  • FIG. 5 is a schematic flowchart of a second embodiment of a VR data processing method according to the present application.
  • the embodiment shown in FIG. 5 is based on the embodiment shown in FIG. 3.
  • A manner in which the network device sends VR data to the terminal device through the WiFi or 5G network includes: S301: The network device encodes and compresses the first VR data. S302: The network device sends the encoded and compressed first VR data to the terminal device.
  • S303: The terminal device decodes and decompresses the first VR data that has undergone the encoding and compression processing.
  • S304 The terminal device displays the first VR data after decoding and decompression processing.
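Steps S301 through S304 amount to a compress-on-send, decompress-on-receive round trip. The sketch below uses `zlib` purely as a stand-in for the video codec (a real system would use a hardware video encoder such as H.264/H.265); the function names are hypothetical.

```python
import zlib

def network_encode(vr_frame: bytes) -> bytes:
    """S301: encode and compress the rendered frame before sending."""
    return zlib.compress(vr_frame)

def terminal_decode(payload: bytes) -> bytes:
    """S303: decode and decompress the received payload for display."""
    return zlib.decompress(payload)

frame = b"\x10\x80" * 4096                 # stand-in for raw YUV pixel data
payload = network_encode(frame)            # sent over WiFi/5G (S302)
assert terminal_decode(payload) == frame   # displayed intact (S304)
assert len(payload) < len(frame)           # compression shrank the payload
```

The round trip is lossless here; a real video codec would trade some fidelity for much higher compression ratios.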
  • FIG. 6 is a schematic flowchart of a third embodiment of a VR data processing method of the present application.
  • the embodiment shown in FIG. 6 is based on the embodiment shown in FIG. 3, and after S101 the method further includes: S201: Obtain preference information sent by the terminal device, where the preference information indicates whether to render the first virtual material according to a preset category; S202: If the preference information indicates rendering the first virtual material according to the preset category, render the first virtual material according to the preset category to obtain the first VR data.
  • in this embodiment, in addition to the network device determining the virtual material directly from the geographic location of the terminal device, whether to enable location-based rendering of the virtual material by preset category may also be decided according to the preferences of the terminal device's user; if it is not enabled, the regular fixed material is rendered.
  • under the user preference selection mechanism, the user may select a preferred scene category, or a specific rendered-object category, from options on the terminal.
  • FIG. 7 is a schematic diagram of a preset category in the preference information of the present application.
  • the terminal device can offer the user four location-based preset category options, generate preference information according to the user's selections, and send it to the network device, so that the network device determines whether to enable rendering of virtual material according to a given category.
  • Category 1 may mean that the weather differs by geographic location. When enabled, if the actual weather at location A is rain, a rainy sky and ground are adaptively rendered and rain can be seen outside the window of the virtual room; at location B, where the actual weather is snow, a snowy sky and ground are adaptively rendered, and the user can walk outside in the virtual room to feel the snowflakes fall. If not enabled, every location shows a fixed sunny scene.
  • Category 2 may mean that the pet in the room differs by geographic location. When enabled, a kitten runs around the room at location A, while at location B a puppy barks in the room; a user who dislikes pets can leave the category disabled, and no pet appears.
  • Category three can refer to a painting in the room. If the adaptive mechanism based on geographic location is not enabled, this painting will never change. After it is enabled, this painting will show different pictures depending on the location.
  • Category 4 may be the scenery outside the room. If Category 4 is enabled, the scenery outside the room switches with the location, for example to a beach by the sea at location A and to a waterfall in the mountains at location B; if not enabled, only the regular fixed scene is shown.
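The four enable/disable options above can be represented as a small preference record that the terminal serializes and sends to the network device. The field names and JSON encoding below are assumptions for illustration, not taken from the patent.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class PreferenceInfo:
    """One flag per preset category; True enables location-based rendering."""
    weather: bool = False    # category 1: weather outside the window
    pet: bool = False        # category 2: location-dependent pet
    painting: bool = False   # category 3: location-dependent painting
    scenery: bool = False    # category 4: scenery outside the room

# Terminal side: the user enables weather and scenery, the record is serialized.
prefs = PreferenceInfo(weather=True, scenery=True)
message = json.dumps(asdict(prefs))

# Network-device side: the record is reconstructed and consulted per category.
received = PreferenceInfo(**json.loads(message))
assert received.weather and received.scenery
assert not received.pet and not received.painting
```

With per-category flags, the network device can fall back to the fixed material independently for each category rather than all-or-nothing.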
  • FIG. 8 is a schematic structural diagram of an embodiment of a VR data processing system of the present application.
  • the system shown in FIG. 8 may be used to execute the VR data processing method in the foregoing embodiments.
  • the terminal device in the foregoing embodiments is taken as a headset for example.
  • in addition to the prior-art inertial measurement unit (IMU) sensor and inside-out camera, the headset in this embodiment also includes a GPS positioning module and a codec/compression-decompression module.
  • the positioning module added to the headset (for example, GPS based on the RTK algorithm) is used to obtain wide-area geographic coordinates.
  • the codec / compression / decompression module is used to process the video stream.
  • the network device in the foregoing embodiment is taken as an example of a cloud server.
  • the cloud server includes a computing cloud and a storage cloud.
  • the computing cloud is used to analyze and calculate data.
  • the storage cloud provides materials based on wide-area geographic locations and stores the virtual materials corresponding to different geographic locations.
  • the computing cloud processes the motion data, adaptively requests virtual materials based on the wide-area geographic location, and renders and encodes/compresses them; the headset and the computing cloud communicate over IP.
  • the headset and the cloud server in this embodiment may adopt a client-server mode to implement multi-user sharing of host resources.
  • FIG. 9 is a schematic structural diagram of an embodiment of a VR data processing system according to the present application.
  • the system shown in FIG. 9 shows the key modules and the processing flow of the system in FIG. 8 in more detail.
  • 101 is the head-mounted display terminal, which integrates the IMU, the inside-out camera, and the positioning module, and can simultaneously obtain the relative position and the actual geographic location of the head-mounted display.
  • the IMU obtains 3 DOF attitude data of the headset;
  • the Inside-out camera has no external sensors and can move freely in space to obtain 6 DOF motion data;
  • the GPS positioning module obtains the latitude and longitude coordinate data of the actual geographical position.
  • 102 is the computing cloud, which decodes/decompresses the position and attitude data, performs coordinate calculation and matching, applies the location-based adaptive mechanism and the user preference selection mechanism to send resource requests to the storage cloud, renders the resource data retrieved by the storage cloud, and encodes/compresses the rendered raw data.
  • 103 is the storage cloud, which provides applications and materials based on wide-area geographic locations, including each platform's virtual Home and third-party applications such as panoramic video, interactive experiences, and interactive games, and provides fixed rendering materials as well as scene-construction material assets based on geographic latitude-longitude coordinates, including models, textures, and so on.
  • 104: the 3 DOF attitude data obtained by the IMU on the headset, the 6 DOF motion data obtained by the inside-out camera, and the latitude-longitude coordinate data obtained by the positioning module are, after A/D conversion and encoding/compression, uploaded to the computing cloud over a network such as 5G or WiFi.
  • 105: the computing cloud sends a resource request to the storage cloud based on the geographic latitude-longitude coordinates, the motion and attitude data, and the user's preference selections.
  • 106: the third-party storage cloud retrieves material asset data, including models, textures, and so on.
  • 107: the computing cloud renders and encodes/compresses the resource data and then sends it to the headset over a network such as 5G or WiFi.
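The uplink in 104 bundles the IMU attitude, the inside-out motion data, and the GPS coordinates into a single message before encoding/compression and upload. The sketch below shows one plausible serialization; the field names and `build_uplink` function are hypothetical.

```python
import json

def build_uplink(attitude_3dof, motion_6dof, lat, lon):
    """Bundle IMU attitude, inside-out motion, and GPS coordinates
    into one uplink message for the computing cloud."""
    return json.dumps({
        "attitude": attitude_3dof,             # (yaw, pitch, roll) from the IMU
        "motion": motion_6dof,                 # (x, y, z, yaw, pitch, roll)
        "location": {"lat": lat, "lon": lon},  # from the GPS positioning module
    })

# Coordinates near Tiananmen, for illustration only.
msg = build_uplink((10.0, -5.0, 0.0),
                   (1.2, 0.0, 0.3, 10.0, -5.0, 0.0),
                   39.9087, 116.3975)
decoded = json.loads(msg)
assert decoded["location"]["lat"] == 39.9087
```

Keeping pose and wide-area location in one message lets the computing cloud match the resource request (location) and the rendered viewpoint (pose) to the same instant in time.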
  • FIG. 10 is a schematic flowchart of a fourth embodiment of a VR data processing method according to the present application.
  • FIG. 10 shows a possible embodiment combining the VR data method in the foregoing embodiments.
  • this embodiment includes: 201: Start; put on the headset and enter the virtual Home or a third-party application.
  • 202: Motion interaction refreshes in real time: head rotation, controller changes or local movement, action-command interaction, or wide-area geographic location changes. 203: Upload the position information (latitude-longitude data, 6 DOF motion data, and 3 DOF attitude data) to the computing cloud, which decodes the position data and performs coordinate calculation and matching.
  • 204: User preference selection mechanism: the user may select a preferred scene category, or a specific rendered-object category, from options on the terminal.
  • 205: The computing cloud sends a resource request to the storage cloud and, based on the geographic location threshold, determines whether material based on geographic latitude-longitude exists. 206: If no latitude-longitude-based material exists within the threshold range, the storage cloud retrieves fixed material based on the attitude and motion data.
  • 207: If latitude-longitude-based material exists, the storage cloud retrieves, based on the attitude and motion data, either all latitude-longitude-based material or partly latitude-longitude-based and partly fixed material; if user-preference material information is available, selection and rendering follow that information. 208: Determine whether the information carried by the material needs to be superimposed onto real geographic coordinates, that is, whether the coordinate information carried by the material is consistent with the real geographic coordinates.
  • 209: Coordinate transformation superimposes the virtual scene onto the correct position in real space.
  • 210: The computing cloud renders the picture within the field of view, encodes and compresses the raw YUV/VGA data, and sends it to the headset. 211: After receiving the data and decoding/decompressing it, the headset displays the image.
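Step 205's geographic location threshold test, deciding whether any latitude-longitude-based material lies close enough to the reported position, could be implemented as a great-circle distance check. The haversine sketch below and its 50 km threshold value are assumptions for illustration, not details from the patent.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def within_threshold(user, material, threshold_km=50.0):
    """True if the material's anchor point is inside the threshold range."""
    return haversine_km(*user, *material) <= threshold_km

tiananmen = (39.9087, 116.3975)
beijing_user = (39.90, 116.40)        # a few km away: material applies (step 207)
shanghai_user = (31.2304, 121.4737)   # far away: fall back to fixed material (step 206)
assert within_threshold(beijing_user, tiananmen)
assert not within_threshold(shanghai_user, tiananmen)
```

Widening `threshold_km` corresponds to expanding the selection range from city to country scope, as in the Beijing/Great Wall example.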
  • the VR data processing method, apparatus, and storage medium provided in this embodiment implement geographically differentiated processing of virtual content.
  • a GPS positioning module and a codec/compression-decompression module are added to the headset; motion data is uploaded to the cloud, computation and rendering are moved from the local device to the cloud, and a location-based adaptive mechanism and a user preference selection mechanism are added. On the content side, the storage cloud adds applications and materials based on wide-area geographic location, which enriches the virtual content that VR presents to users.
  • compared with traditional solutions, the gain lies in differentiated content presentation: in different places and different cities the rendering materials differ, so the presented VR content differs.
  • the solution also implements a cloud-based adaptive rendering mechanism, moving complex local image processing to the cloud and enabling real-time cloud rendering of interactive VR content. Users can consume VR services anytime and anywhere as streaming media, and multiple users share hardware resources, saving user investment.
  • unlike current fixed rendering materials, the cloud in this application adaptively delivers geographically differentiated rendering combinations to the headset based on wide-area geographic location data. This is a useful reference for designing applications that use a cloud rendering mechanism with location-adaptive virtual content.
  • this application also proposes a cloud virtualization architecture, a cloud scheduling mechanism integrating computing, adaptive resource scheduling, adaptive rendering, and encoding/compression, realizing multi-user sharing of host hardware resources and saving user terminal investment.
  • the application also has great commercial value for wide-area location-based scenarios such as urban planning, architectural design, tourism, and education.
  • differentiating content by geographic location can add interest.
  • Examples of differentiated gain scenarios: 1) For a virtual cabin, once the content is differentiated, different virtual scenes can be rendered in different cities; in Shanghai, for example, Microsoft's cliff house switches to a house beside the Bund, and interior decoration such as a painting on the wall or a pet can also switch adaptively by geographic location. 2) For panoramic-video applications, the cabin can likewise switch adaptively by geographic location, and the video content presented differs by wide-area geographic location. 3) For interactive experiences such as architectural-design applications, walking to a place and putting on the headset shows the planning design for that location; in tourism or education applications, walking to a historical site and putting on the headset recreates a historical scene modeled on the actual wide-area location. 4) For interactive games, different props can be obtained in different regions, and the plot can unfold differently at different places, adding interest.
  • FIG. 11 is a schematic structural diagram of a first embodiment of a VR data processing apparatus according to the present application.
  • the VR data processing apparatus provided in this embodiment may be a network device in any of the foregoing embodiments.
  • the VR data processing apparatus provided in this embodiment includes an obtaining module 1101, a determining module 1102, and a sending module 1103.
  • the obtaining module 1101 is configured to obtain a first geographical position of the terminal device;
  • the determining module 1102 is configured to determine first virtual reality VR data corresponding to the first geographical position;
  • the sending module 1103 is configured to send the first VR data to the terminal device, where the content corresponding to the first VR data is used for display on the terminal device.
  • the VR data processing apparatus provided in this embodiment is configured to execute the VR data processing method shown in FIG. 3; its specific implementation and principle are the same and are not repeated here.
  • the determining module 1102 is specifically configured to obtain a first virtual material corresponding to the first geographic location; render the first virtual material to obtain first VR data.
  • the obtaining module 1101 is further configured to obtain preference information sent by the terminal device, where the preference information is used to indicate whether to render the first virtual material according to a preset category;
  • the determining module 1102 is specifically configured to: if the preference information indicates that the first virtual material is rendered according to a preset category, render the first virtual material according to the preset category to obtain the first VR data.
  • the determining module 1102 is specifically configured to obtain the first virtual material corresponding to the first geographical location from a storage device, where the storage device stores virtual materials corresponding to N geographical locations, where N is a positive integer.
  • the determining module 1102 is specifically configured to: if no virtual material corresponding to the first geographic location exists in the storage device, obtain the second virtual material corresponding to the second geographic location from the storage device, where the range of the second geographic location is larger than the range of the first geographic location.
  • the first virtual material is a panoramic material; the determining module 1102 is specifically configured to render the panoramic material to obtain the first VR data.
  • when the first virtual material is a specific material, the determining module 1102 is specifically configured to obtain the specific material corresponding to the first geographic location and the panoramic material requested by the terminal device, add the specific material to the fixed material, and render the fixed material with the specific material added to obtain the first VR data.
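The add-then-render sequence this module performs (fetch the specific material, merge it into the fixed material, render the result) can be pictured as a scene-dictionary update. The scene structure below is hypothetical and purely illustrative.

```python
def compose_scene(fixed_material: dict, specific_material: dict) -> dict:
    """Merge the location-specific material into a copy of the fixed
    scene, leaving the original fixed material untouched for reuse."""
    scene = dict(fixed_material)     # copy so the fixed material is preserved
    scene.update(specific_material)  # location-specific entries override
    return scene

cabin = {"room": "virtual cabin", "window_view": "sea"}   # fixed material
beijing = {"window_view": "Tiananmen"}                    # specific material
scene = compose_scene(cabin, beijing)
assert scene["window_view"] == "Tiananmen"   # window now shows Tiananmen
assert cabin["window_view"] == "sea"         # fixed material unchanged
```

Copying before updating matters here: the fixed material is shared across users and locations, so it must not be mutated by any single composition.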
  • the sending module 1103 is specifically configured to encode and compress the first VR data; and send the encoded and compressed first VR data to the terminal device through a wireless communication method.
  • the VR data processing apparatus provided in this embodiment is configured to execute the VR data processing method shown in the foregoing embodiments, and the specific implementation manners and principles thereof are the same, and details are not described herein again.
  • FIG. 12 is a schematic structural diagram of a second embodiment of a VR data processing apparatus according to the present application.
  • the VR data processing apparatus provided in this embodiment may be a terminal device in any of the foregoing embodiments.
  • the VR data processing apparatus provided in this embodiment includes: an obtaining module 1201, a sending module 1202, a receiving module 1203, and a processing module 1204.
  • the obtaining module 1201 is configured to obtain a first geographic location of the terminal device;
  • the sending module 1202 is configured to send the first geographic location to the network device, where the first geographic location is used by the network device to determine the first virtual reality VR data corresponding to the first location.
  • the receiving module 1203 is configured to receive the first VR data sent by the network device;
  • the processing module 1204 is configured to display the content corresponding to the first VR data.
  • the VR data processing apparatus provided in this embodiment is configured to execute the VR data processing method shown in FIG. 3; its specific implementation and principle are the same and are not repeated here.
  • the sending module 1202 is further configured to send preference information to the network device, where the preference information indicates, after the network device obtains the first virtual material corresponding to the geographic location, whether the network device should render the first virtual material according to a preset category.
  • the receiving module 1203 is specifically configured to receive the encoded and compressed first VR data sent by the network device through a wireless communication method.
  • the processing module 1204 is specifically configured to decode and decompress the first VR data that has been encoded and compressed; and display the first VR data that has been decoded and decompressed.
  • the VR data processing apparatus provided in this embodiment is configured to execute the VR data processing method shown in the foregoing embodiments, and the specific implementation manners and principles thereof are the same, and details are not described herein again.
  • the present application further provides a VR data processing system, including N network devices according to any of the foregoing embodiments of FIG. 11 and M terminal devices according to any of the foregoing embodiments of FIG. 12, where M and N are positive integers.
  • the present application also provides a VR data processing device, including: a processor and a memory, where the memory is used to store a program; and the processor is used to call the program stored in the memory to execute the VR data processing method according to any one of the foregoing embodiments.
  • the present application also provides a computer-readable storage medium.
  • the computer-readable storage medium stores program code, and when the program code is executed, the VR data processing method as in any one of the foregoing embodiments is performed.
  • the present application also provides a computer program product.
  • the program code included in the computer program product is executed by a processor, the VR data processing method according to any one of the foregoing embodiments is implemented.
  • a person of ordinary skill in the art may understand that all or part of the steps of the foregoing method embodiments may be implemented by a program instructing related hardware.
  • the aforementioned program may be stored in a computer-readable storage medium.
  • when the program is executed, the steps of the foregoing method embodiments are performed; the foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Processing Or Creating Images (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

This application provides a virtual reality (VR) data processing method, apparatus, and storage medium. A terminal device obtains the first geographic location where it is situated and sends it to a network device; the network device determines the corresponding first VR data according to the first geographic location and sends it to the terminal device; after receiving the first VR data determined by the network device according to the first geographic location, the terminal device displays the content corresponding to the first VR data. The VR data processing method, apparatus, and storage medium provided by this application enable a terminal device at different geographic locations to display different VR data corresponding to the terminal device's first location, thereby enriching the virtual content that VR provides to the user's terminal device.

Description

Virtual reality (VR) data processing method, apparatus, and storage medium. Technical Field
This application relates to the field of virtual reality technology, and in particular to a virtual reality (VR) data processing method, apparatus, and system.
Background
Virtual reality (VR) is a display technology that presents virtual content through computer simulation, and in recent years it has been among the most sought-after display technologies for vendors and users alike. Computer graphics virtual reality (CG VR) is a further development of VR technology: CG VR actively creates the user's viewpoint and allows a user wearing a terminal device to walk freely within a virtual scene, giving the user a strong sense of immersion and a six-degree-of-freedom (6-DoF) experience.
In existing CG VR technology, a computer renders virtual data such as games, modeling, and virtual social content in real time according to the motion posture of the terminal device, and transmits the virtual data to the terminal device over a cable. After receiving the virtual data, the terminal device displays the corresponding virtual content to the user.
With the prior art, the materials the computer uses to render virtual content are fixed, so the virtual data it transmits to the terminal device is also fixed, and the virtual content presented to the user is therefore rather monotonous.
Summary
This application, through multiple embodiments, provides a virtual reality VR data processing method, apparatus, and storage medium, so as to enrich the virtual content that VR presents to users.
A first aspect of this application provides a virtual reality VR data processing method, including:
obtaining a first geographic location of a terminal device;
determining first virtual reality VR data corresponding to the first geographic location;
sending the first VR data to the terminal device, where content corresponding to the first VR data is used for display on the terminal device.
In an embodiment of the first aspect, determining the first VR data corresponding to the first geographic location includes:
obtaining a first virtual material corresponding to the first geographic location;
rendering the first virtual material to obtain the first VR data.
In an embodiment of the first aspect, the method further includes:
obtaining preference information sent by the terminal device, where the preference information indicates whether to render the first virtual material according to a preset category;
rendering the first virtual material to obtain the first VR data then includes:
if the preference information indicates rendering the first virtual material according to the preset category, rendering the first virtual material according to the preset category to obtain the first VR data.
In an embodiment of the first aspect, determining the first virtual material corresponding to the first geographic location includes:
obtaining the first virtual material corresponding to the first geographic location from a storage device, where the storage device stores virtual materials corresponding to N geographic locations, N being a positive integer.
In an embodiment of the first aspect, obtaining the first virtual material corresponding to the first geographic location from the storage device includes:
if no virtual material corresponding to the first geographic location exists in the storage device, obtaining a second virtual material corresponding to a second geographic location from the storage device, where the range of the second geographic location is larger than the range of the first geographic location.
In an embodiment of the first aspect, the first virtual material is a panoramic material;
rendering the first virtual material to obtain the first VR data then includes:
rendering the panoramic material to obtain the first VR data.
In an embodiment of the first aspect, the first virtual material is a specific material;
rendering the first virtual material to obtain the first virtual reality VR data then includes:
obtaining the specific material corresponding to the first geographic location and the panoramic material requested by the terminal device;
adding the specific material to the fixed material;
rendering the fixed material with the specific material added, to obtain the first VR data.
In an embodiment of the first aspect, sending the first VR data to the terminal device includes:
encoding and compressing the first VR data;
sending the encoded and compressed first VR data to the terminal device through wireless communication.
In summary, in the VR data processing method provided by the first aspect of this application, the first geographic location of the terminal is obtained, and the corresponding first VR data is determined according to the first geographic location and sent to the terminal device, so that the terminal device is provided with VR data corresponding to the first location where it is situated. A terminal device at different geographic locations can thus display different VR data corresponding to its first location, which enriches the virtual content that VR provides to the user's terminal device.
A second aspect of this application provides a virtual reality VR data processing method, including:
obtaining a first geographic location of a terminal device;
sending the first geographic location to a network device, where the first geographic location is used by the network device to determine first virtual reality VR data corresponding to the first location;
receiving the first VR data sent by the network device;
displaying content corresponding to the first VR data.
In an embodiment of the second aspect, the method further includes:
sending preference information to the network device, where the preference information indicates, after the network device obtains the first virtual material corresponding to the geographic location, whether the network device should render the first virtual material according to a preset category.
In an embodiment of the second aspect, receiving the first VR data sent by the network device includes:
receiving, through wireless communication, the encoded and compressed first VR data sent by the network device.
In an embodiment of the second aspect, displaying the content corresponding to the first VR data includes:
decoding and decompressing the encoded and compressed first VR data;
displaying the decoded and decompressed first VR data.
In summary, in the VR data processing method provided by the second aspect of this application, the first geographic location of the terminal device is obtained and sent to the network device, the first VR data determined by the network device according to the first geographic location is received, and the content corresponding to the first VR data is displayed. A terminal device at different geographic locations can thus display different VR data corresponding to its first location, which enriches the virtual content that VR provides to the user's terminal device.
A third aspect of this application provides a virtual reality VR data processing apparatus, including:
an obtaining module, configured to obtain a first geographic location of a terminal device;
a determining module, configured to determine first virtual reality VR data corresponding to the first geographic location;
a sending module, configured to send the first VR data to the terminal device, where content corresponding to the first VR data is used for display on the terminal device.
In an embodiment of the third aspect, the determining module is specifically configured to:
obtain a first virtual material corresponding to the first geographic location;
and render the first virtual material to obtain the first VR data.
In an embodiment of the third aspect, the obtaining module is further configured to obtain preference information sent by the terminal device, where the preference information indicates whether to render the first virtual material according to a preset category;
the determining module is specifically configured to, if the preference information indicates rendering the first virtual material according to the preset category, render the first virtual material according to the preset category to obtain the first VR data.
In an embodiment of the third aspect, the determining module is specifically configured to:
obtain the first virtual material corresponding to the first geographic location from a storage device, where the storage device stores virtual materials corresponding to N geographic locations, N being a positive integer.
In an embodiment of the third aspect, the determining module is specifically configured to, if no virtual material corresponding to the first geographic location exists in the storage device, obtain a second virtual material corresponding to a second geographic location from the storage device, where the range of the second geographic location is larger than the range of the first geographic location.
In an embodiment of the third aspect, the first virtual material is a panoramic material;
the determining module is then specifically configured to render the panoramic material to obtain the first VR data.
In an embodiment of the third aspect, the first virtual material is a specific material;
the determining module is then specifically configured to obtain the specific material corresponding to the first geographic location and the panoramic material requested by the terminal device;
add the specific material to the fixed material;
and render the fixed material with the specific material added, to obtain the first VR data.
In an embodiment of the third aspect, the sending module is specifically configured to:
encode and compress the first VR data;
and send the encoded and compressed first VR data to the terminal device through wireless communication.
In summary, in the VR data processing apparatus provided by the third aspect of this application, the first geographic location of the terminal is obtained, and the corresponding first VR data is determined according to the first geographic location and sent to the terminal device, so that the terminal device is provided with VR data corresponding to the first location where it is situated, which enriches the virtual content that VR provides to the user's terminal device.
A fourth aspect of this application provides a virtual reality VR data processing apparatus, including:
an obtaining module, configured to obtain a first geographic location of a terminal device;
a sending module, configured to send the first geographic location to a network device, where the first geographic location is used by the network device to determine first virtual reality VR data corresponding to the first location;
a receiving module, configured to receive the first VR data sent by the network device;
a processing module, configured to display content corresponding to the first VR data.
In an embodiment of the fourth aspect, the sending module is further configured to send preference information to the network device, where the preference information indicates, after the network device obtains the first virtual material corresponding to the geographic location, whether the network device should render the first virtual material according to a preset category.
In an embodiment of the fourth aspect, the receiving module is specifically configured to receive, through wireless communication, the encoded and compressed first VR data sent by the network device.
In an embodiment of the fourth aspect, the processing module is specifically configured to decode and decompress the encoded and compressed first VR data;
and display the decoded and decompressed first VR data.
In summary, in the VR data processing apparatus provided by the fourth aspect of this application, the first geographic location of the terminal device is obtained and sent to the network device, the first VR data determined by the network device according to the first geographic location is received, and the content corresponding to the first VR data is displayed. A terminal device at different geographic locations can thus display different VR data corresponding to its first location, which enriches the virtual content that VR provides to the user's terminal device.
In a fifth aspect, an embodiment of this application provides a virtual reality VR data processing apparatus, including a processor and a memory; the memory is configured to store a program, and the processor is configured to call the program stored in the memory to execute the virtual reality VR data processing method according to any implementation of the first aspect of this application.
In a sixth aspect, an embodiment of this application provides a virtual reality VR data processing apparatus, including a processor and a memory; the memory is configured to store a program, and the processor is configured to call the program stored in the memory to execute the virtual reality VR data processing method according to any implementation of the second aspect of this application.
In a seventh aspect, an embodiment of this application provides a computer-readable storage medium storing program code; when the program code is executed, the virtual reality VR data processing method according to any implementation of the first aspect of this application is performed.
In an eighth aspect, an embodiment of this application provides a computer-readable storage medium storing program code; when the program code is executed, the virtual reality VR data processing method according to any implementation of the second aspect of this application is performed.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of a prior-art VR system;
FIG. 2 is a schematic structural diagram of the VR system to which this application applies;
FIG. 3 is a schematic flowchart of a first embodiment of the VR data processing method of this application;
FIG. 4 is a schematic diagram of geographic location ranges in this application;
FIG. 5 is a schematic flowchart of a second embodiment of the VR data processing method of this application;
FIG. 6 is a schematic flowchart of a third embodiment of the VR data processing method of this application;
FIG. 7 is a schematic diagram of preset categories in the preference information of this application;
FIG. 8 is a schematic structural diagram of an embodiment of the VR data processing system of this application;
FIG. 9 is a schematic structural diagram of an embodiment of the VR data processing system of this application;
FIG. 10 is a schematic flowchart of a fourth embodiment of the VR data processing method of this application;
FIG. 11 is a schematic structural diagram of a first embodiment of the VR data processing apparatus of this application;
FIG. 12 is a schematic structural diagram of a second embodiment of the VR data processing apparatus of this application.
Detailed Description
Virtual reality (VR) is a display technology that presents virtual content through computer simulation, and in recent years it has been among the most sought-after display technologies for vendors and users alike. As an important branch of simulation technology, VR combines simulation with computer graphics, human-machine interface technology, multimedia technology, sensing technology, networking, and other technologies, and mainly involves the simulated environment, perception, natural skills, and sensing devices. The simulated environment is a computer-generated, real-time, dynamic, three-dimensional, lifelike image. Perception means that an ideal VR system should possess all the kinds of perception a person has. Natural skills refer to head rotation, eye movement, gestures, and other human actions; the computer processes data matching the participant's actions, responds to the user's input in real time, and feeds the VR content back to the user.
VR content generally falls into two classes: panoramic video and real-time computer rendering. Panoramic-video VR content is the elementary stage of the VR experience and is generally used for 360-degree live-action video on demand and live streaming; the video is recorded in advance, or recorded and stitched in real time, and the viewer can turn the head up, down, left, and right to watch the scene from different angles. However, the viewer's position in the scene is fixed: the viewer must be wherever the camera was, and the experience offers only 3 degrees of freedom (three types of rotational degrees of freedom). Real-time computer-rendered VR content is a further development of VR technology: through computer graphics virtual reality (CG VR) technology it actively creates the user's viewpoint and allows a user wearing a terminal device to walk freely in the virtual scene, giving the user a strong sense of immersion and a 6-degree-of-freedom experience (three types of translational and three types of rotational degrees of freedom).
Because CG VR must render the VR content displayed by the VR terminal device in real time according to how the user moves in reality, so that the presented VR content moves the same way the user does, CG VR needs VR positioning technology to capture the user's movement in real space. Specifically, VR positioning determines how the user moves by locating, in real time, the relative position in space of VR devices such as the headset and controllers, so that the whole picture can move as the user moves in the real world. More specifically, current VR positioning generally uses one of two approaches, inside-out and outside-in. Inside-out uses built-in sensors to perceive the spatial information outside the device, determines the relative position of the VR device from that external spatial information, and thereby determines how the device moves; outside-in uses external sensors to perceive the relative position of the VR device and thereby determine how it moves.
For example, FIG. 1 is a schematic structural diagram of a prior-art VR system. FIG. 1 shows how, in existing CG VR technology, a server, a computer, and a headset jointly provide VR content to a user. Specifically, the headset and the computer are connected by cables; the headset obtains the user's 6-DoF motion posture data through an IMU sensor and sends it to the computer over a USB cable. The computer renders virtual data such as games, modeling, and virtual social content in real time according to the motion posture of the user wearing the headset, and transmits the virtual data to the headset over an HDMI cable. After the headset receives the virtual data, it displays the corresponding virtual content to the user. For local applications or games present on the computer, the computer renders its stored resources directly to generate the virtual data; for online applications or games not stored on the computer, the computer must access the server over the Internet, request and download the resources, and then render the stored resources to generate the virtual data.
In existing VR systems, however, because the materials the computer uses to render virtual content are fixed, the virtual data transmitted to the terminal device is also fixed, and the virtual content presented to the user is rather monotonous. In particular, the VR content reflects no wide-area or local-area differentiation: the presented content does not change with geographic location. In addition, the headset of an existing VR system must be connected to the computer by cables so that high-fidelity images can be sent to the headset; the hanging cables not only hurt immersion but also constrain the form of activity and the size of the usable space. Even when a wireless solution removes the cables, typical interactive VR services, whether local single-player or multi-player VR games or networked VR games, all use local rendering and require a terminal with strong computing and rendering capability to deliver a good gaming experience. The high price of high-end computers and consoles has limited the spread of VR services among ordinary users.
In summary, this application, through multiple embodiments, provides a virtual reality VR data processing method, apparatus, and storage medium, so as to enrich the virtual content that VR presents to users.
Specifically, FIG. 2 is a schematic structural diagram of the VR system to which this application applies. In the system shown in FIG. 2, the terminal device connects to the network device through a Wireless Fidelity (WiFi) or fifth-generation mobile communication (5G) network, and the terminal device can request VR content from the network device directly over the WiFi or 5G network.
The network device may be a cloud server. The network device provides computing, adaptive resource scheduling, rendering, and data processing functions and uses a cloud scheduling mechanism, so that multiple users share the server's hardware resources (GPU, codecs, and so on) in the cloud, which saves user terminal investment and simplifies the structure of the VR system.
The terminal device may be any terminal capable of data processing and VR content display. A terminal may also be called a terminal, user equipment (UE), mobile station (MS), mobile terminal (MT), and so on. The terminal may be a VR headset, a mobile phone, a tablet (Pad), a computer with wireless transceiver capability, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, and so on.
It can be understood that the system may also include multiple terminal devices, all of which may connect to the network device through the WiFi or 5G network at the same time, and the network device may provide VR content to multiple terminal devices simultaneously.
Relying on the wireless network, the VR system provided in this embodiment moves complex rendering and computation to the cloud, greatly lowering terminal performance requirements, reducing VR terminal cost, and accelerating the adoption of VR. The terminal device needs only basic capabilities such as network connectivity, data processing, and picture display, yet can still deliver a good experience, greatly lowering the consumption threshold for users. The system also enables real-time cloud rendering of interactive VR content, so users can consume VR services anytime and anywhere as streaming media, and multiple users share hardware resources, saving user investment. Powerful cloud servers greatly increase computing and image processing capability, the ubiquitous mobile broadband network offers a more natural business model, and with the cloud's fast iteration capability content developers can publish content faster and better. Hardware vendors no longer need to put complex computation on the client, so terminal costs fall; end users no longer need to buy a console, so acquisition costs fall; and users can experience VR simply as streaming media, as easily as watching video on a phone.
FIG. 3 is a schematic flowchart of the first embodiment of the VR data processing method of this application; the method shown in FIG. 3 is implemented in the system shown in FIG. 2. The VR data processing method shown in FIG. 3 includes:
S101: The terminal device sends a first geographic location to the network device, and the network device receives the first geographic location sent by the terminal device.
Specifically, in S101 the terminal device must first obtain the first geographic location where it is situated and then send it to the network device. The geographic location in this application means wide-area geographic positioning, unlike the relative positioning of VR terminal devices in the prior art.
Optionally, the terminal device may obtain its wide-area geographic coordinates in real time through a built-in positioning module. The positioning module may preferably be a Global Positioning System (GPS) chip based on the real-time kinematic (RTK) algorithm.
In the system shown in FIG. 2, the terminal device connects to the network device through the WiFi or 5G network, and the two can communicate wirelessly over that network; the terminal device can therefore send the first geographic location to the network device wirelessly over the WiFi or 5G network, implementing IP-based transmission between the terminal device and the network device. To keep the terminal device's geographic location data secure, the terminal device may encrypt the first geographic location information before sending it to the network device.
S102: The network device determines the first VR data corresponding to the first geographic location.
Specifically, in the prior art the VR content the network device determines for the terminal device is rather monotonous and does not take the terminal device's geographic location into account, so users at different geographic locations see the same content through their terminal devices. For example, in the VR system provided by company A in Beijing, the view outside the window of the virtual cabin in the VR content is Tiananmen. Whether a user in Beijing or a user in Shanghai watches the VR content with a VR terminal device, the view outside the virtual cabin's window is Tiananmen. A user in Shanghai, however, may wish to see the Shanghai Bund outside the cabin window; because the VR content the network device provides is limited to Tiananmen, the location-based content display the user wants cannot be achieved.
Therefore, in this step, when the network device receives the first geographic location of the terminal device in S101, it determines the corresponding first VR data according to the first geographic location. The first VR data corresponds uniquely to the first geographic location; different geographic locations correspond to different VR data. In the example above, when the user uses the terminal device in Beijing, the terminal device reports to the network device that its geographic location is Beijing, and the network device accordingly determines that the view outside the virtual cabin's window in the VR content displayed to that terminal device is Tiananmen; when the user uses the terminal device in Shanghai, the network device determines from the location that the view outside the cabin window is the Bund.
Further, in the steps above, one possible implementation of S102 in which the network device determines the first VR data corresponding to the first location may include: obtaining a first virtual material corresponding to the first geographic location, and rendering the first virtual material to obtain the first VR data. In this case the first VR data is the rendered video stream: the network device sends the video stream obtained by rendering the virtual material to the terminal device, and the terminal device displays the received video stream.
In particular, when the first virtual material is a panoramic material, rendering the first virtual material to obtain the first VR data in this embodiment includes rendering the panoramic material to obtain the first VR data. For example, when the first geographic location sent by the terminal device is an ancient building, the first virtual material is a virtual scene of that building in ancient times; when the first geographic location sent by the terminal device is the seaside, the first virtual material is a virtual undersea-world scene. After obtaining the virtual panoramic material of the ancient building or the undersea world, the network device renders the virtual scene according to the terminal device's attitude data to obtain the first VR data, and after the first VR data is sent to the terminal device, the terminal device can display its content.
When the first virtual material is a specific material, rendering the first virtual material to obtain the first virtual reality VR data in this embodiment includes: obtaining the specific material corresponding to the first geographic location and the panoramic material requested by the terminal device; adding the specific material to the fixed material; and rendering the fixed material with the specific material added, to obtain the first VR data. For example, the VR content the network device provides to the terminal device is a virtual cabin; the virtual cabin is the fixed material of the VR content and does not change. When the first geographic location the terminal device sends to the network device is Beijing, the network device determines that the first virtual material is Tiananmen and adds Tiananmen to the fixed material of the virtual cabin, so that Tiananmen appears outside the cabin window. The network device then renders the indoor scene with Tiananmen added, according to the terminal device's attitude data, to obtain the first VR data, and after the first VR data is sent to the terminal device, the terminal device can display its content.
Specifically, in this embodiment S102 may include: obtaining the first virtual material corresponding to the first geographic location from a storage device, where the storage device stores virtual materials corresponding to N geographic locations, N being a positive integer. The network device provided in this embodiment may preferably consist of two parts: a computing cloud for analyzing and computing data, and a storage cloud dedicated to storing the virtual materials corresponding to different geographic locations, so that the network device can obtain the virtual materials for different geographic locations locally.
Further, if no virtual material corresponding to the first geographic location exists in the storage device, a second virtual material corresponding to a second geographic location is obtained from the storage device, where the range of the second geographic location is larger than the range of the first geographic location. For example, FIG. 4 is a schematic diagram of geographic location ranges in this application. As shown in FIG. 4, when the first geographic location is location A, the network device adaptively renders the virtual material corresponding to location A; if location A has no corresponding virtual material, a threshold range can be set and the virtual material corresponding to the second geographic location A1 or A2 is rendered. The range of the second geographic location is larger than that of the first geographic location, and the second geographic location contains the first geographic location. If no location-based material exists within the threshold range, the regular fixed material is rendered. When the terminal device's first geographic location moves from location A to location B, the virtual material corresponding to location B is adaptively rendered; if location B has no corresponding virtual material, a threshold range can be set and the virtual material corresponding to B1 or B2 is rendered. If no location-based material exists within the threshold range, the regular fixed material is rendered.
For example, the VR content the network device provides to the terminal device is a virtual cabin whose window view is the sea; the virtual cabin is the fixed material of the VR content and does not change. When the terminal device is located in Beijing, the network device determines that the first virtual material is Tiananmen and adds Tiananmen to the fixed material of the virtual cabin, so that Tiananmen appears outside the cabin window. If the network device stores no first virtual material corresponding to Beijing, the geographic selection range is expanded and the Great Wall, corresponding to China, is used as the first virtual material; the network device adds the Great Wall to the fixed material of the cabin, so that the Great Wall appears outside the window. If the network device stores no first virtual material corresponding to any geographic location in China, the first virtual material is taken from the fixed material itself, that is, the view outside the cabin window remains the sea.
S103: The network device sends the first VR data to the terminal device, and the terminal device receives the first VR data sent by the network device. S104: The terminal device displays the content corresponding to the first VR data.
Specifically, in this embodiment the network device and the terminal device are connected through a WiFi or 5G network, so the terminal device can send its motion posture data and the first geographic location to the network device over the WiFi or 5G network. After the network device renders and generates the first VR data according to the terminal device's motion posture and geographic location, it can likewise send the first VR data directly to the terminal device over the WiFi or 5G network. Here the first VR data is the rendered video stream, for example YUV/VGA data. For IP-based transmission of the video stream, the network device encodes, compresses, and packetizes the VR data and delivers it to the terminal device over the WiFi or 5G network; the terminal device decodes and decompresses the received data and then displays the VR content.
Optionally, to implement the above functions of the terminal device, a codec/compression-decompression module may be provided in the terminal device to decode and decompress the received encoded and compressed first VR data.
For example, FIG. 5 is a schematic flowchart of the second embodiment of the VR data processing method of this application. On the basis of the embodiment shown in FIG. 3, the embodiment shown in FIG. 5 sends the VR data from the network device to the terminal device over WiFi or a 5G network after S102 as follows: S301: The network device encodes and compresses the first VR data. S302: The network device sends the encoded and compressed first VR data to the terminal device through wireless communication. S303: The terminal device decodes and decompresses the encoded and compressed first VR data. S304: The terminal device displays the decoded and decompressed first VR data.
Further, FIG. 6 is a schematic flowchart of the third embodiment of the VR data processing method of this application. On the basis of the embodiment shown in FIG. 3, the embodiment shown in FIG. 6 further includes, after S101: S201: Obtain preference information sent by the terminal device, where the preference information indicates whether to render the first virtual material according to a preset category. S202: If the preference information indicates rendering the first virtual material according to the preset category, render the first virtual material according to the preset category to obtain the first VR data.
Specifically, in this embodiment, besides the network device determining the virtual material directly from the terminal device's geographic location, whether to enable location-based rendering of virtual material by preset category may also be determined according to the preferences of the terminal device's user; if not enabled, the regular fixed material is rendered.
Optionally, under the user preference selection mechanism the user may select a preferred scene category, or a specific rendered-object category, from options on the terminal. For example, FIG. 7 is a schematic diagram of the preset categories in the preference information of this application. As shown in FIG. 7, the terminal device can offer the user four location-based preset category options, generate preference information according to the user's selections, and send it to the network device, so that the network device determines whether to enable rendering of virtual material according to a given category.
Taking VR content that is the space of a virtual room as an example: Category 1 may mean the weather differs by geographic location. When enabled, if the actual weather at location A is rain, a rainy sky and ground are adaptively rendered and rain is visible outside the window of the virtual room; at location B, where the actual weather is snow, a snowy sky and ground are adaptively rendered, and the user can walk outside in the virtual room to feel the snowflakes fall. If not enabled, every location shows a fixed sunny scene. Category 2 may mean the pet in the room differs by geographic location: when enabled, a kitten runs around the room at location A, and at location B a puppy barks in the room; a user who dislikes pets can leave the category disabled and no pet appears. Category 3 may refer to a painting in the room: without the location-based adaptive mechanism the painting never changes; once enabled, the painting shows a different picture at different locations. Category 4 may be the scenery outside the room: if enabled, the scenery outside switches with the location, for example to a beach by the sea at location A and to a waterfall in the mountains at location B; if not enabled, only the regular fixed scene is shown.
综上,更为具体地,图8是本申请VR数据处理系统一实施例的结构示意图。图8所示的系统可以用于执行上述各实施例中的VR数据处理方法。其中,上述实施例中的终端设备以头显为例,头显除了包括现有技术中的惯性(Inertial sensor,IMU)传感器和Inside-out摄像头之外,本实施例中的头显还加入了GPS定位模块和编解码/压缩解压模块,头显设备增加的定位模块(如基于RTK算法的GPS等)用于获取广域地理位置坐标,编解码/压缩解压模块用于处理视频流。上述实施例中的网络设备以云端服务器为例,云端服务器内包括计算云和存储云,计算云用于对数据进行分析与计算,存储云提供基于广域地理位置的素材,用于存储不同地理位置对应的虚拟素材。云端计算云对动作数据计算 处理,基于广域地理位置自适应请求虚拟素材并渲染、编码/压缩,头显和计算云实现IP化传输。并且本实施例中的头显与云端服务器可以采用客户/服务器Client-server模式实现多用户共享主机资源。
图9是本申请VR数据处理系统一实施例的结构示意图。图9所示的系统中更为具体地显示了图8的系统中各关键模块及其处理流程。其中,101为头显端,集成了IMU、Inside-out摄像头和定位模块,可以同时获取头显的相对位置和实际地理位置。其中IMU获得头显的3自由度姿态数据;Inside-out摄像头无外部传感器,可在空间自由移动,获得6自由度动作数据;GPS定位模块获得实际地理位置的经纬度坐标数据。102为计算云,完成位置姿态数据解码/解压,坐标计算匹配,采用基于地理位置的自适应机制及用户偏好选择机制,向存储云发送资源请求;同时基于存储云检索到的资源数据进行渲染,对渲染后的原始数据进行编码/压缩。103为存储云,提供基于广域地理位置的应用及素材:包括各平台的虚拟Home以及第三方的应用如全景视频类、互动式体验、交互类游戏等应用,提供固定的渲染素材以及基于地理经纬度坐标的场景构建素材资源,包括模型、材质等。104为将头显上IMU获得的3自由度姿态数据、Inside-out摄像头获得的6自由度动作数据、定位模块获得的经纬度坐标数据,经模数转换、编码/压缩后,通过网络比如5G或WiFi网络上传至计算云。105为计算云基于地理经纬度坐标及动作姿态数据,以及用户偏好选择的指令,向存储云发送资源请求。106为第三方存储云检索素材资源数据,包括模型、材质等。107为计算云对资源数据渲染、编码/压缩的数据后,通过网络比如5G或WiFi网络下发至头显端。
图10是本申请VR数据处理方法实施例四的流程示意图。图10示出了结合上述各实施例中VR数据方法的一种可能的实施例。如图10所示,本实施例包括:201开始,戴上头显,进入虚拟Home或进入第三方应用。202动作交互实时刷新:头部转动,手柄变化或局域走动、动作指令交互,或广域地理位置变化。203将位置信息(经纬度数据、6自由度动作数据、3自由度姿态数据)上传至计算云,完成位置数据解码,坐标计算匹配。204用户偏好选择机制,用户还可以根据终端上的选项选择的用户偏好的场景类别,或者具体渲染物类别。205计算云向存储云发送资源请求,基于地理位置阈值,判断是否有基于地理经纬度的素材。206如果没有满足阈值范围内基于地理经纬度的素材,存储云基于姿态和动作数据检索固定素材。207如果有基于地理经纬度的素材,存储云基于姿态和动作数据检索全部基于地理经纬度的素材或是部分基于地理经纬度部分固定素材;如果有用户偏好的素材信息,则根据该信息进行选择和渲染208判断素材携带的信息,是否需要叠加到真实地理位置坐标上,即素材携带坐标信息与真实地理坐标一致。209坐标转换,使虚拟场景叠加到现实空间的正确位置上。210计算云进行渲染视野范围内的画面,对原始YUV/VGA数据进行编码压缩和IP化后下发到头显端。211头显端收到数据进行解码/解压后,显示图像。
In summary, the VR data processing method, apparatus and storage medium provided by this embodiment realize geographically differentiated virtual content. Through an innovated service flow, a GPS positioning module and a codec/compression-decompression module are added at the HMD; motion data is uploaded to the cloud, computation and rendering are moved from the local device to the cloud, and a location-based adaptive mechanism and a user preference selection mechanism are added; on the content side, the storage cloud adds wide-area location-based applications and materials, enriching the virtual content that VR presents to the user. The specific gains over conventional solutions are as follows. First, the presented content can be differentiated: at different places and in different cities the rendering materials differ, and so does the presented VR content. Second, a cloud-based adaptive rendering mechanism moves complex image processing from the local device to the cloud, enabling real-time cloud rendering of interactive VR content, so that users can consume VR services as streaming media at any time and in any place while multiple users share the hardware resources, reducing user investment.
Meanwhile, unlike today's fixed rendering materials, the cloud of the present application adaptively delivers geographically differentiated rendering combinations to the HMD according to wide-area geographic location data, which is of reference value for designing applications that adopt a cloud rendering mechanism with wide-area location-adaptive virtual content. The present application also proposes a cloud virtualization architecture: a cloud scheduling mechanism that integrates computation, adaptive resource scheduling, adaptive rendering and encoding/compression, enabling multiple users to share host hardware resources and reducing user terminal investment. The present application further has significant commercial value for wide-area location-based application scenarios such as urban planning, architectural design, tourism and education.
In addition, for wide-area location-based application scenarios such as virtual cabins, panoramic video and games, presenting content differentiated by geographic location adds interest. Examples of differentiated-gain scenarios: 1) For a virtual cabin, once the content is differentiated, different virtual scenes can be rendered in different cities; in Shanghai, for instance, Microsoft's Cliff House switches to a cabin beside the Bund, and interior decorations such as a painting on the wall or a pet can likewise switch adaptively by location. 2) For panoramic video applications, the cabin can switch adaptively by location, and the video content presented also varies with the wide-area geographic location. 3) For interactive experiences, such as architectural-design applications, a user who walks to a certain place and puts on the HMD sees the planned design for that location; in tourism or education applications, walking to a historical site and putting on the HMD recreates a historical scene modeled on the actual wide-area location, giving a sense of traveling back in time. 4) For interactive games, different props can be obtained in different regions, and the plot can branch by location, adding interest.
Fig. 11 is a schematic structural diagram of Embodiment 1 of the VR data processing apparatus of the present application. The VR data processing apparatus provided in this embodiment may be the network device of any of the foregoing embodiments. As shown in Fig. 11, the apparatus includes an obtaining module 1101, a determining module 1102 and a sending module 1103. The obtaining module 1101 is configured to obtain a first geographic location of a terminal device; the determining module 1102 is configured to determine first virtual reality (VR) data corresponding to the first geographic location; the sending module 1103 is configured to send the first VR data to the terminal device, where the content corresponding to the first VR data is to be displayed on the terminal device.
The VR data processing apparatus provided in this embodiment is configured to perform the VR data processing method shown in Fig. 3; the implementation and principle are the same and are not repeated here.
Optionally, the determining module 1102 is specifically configured to obtain a first virtual material corresponding to the first geographic location, and to render the first virtual material to obtain the first VR data.
Optionally, the obtaining module 1101 is further configured to obtain preference information sent by the terminal device, the preference information indicating whether the first virtual material is to be rendered according to a preset category.
Optionally, the determining module 1102 is specifically configured to, if the preference information indicates rendering according to the preset category, render the first virtual material according to the preset category to obtain the first VR data.
Optionally, the determining module 1102 is specifically configured to obtain the first virtual material corresponding to the first geographic location from a storage device, where the storage device stores virtual materials corresponding to N geographic locations, N being a positive integer.
Optionally, the determining module 1102 is specifically configured to, if no virtual material corresponding to the first geographic location exists in the storage device, obtain a second virtual material corresponding to a second geographic location from the storage device, where the range of the second geographic location is larger than the range of the first geographic location.
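The optional fallback above — using a coarser second geographic location when the storage device has nothing for the first — can be sketched as a two-level lookup. The `store` mapping and the string keys below are illustrative assumptions, not from the application:

```python
def get_virtual_material(store, first_location, second_location):
    """Prefer the fine-grained first location; if the store has no
    material for it, fall back to the coarser second location that
    contains it (e.g. street -> city)."""
    material = store.get(first_location)
    if material is None:
        material = store.get(second_location)
    return material
```

Returning `None` when even the coarse lookup misses would, in the flow of Fig. 10, correspond to falling all the way back to the fixed materials.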
Optionally, the first virtual material is a panoramic material, and the determining module 1102 is specifically configured to render the panoramic material to obtain the first VR data.
Optionally, the first virtual material is a specific material, and the determining module 1102 is specifically configured to obtain the specific material corresponding to the first geographic location and the panoramic material requested by the terminal device, add the specific material to the panoramic material, and render the panoramic material with the specific material added to obtain the first VR data.
Optionally, the sending module 1103 is specifically configured to encode and compress the first VR data, and to send the encoded and compressed first VR data to the terminal device by wireless communication.
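To illustrate the encode/compress-before-send step and the terminal's matching decode, here is a toy run-length codec. It merely stands in for the real video codec (e.g. H.264/H.265) a deployed system would apply to the rendered frames; nothing about it is specified by the application:

```python
def rle_encode(data: bytes) -> bytes:
    """Network-device side: toy run-length compression of a raw frame
    before wireless transmission. Output is (count, byte) pairs."""
    if not data:
        return b""
    out = bytearray()
    count, prev = 1, data[0]
    for b in data[1:]:
        if b == prev and count < 255:
            count += 1
        else:
            out += bytes([count, prev])
            count, prev = 1, b
    out += bytes([count, prev])
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    """Terminal-device side: invert the encoding before display."""
    out = bytearray()
    for i in range(0, len(data), 2):
        out += bytes([data[i + 1]]) * data[i]
    return bytes(out)
```

Run-length coding only wins on frames with long uniform runs; it is chosen here purely because the round trip is easy to verify.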
The VR data processing apparatus provided in this embodiment is configured to perform the VR data processing methods shown in the foregoing embodiments; the implementation and principle are the same and are not repeated here.
Fig. 12 is a schematic structural diagram of Embodiment 2 of the VR data processing apparatus of the present application. The VR data processing apparatus provided in this embodiment may be the terminal device of any of the foregoing embodiments. As shown in Fig. 12, the apparatus includes an obtaining module 1201, a sending module 1202, a receiving module 1203 and a processing module 1204. The obtaining module 1201 is configured to obtain a first geographic location of the terminal device; the sending module 1202 is configured to send the first geographic location to a network device, the first geographic location being used by the network device to determine first virtual reality (VR) data corresponding to the first geographic location; the receiving module 1203 is configured to receive the first VR data sent by the network device; the processing module 1204 is configured to display the content corresponding to the first VR data.
The VR data processing apparatus provided in this embodiment is configured to perform the VR data processing method shown in Fig. 3; the implementation and principle are the same and are not repeated here.
Optionally, the sending module 1202 is further configured to send preference information to the network device, the preference information indicating to the network device, after it obtains the first virtual material corresponding to the geographic location, whether to render the first virtual material according to a preset category.
Optionally, the receiving module 1203 is specifically configured to receive, by wireless communication, the encoded and compressed first VR data sent by the network device.
Optionally, the processing module 1204 is specifically configured to decode and decompress the encoded and compressed first VR data, and to display the decoded and decompressed first VR data.
The VR data processing apparatus provided in this embodiment is configured to perform the VR data processing methods shown in the foregoing embodiments; the implementation and principle are the same and are not repeated here.
The present application further provides a VR data processing system, comprising N network devices as described in any of the foregoing Fig. 11 embodiments and M terminal devices as described in any of the foregoing Fig. 12 embodiments, M and N being positive integers.
The present application further provides a VR data processing apparatus, comprising a processor and a memory, the memory being configured to store a program and the processor being configured to invoke the program stored in the memory to perform the VR data processing method of any of the foregoing embodiments.
The present application further provides a computer-readable storage medium storing program code which, when executed, performs the VR data processing method of any of the foregoing embodiments.
The present application further provides a computer program product; when the program code contained in the computer program product is executed by a processor, the VR data processing method of any of the foregoing embodiments is implemented.
A person of ordinary skill in the art will understand that all or part of the steps of the foregoing method embodiments may be completed by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments. The foregoing storage medium includes any medium capable of storing program code, such as a ROM, a RAM, a magnetic disk or an optical disc.
Finally, it should be noted that the foregoing embodiments merely illustrate, rather than limit, the technical solutions of the present application. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, without such modifications or replacements causing the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (14)

  1. A virtual reality (VR) data processing method, comprising:
    obtaining a first geographic location of a terminal device;
    determining first virtual reality (VR) data corresponding to the first geographic location; and
    sending the first VR data to the terminal device, wherein content corresponding to the first VR data is to be displayed on the terminal device.
  2. The method according to claim 1, wherein the determining first VR data corresponding to the first geographic location comprises:
    obtaining a first virtual material corresponding to the first geographic location; and
    rendering the first virtual material to obtain the first VR data.
  3. The method according to claim 2, further comprising:
    obtaining preference information sent by the terminal device, the preference information indicating whether the first virtual material is to be rendered according to a preset category;
    wherein the rendering the first virtual material to obtain the first VR data comprises:
    if the preference information indicates rendering according to the preset category, rendering the first virtual material according to the preset category to obtain the first VR data.
  4. The method according to claim 2, wherein the obtaining a first virtual material corresponding to the first geographic location comprises:
    obtaining the first virtual material corresponding to the first geographic location from a storage device, wherein the storage device stores virtual materials corresponding to N geographic locations, N being a positive integer.
  5. The method according to claim 4, wherein the obtaining the first virtual material corresponding to the first geographic location from a storage device comprises:
    if no virtual material corresponding to the first geographic location exists in the storage device, obtaining a second virtual material corresponding to a second geographic location from the storage device, wherein the range of the second geographic location is larger than the range of the first geographic location.
  6. The method according to claim 2, wherein the first virtual material is a panoramic material;
    wherein the rendering the first virtual material to obtain the first VR data comprises:
    rendering the panoramic material to obtain the first VR data.
  7. The method according to claim 2, wherein the first virtual material is a specific material;
    wherein the rendering the first virtual material to obtain the first VR data comprises:
    obtaining the specific material corresponding to the first geographic location and a panoramic material requested by the terminal device;
    adding the specific material to the panoramic material; and
    rendering the panoramic material with the specific material added, to obtain the first VR data.
  8. The method according to any one of claims 1-7, wherein the sending the first VR data to the terminal device comprises:
    encoding and compressing the first VR data; and
    sending the encoded and compressed first VR data to the terminal device by wireless communication.
  9. A virtual reality (VR) data processing method, comprising:
    obtaining a first geographic location of a terminal device;
    sending the first geographic location to a network device, the first geographic location being used by the network device to determine first virtual reality (VR) data corresponding to the first geographic location;
    receiving the first VR data sent by the network device; and
    displaying content corresponding to the first VR data.
  10. The method according to claim 9, further comprising:
    sending preference information to the network device, the preference information indicating to the network device, after it obtains a first virtual material corresponding to the geographic location, whether to render the first virtual material according to a preset category.
  11. The method according to claim 9 or 10, wherein the receiving the first VR data sent by the network device comprises:
    receiving, by wireless communication, the encoded and compressed first VR data sent by the network device.
  12. The method according to claim 11, wherein the displaying content corresponding to the first VR data comprises:
    decoding and decompressing the encoded and compressed first VR data; and
    displaying the decoded and decompressed first VR data.
  13. A virtual reality (VR) data processing apparatus, comprising:
    a processor and a memory;
    the memory being configured to store a program; and
    the processor being configured to invoke the program stored in the memory to perform the VR data processing method according to any one of claims 1-12.
  14. A computer-readable storage medium storing program code which, when executed, performs the VR data processing method according to any one of claims 1-12.
PCT/CN2018/091945 2018-06-20 2018-06-20 Virtual reality (VR) data processing method, apparatus and storage medium WO2019241925A1 (zh)


Publications (1)

Publication Number Publication Date
WO2019241925A1 true WO2019241925A1 (zh) 2019-12-26


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113542849A (zh) * 2021-07-06 2021-10-22 腾讯科技(深圳)有限公司 Video data processing method and apparatus, electronic device, and storage medium
CN113721874A (zh) * 2021-07-29 2021-11-30 阿里巴巴(中国)有限公司 Virtual reality picture display method and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104010206A (zh) * 2014-06-17 2014-08-27 合一网络技术(北京)有限公司 Geographic-location-based virtual reality video playing method and system
CN105450736A (zh) * 2015-11-12 2016-03-30 小米科技有限责任公司 Method and apparatus for connecting with virtual reality
US20170046878A1 (en) * 2015-08-13 2017-02-16 Revistor LLC Augmented reality mobile application
CN106527713A (zh) * 2016-11-07 2017-03-22 金陵科技学院 Three-dimensional data rendering system for VR and method thereof
CN106954092A (zh) * 2017-03-09 2017-07-14 华东师范大学 Virtual reality bicycle implementation method based on cloud computing
CN107683449A (zh) * 2015-04-10 2018-02-09 索尼互动娱乐股份有限公司 Controlling personal space content presented via a head-mounted display




Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18923598

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18923598

Country of ref document: EP

Kind code of ref document: A1