CN115543084A - Virtual reality system with distributed rendering function and distributed rendering method - Google Patents


Info

Publication number
CN115543084A
Authority
CN
China
Prior art keywords
tracking information
virtual reality
rendering
information
host
Prior art date
Legal status
Pending
Application number
CN202211212004.7A
Other languages
Chinese (zh)
Inventor
翁志彬
杨九丹
周克
Current Assignee
Pimax Technology Shanghai Co ltd
Original Assignee
Pimax Technology Shanghai Co ltd
Priority date
Filing date
Publication date
Application filed by Pimax Technology Shanghai Co ltd filed Critical Pimax Technology Shanghai Co ltd
Priority to CN202211212004.7A priority Critical patent/CN115543084A/en
Publication of CN115543084A publication Critical patent/CN115543084A/en
Pending legal-status Critical Current

Classifications

    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/017 — Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06T 15/005 — General purpose rendering architectures
    • G06T 19/00 — Manipulating 3D models or images for computer graphics

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a virtual reality system with a distributed rendering function and a distributed rendering method, which solve the technical problem in the prior art that the large data processing load on a VR host causes the VR display picture to stutter and degrades the user experience. In the virtual reality system and distributed rendering method provided by the invention, a server is communicatively connected to the virtual reality host of at least one virtual reality device. The virtual reality host sends a tracking information group to the server, the server renders the data in the tracking information group to generate picture information, and the virtual reality host controls the display screen to display that picture information. In other words, distributed rendering offloads part of the data that the virtual reality host would otherwise process to the server, which reduces the data processing pressure on the virtual reality host, lowers the probability of picture stutter when the rendering function is executed, and improves the user's experience.

Description

Virtual reality system with distributed rendering function and distributed rendering method
Technical Field
The invention relates to the technical field of virtual reality, and in particular to a virtual reality system with a distributed rendering function and a distributed rendering method.
Background
In recent years, with advances in science and technology, virtual reality (VR) technology has been widely applied in fields such as education, industry, entertainment, and medical care. When a VR device presents a three-dimensional scene or object to a user, the hardware layer renders the three-dimensional scene or object into a two-dimensional image and uploads it to the software layer; after the software layer distorts the two-dimensional image, the distorted two-dimensional image is projected onto the screen.
In general, the host of the VR device needs to render three-dimensional scene information from a mobile phone and information of the objects in the three-dimensional scene. The large data amount makes the rendering workload heavy; at the same time, the host of the VR device has so much data to process that it cannot process some of it in time, so the VR display picture stutters and the user's experience is degraded.
Disclosure of Invention
In view of this, the present invention provides a virtual reality system with a distributed rendering function and a distributed rendering method, which solve the technical problem in the prior art that the host of a VR device, due to its large data processing load, cannot process some data in time, so that the VR display picture stutters and the user experience is degraded.
As a first aspect of the present application, the present invention provides a virtual reality system having a distributed rendering function, including: at least one virtual reality device and a server. The virtual reality device includes: a head-mounted device having a display screen; an eyeball tracker for tracking eyeball tracking information of a user; a detection device for tracking user action tracking information of the user; and a virtual reality host communicatively connected to the display screen, the eyeball tracker, and the detection device, respectively, and used to acquire a tracking information group of the user, where the tracking information group includes a host number, the eyeball tracking information, and the user action tracking information. The server is communicatively connected to at least one virtual reality host; it receives the eyeball tracking information and the user action tracking information in the tracking information group transmitted by the virtual reality host, renders them to generate picture information, and transmits the picture information to the virtual reality host corresponding to the host number according to the host number in the tracking information group. The virtual reality host is further used to control the display screen to display the picture information.
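The tracking information group described above bundles a host number with the per-user tracking streams. The patent does not specify a data format, so the following Python sketch is purely illustrative; all field names are assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical sketch of the "tracking information group": a host number
# plus the tracking streams the server will render. Optional streams are
# None when the corresponding detection device is absent.
@dataclass
class TrackingInfoGroup:
    host_number: int                                      # identifies the originating VR host
    eye_tracking: Tuple[float, float]                     # e.g. normalized gaze point
    gesture_tracking: Optional[Tuple[float, ...]] = None  # e.g. hand gesture data
    head_tracking: Optional[Tuple[float, ...]] = None     # e.g. head pose
    handle_tracking: Optional[Tuple[float, ...]] = None   # e.g. 6DOF controller pose

group = TrackingInfoGroup(host_number=7,
                          eye_tracking=(0.48, 0.52),
                          handle_tracking=(0.1, 1.2, -0.4, 0.0, 1.57, 0.0))
```

A host would serialize such a group and send it to the server, which uses `host_number` to route the rendered result back.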
In an embodiment of the present invention, the server includes: the rendering unit is in communication connection with the virtual reality host, and is used for receiving the eyeball tracking information and the user action tracking information in the tracking information group transmitted by the virtual reality host, rendering the eyeball tracking information and the user action tracking information and generating picture information; and the data transmission unit is in communication connection with the rendering unit, receives the picture information transmitted by the rendering unit and the host number corresponding to the picture information, and sends the picture information to the virtual reality host corresponding to the host number according to the host number.
In an embodiment of the present invention, the server further includes: the calibration unit is in communication connection with the rendering unit and is used for receiving the picture information transmitted by the rendering unit and the host number corresponding to the picture information, calibrating the picture information and generating calibration picture information and a host number corresponding to the calibration picture information; the virtual reality host is used for controlling the display screen to display the calibration picture information according to the calibration picture information.
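The patent does not specify what the calibration unit's calibration consists of. One common operation at this stage in VR pipelines is lens pre-distortion of the rendered picture; the sketch below assumes a simple radial (barrel) model with a made-up coefficient, purely as an illustration.

```python
def calibrate_point(x: float, y: float, k1: float = 0.22) -> tuple:
    """Pre-distort one normalized image point centered at (0, 0).

    k1 is an assumed radial distortion coefficient; the patent gives no
    calibration model, so this function is hypothetical.
    """
    r2 = x * x + y * y          # squared distance from the image center
    scale = 1.0 + k1 * r2       # radial scaling grows away from the center
    return (x * scale, y * scale)

# The center of the picture is unchanged; points away from it move outward.
center = calibrate_point(0.0, 0.0)
edge = calibrate_point(0.8, 0.0)
```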
In an embodiment of the present invention, the server further includes: the storage unit is respectively in communication connection with the rendering unit and the virtual reality host, and is used for receiving the tracking information group transmitted by the virtual reality host, storing the tracking information group and sending the tracking information group to the rendering unit.
In an embodiment of the present invention, the user motion tracking information includes: gesture tracking information of the user; and/or head tracking information of the user; and/or tracking information of the user's control handle.
In an embodiment of the present invention, the storage unit includes: the first storage module is in communication connection with the virtual reality host, and is used for receiving eyeball tracking information in the tracking information group transmitted by the virtual reality host and storing the eyeball tracking information; the second storage module is in communication connection with the virtual reality host and is used for receiving the gesture tracking information of the user in the tracking information group transmitted by the virtual reality host and storing the gesture tracking information of the user; the third storage module is in communication connection with the virtual reality host, and is used for receiving the head tracking information of the user in the tracking information group transmitted by the virtual reality host and storing the head tracking information of the user; and/or a fourth storage module, the fourth storage module is in communication connection with the virtual reality host, and the fourth storage module is used for receiving the tracking information of the control handle in the tracking information group transmitted by the virtual reality host and storing the tracking information of the control handle.
In an embodiment of the present invention, the rendering unit includes: a first rendering module communicatively connected to the first storage module, which receives the eyeball tracking information transmitted by the first storage module and renders it to generate first rendering information; and/or a second rendering module communicatively connected to the second storage module, which receives the user's gesture tracking information transmitted by the second storage module and renders it to generate second rendering information; and/or a third rendering module communicatively connected to the third storage module, which receives the user's head tracking information transmitted by the third storage module and renders it to generate third rendering information; and/or a fourth rendering module communicatively connected to the fourth storage module, which receives the tracking information of the control handle transmitted by the fourth storage module and renders it to generate fourth rendering information; and an integration module communicatively connected to the calibration unit and to the first rendering module, and/or the second rendering module, and/or the third rendering module, and/or the fourth rendering module, which integrates the first rendering information, and/or the second rendering information, and/or the third rendering information, and/or the fourth rendering information to generate the picture information.
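The modular arrangement above — one rendering module per tracking stream, plus an integration module that merges their outputs — can be sketched as a dispatch table. The module functions and layer names below are illustrative assumptions, not the patent's implementation:

```python
# One rendering module per tracking stream; each produces a partial result.
def render_eye(info):      return {"layer": "gaze", "data": info}
def render_gesture(info):  return {"layer": "hands", "data": info}
def render_head(info):     return {"layer": "view", "data": info}
def render_handle(info):   return {"layer": "controller", "data": info}

# First/second/third/fourth rendering modules keyed by tracking stream.
RENDER_MODULES = {
    "eye": render_eye,
    "gesture": render_gesture,
    "head": render_head,
    "handle": render_handle,
}

def integrate(tracking_group: dict) -> list:
    # Each tracking stream present in the group is rendered by its own
    # module; the integration step merges the partial results into the
    # final picture information (here, simply a list of layers).
    return [RENDER_MODULES[k](v) for k, v in tracking_group.items()
            if k in RENDER_MODULES]

frame = integrate({"eye": (0.48, 0.52), "handle": (1, 2, 3, 0, 0, 0)})
```

Streams that are absent from the group are simply skipped, mirroring the "and/or" structure of the modules in this embodiment.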
In an embodiment of the present invention, the server is a computer device; or the server is a cloud server.
As a second aspect of the present invention, the present invention provides a distributed rendering method for a virtual reality system, including: the virtual reality host acquires a tracking information group of a user, wherein the tracking information group comprises a host number, eyeball tracking information and user action tracking information; the server renders the eyeball tracking information and the user action tracking information in a tracking information group transmitted by the virtual reality host to generate picture information, and transmits the picture information to the virtual reality host corresponding to the host number according to the host number in the tracking information group; and the virtual reality host controls the display screen to display the picture information according to the picture information.
In an embodiment of the present invention, the server includes a rendering unit and a data transmission unit. The step in which the server renders the eyeball tracking information and the user action tracking information in the tracking information group transmitted by the virtual reality host, generates picture information, and transmits the picture information to the virtual reality host corresponding to the host number according to the host number in the tracking information group includes the following steps: the rendering unit renders the eyeball tracking information and the user action tracking information in the tracking information group transmitted by the virtual reality host to generate the picture information; and the data transmission unit transmits the picture information to the virtual reality host corresponding to the host number according to the host number in the tracking information group.
In an embodiment of the present invention, the server further includes a calibration unit; wherein the distributed rendering method further comprises: the calibration unit receives the picture information sent by the rendering unit, calibrates the picture information and generates calibration picture information; wherein, the data transmission unit transmits the image information to the virtual reality host corresponding to the host number according to the host number in the tracking information group, and the method comprises the following steps: the data transmission unit transmits the calibration picture information to a virtual reality host corresponding to the host number according to the host number in the tracking information group; the virtual reality host controls the display screen to display the picture information according to the picture information, and the method comprises the following steps: and the virtual reality host controls the display screen to display the calibration picture information according to the calibration picture information.
In an embodiment of the present invention, the server further includes a storage unit, and the distributed rendering method further includes: and the storage unit receives the tracking information group transmitted by the virtual reality host and stores the tracking information group.
In an embodiment of the present invention, the user motion tracking information includes: gesture tracking information of the user; and/or head tracking information of the user; and/or tracking information of the user's control handle.
In an embodiment of the present invention, the storage unit includes a first storage module, and/or a second storage module, and/or a third storage module, and/or a fourth storage module; the storage unit receives the tracking information group transmitted by the virtual reality host, and stores the tracking information group, and the method comprises the following steps: the first storage module receives eyeball tracking information in the tracking information group transmitted by the virtual reality host and stores the eyeball tracking information; the second storage module receives the gesture tracking information of the user in the tracking information group transmitted by the virtual reality host and stores the gesture tracking information of the user; and/or the third storage module receives the head tracking information of the user in the tracking information group transmitted by the virtual reality host, and stores the head tracking information of the user; and/or the fourth storage module receives the tracking information of the control handle in the tracking information group transmitted by the virtual reality host, and stores the tracking information of the control handle.
In an embodiment of the present invention, the rendering unit includes a first rendering module, and/or a second rendering module, and/or a third rendering module, and/or a fourth rendering module, and an integration module; the rendering unit renders the eyeball tracking information and the user action tracking information in the tracking information group transmitted by the virtual reality host to generate picture information, and the rendering unit comprises: the first rendering module receives the eyeball tracking information in the tracking information group transmitted by the first storage module, renders the eyeball tracking information and generates first rendering information; the second rendering module receives the gesture tracking information in the tracking information group transmitted by the second storage module, renders the gesture tracking information and generates second rendering information; and/or the third rendering module receives the head tracking information in the tracking information group transmitted by the third storage module, renders the head tracking information and generates third rendering information; the fourth rendering module receives the tracking information of the control handle of the tracking information group transmitted by the fourth storage module, renders the tracking information of the control handle and generates fourth rendering information; and the integration module integrates the first rendering information, the second rendering information and/or the third rendering information and/or the fourth rendering information to generate picture information.
According to the virtual reality system and the distributed rendering method, a server is communicatively connected to the virtual reality host of at least one virtual reality device; the virtual reality host sends the tracking information group to the server, the server renders the data in the tracking information group to generate picture information, and the virtual reality host controls the display screen to display the picture information. Throughout the rendering process, the server completes the rendering in place of the virtual reality host. That is, distributed rendering offloads part of the data that the virtual reality host would otherwise process to the server, which reduces the data processing pressure on the virtual reality host, lowers the probability of picture stutter when the rendering function is executed, and improves the user's experience.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in more detail embodiments of the present invention with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a schematic diagram illustrating an operation of a virtual reality system with a distributed rendering function according to an embodiment of the present invention;
fig. 2 is a schematic flowchart illustrating a distributed rendering method performed by the virtual reality system with distributed rendering function shown in fig. 1;
fig. 3 is a schematic diagram illustrating an operation of a virtual reality system with a distributed rendering function according to another embodiment of the present invention;
FIG. 4 is a flowchart illustrating a distributed rendering method performed by the virtual reality system with distributed rendering shown in FIG. 3;
fig. 5 is a schematic diagram illustrating an operation of a virtual reality system with a distributed rendering function according to another embodiment of the present invention;
FIG. 6 is a flowchart illustrating a distributed rendering method performed by the virtual reality system with distributed rendering shown in FIG. 5;
fig. 7 is a schematic diagram illustrating an operation of a virtual reality system with a distributed rendering function according to another embodiment of the present invention;
FIG. 8 is a flowchart illustrating a distributed rendering method performed by the virtual reality system with distributed rendering shown in FIG. 7;
fig. 9 is a schematic diagram illustrating an operation of a virtual reality system with a distributed rendering function according to another embodiment of the present invention;
fig. 10 is a flowchart illustrating a distributed rendering method performed by the virtual reality system with distributed rendering function shown in fig. 9;
fig. 11 is a schematic diagram illustrating an operation of an electronic device according to an embodiment of the present invention.
Detailed Description
In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise. All directional indicators in the embodiments of the present invention (such as upper, lower, left, right, front, rear, top, bottom, etc.) are only used to explain the relative positional relationships, movements, and so on between the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indicator changes accordingly. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Furthermore, reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides a virtual reality system with a distributed rendering function. Fig. 1 is a working schematic diagram of the virtual reality system with the distributed rendering function according to an embodiment of the invention, and fig. 2 is a flow diagram of the distributed rendering method executed by the virtual reality system shown in fig. 1. As shown in fig. 1, the virtual reality system with the distributed rendering function includes: at least one virtual reality device 100 and a server 200. The virtual reality device 100 includes: a head-mounted device 101 having a display screen 1011; an eyeball tracker 102 for tracking eyeball tracking information of a user; a detection device 103 for tracking user action tracking information of the user; and a virtual reality host 104 communicatively connected to the display screen 1011, the eyeball tracker 102, and the detection device 103, respectively. The server 200 is communicatively connected to at least one virtual reality host 104.
Specifically, the virtual reality device 100 may be a split-type device, that is, the virtual reality host 104 and the head-mounted device 101 are two independent devices with no mechanical connection between them; the head-mounted device 101 is fitted with the display screen 1011, and there is likewise no mechanical connection between the display screen 1011 and the virtual reality host 104.
The virtual reality device 100 may also be an all-in-one machine, that is, the virtual reality host 104 in the virtual reality device 100 is a mobile terminal, and the mobile terminal has a display screen 1011 and is installed on the head-mounted device 101.
Specifically, the server may be a local computer device, such as a PC, or it may be a cloud server.
Specifically, the detection device 103 may be any device that detects tracking information of the user or of the virtual reality device; this tracking information makes it possible to follow the user and thereby control the content shown on the display screen 1011 in real time. The detection device may detect spatial positioning information of a control handle while the user operates it. For example, the detection device may be an inertial sensor mounted on the control handle, which can detect 3DOF data of the control handle. As another example, the detection device may include both an inertial sensor mounted on the control handle (to detect 3DOF data of the control handle) and an image capturing device (to capture a spatial image of the control handle); the virtual reality host 104 can then calculate 6DOF data of the control handle from the 3DOF data detected by the inertial sensor and the spatial image captured by the image capturing device, and this 6DOF data is the tracking information of the control handle.
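The patent does not give the math behind the 6DOF calculation; as a rough, assumed illustration of its form, the 3DOF orientation from the inertial sensor can be combined with a camera-derived position estimate:

```python
def fuse_6dof(imu_orientation, camera_position):
    """Combine 3DOF orientation with a position estimate into a 6DOF pose.

    imu_orientation: (roll, pitch, yaw) from the handle's inertial sensor.
    camera_position: (x, y, z) estimated from the captured spatial image.
    In a real system this fusion would involve filtering (e.g. a Kalman
    filter); here the two are simply concatenated for illustration.
    """
    return (*camera_position, *imu_orientation)

pose = fuse_6dof(imu_orientation=(0.0, 1.5708, 0.0),
                 camera_position=(0.1, 1.2, -0.4))
```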
In the process of executing distributed rendering based on the virtual reality system with distributed rendering function shown in fig. 1, as shown in fig. 2, the method of distributed rendering includes the following steps:
step S10: the virtual reality host 104 acquires a tracking information group of the user, wherein the tracking information group comprises a host number, eyeball tracking information and user action tracking information;
during the actual operation of the virtual reality device, the eyeball tracker 102 tracks eyeball tracking information of the user, and the detection device 103 detects motion tracking information of the user when the user experiences the virtual reality device, for example, the detection device 103 may be an inertial sensor mounted on a control handle, and when the user operates the control handle, the inertial sensor may detect 3DOF data of the control handle. When the eye tracker 102 detects the eye tracking information of the user, the eye tracking information is transmitted to the virtual reality host 104 of the virtual reality device. Similarly, the detection device 103 detects the user motion tracking information of the user and transmits the user tracking information to the virtual reality host 104.
After receiving the eyeball tracking information detected by the eyeball tracker 102 and the user action tracking information detected by the detection device 103, the virtual reality host 104 packages the eyeball tracking information, the host number of the virtual reality host 104, and the user action tracking information into a tracking information group. The server may serve one or more virtual reality hosts; when it serves several, the host number in the tracking information group lets the server identify which virtual reality host a given tracking information group came from, so that after rendering the information in the group, the server sends the picture information back to the virtual reality host 104 corresponding to that host number.
Step S11: the server 200 renders eyeball tracking information and user action tracking information in the tracking information group transmitted by the virtual reality host 104, generates picture information, and transmits the picture information to the virtual reality host corresponding to the host number according to the host number in the tracking information group;
When the server 200 receives the tracking information group sent by the virtual reality host 104, it extracts the eyeball tracking information and the user action tracking information from the group, renders them to generate the picture information, and transmits the picture information to the virtual reality host 104 corresponding to the host number according to the host number in the tracking information group.
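Step S11's extract–render–route flow can be sketched as follows. The registry, field names, and the stand-in "rendering" are illustrative assumptions, not the patent's implementation:

```python
received = {}

def make_host(host_number):
    # Assumed stand-in for a network connection back to one VR host.
    def deliver(picture):
        received[host_number] = picture
    return deliver

# host_number -> delivery channel; the server may serve several hosts.
hosts = {7: make_host(7), 8: make_host(8)}

def handle_tracking_group(group):
    # Extract the tracking streams from the group, "render" them into
    # picture information, then route the result back to the host whose
    # number is carried in the group.
    eye = group["eye_tracking"]
    motion = group["motion_tracking"]
    picture = {"frame": (eye, motion)}       # placeholder for real rendering
    hosts[group["host_number"]](picture)

handle_tracking_group({"host_number": 7,
                       "eye_tracking": (0.5, 0.5),
                       "motion_tracking": (0.1, 0.2, 0.3)})
```

Only the host whose number matches the group receives the picture information; the other registered hosts are untouched.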
Step S12: the virtual reality host 104 controls the display screen to display the picture information according to the picture information.
After receiving the picture information sent by the server, the virtual reality host 104 controls the display screen to display the picture information.
According to the virtual reality system and the distributed rendering method, a server is communicatively connected to the virtual reality host of at least one virtual reality device; the virtual reality host sends the tracking information group to the server, the server renders the data in the tracking information group to generate picture information, and the virtual reality host controls the display screen to display the picture information. Throughout the rendering process, the server completes the rendering in place of the virtual reality host. That is, distributed rendering offloads part of the data that the virtual reality host would otherwise process to the server, which reduces the data processing pressure on the virtual reality host, lowers the probability of picture stutter when the rendering function is executed, and improves the user's experience.
In an embodiment of the present invention, as shown in fig. 3, the server 200 specifically includes: a rendering unit 201 and a data transmission unit 202, wherein the rendering unit 201 is communicatively connected to the virtual reality host 104 and the data transmission unit 202, and the virtual reality host 104 is communicatively connected to the data transmission unit 202. In this case, as shown in fig. 4, step S11 (the server 200 renders the eyeball tracking information and the user motion tracking information in the tracking information group transmitted by the virtual reality host 104, generates picture information, and transmits the picture information to the virtual reality host corresponding to the host number according to the host number in the tracking information group) specifically includes the following steps:
step S110: the rendering unit 201 renders eyeball tracking information and user motion tracking information in the tracking information group transmitted by the virtual reality host 104, generates picture information, and transmits the picture information to the data transmission unit 202;
specifically, the rendering unit 201 may be directly communicatively connected to the virtual reality host 104, in which case the virtual reality host 104 sends the tracking information group directly to the rendering unit 201.
Alternatively, the data transmission unit 202 may be directly communicatively connected to the virtual reality host 104; the data transmission unit 202 then receives the tracking information group sent by the virtual reality host 104 and forwards it to the rendering unit 201.
After receiving the tracking information group transmitted by the virtual reality host 104 or the data transmission unit 202, the rendering unit 201 renders eyeball tracking information and user motion tracking information in the tracking information group to generate screen information.
Step S111: the data transmission unit 202 transmits the frame information to the virtual reality host 104 corresponding to the host number according to the host number in the tracking information group.
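The two connection options above (host connected directly to the rendering unit, or host connected to the data transmission unit which forwards to the rendering unit) can be sketched as follows; the class and method names are assumptions for illustration only:

```python
class DataTransmissionUnit:
    """Routes picture information to the host named in the group (step S111)."""
    def __init__(self):
        self.delivered = {}        # host_number -> list of delivered frames
        self.rendering_unit = None
    def forward(self, group):
        # Second option: the host sends the group here first; pass it on.
        self.rendering_unit.handle(group)
    def send(self, host_number, frame):
        self.delivered.setdefault(host_number, []).append(frame)

class RenderingUnit:
    """Renders a tracking group into picture information (step S110)."""
    def __init__(self, transmitter):
        self.transmitter = transmitter
    def handle(self, group):
        frame = ("frame", group["eye"], group["motion"])  # stand-in for rendering
        self.transmitter.send(group["host_number"], frame)
```

Either entry point — `rendering_unit.handle(group)` for the direct connection, or `transmitter.forward(group)` for the path through the data transmission unit — ends with the frame queued under the originating host number.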
In an embodiment of the present invention, as shown in fig. 5, the server 200 further includes: a calibration unit 203, wherein the calibration unit 203 is communicatively connected to the rendering unit 201 and the data transmission unit 202. At this time, as shown in fig. 6, between step S11 and step S12, the distributed rendering method further includes the steps of:
step S112: the calibration unit 203 receives the picture information sent by the rendering unit 201, calibrates the picture information, and generates calibration picture information; that is, the calibration unit 203 calibrates the picture information already rendered by the rendering unit 201 and generates the calibration picture information, thereby improving the accuracy of the picture information.
At this time, step S111 (the data transmission unit 202 transmits the frame information to the virtual reality host 104 corresponding to the host number according to the host number in the tracking information group) specifically includes:
and the data transmission unit transmits the calibration picture information to the virtual reality host corresponding to the host number according to the host number in the tracking information group.
Meanwhile, step S12 (the virtual reality host 104 controls the display screen to display the screen information according to the screen information) includes: and the virtual reality host controls the display screen to display the calibration picture information according to the calibration picture information.
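With the calibration unit in place, the server-side pipeline becomes render, then calibrate, then transmit. A minimal sketch of that ordering (the function names are illustrative; the patent does not define the calibration operation itself):

```python
def render_with_calibration(group, render, calibrate, send):
    """Steps S110 -> S112 -> S111: the calibration unit sits between the
    rendering unit and the data transmission unit, so the host receives
    calibration picture information rather than the raw rendered frame."""
    frame = render(group["eye"], group["motion"])   # step S110
    calibrated = calibrate(frame)                   # step S112
    send(group["host_number"], calibrated)          # step S111
```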
In an embodiment of the present invention, as shown in fig. 7, the server 200 further includes: the storage unit 205, the storage unit 205 is respectively connected to the rendering unit 201 and the virtual reality host 104 in a communication manner. As shown in fig. 8, the distributed rendering method further includes the following steps:
step S13: the storage unit 205 receives the tracking information group transmitted by the virtual reality host 104, stores the tracking information group, and sends the tracking information group to the rendering unit 201.
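Step S13 is a store-and-forward stage; a minimal sketch follows, where the class name and the in-memory list are illustrative stand-ins for real persistence:

```python
class StorageUnit:
    """Step S13: persist each tracking information group, then forward it
    to the rendering unit for step S11."""
    def __init__(self, rendering_unit):
        self.stored = []                  # stand-in for durable storage
        self.rendering_unit = rendering_unit
    def receive(self, group):
        self.stored.append(group)         # store the tracking information group
        self.rendering_unit(group)        # send it on to the rendering unit
```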
In an embodiment of the present invention, the user motion tracking information may include one or more of the following information:
(1) Gesture tracking information of the user;
specifically, the gesture tracking information of the user means that, when the user uses the virtual reality device and virtual interaction is realized by tracking gestures, the motion and shape of the hand constitute the gesture tracking information. The camera device can capture the motion and shape of the hand to form a gesture picture and transmit the gesture picture to the virtual reality host; the virtual reality host analyzes the gesture picture to form the gesture tracking information and then transmits it to the server, and the server renders the gesture tracking information.
(2) Head tracking information of the user;
specifically, the head tracking information of the user refers to that when the user uses the virtual reality device and realizes virtual interaction by tracking the motion of the head, the motion of the head is the head tracking information.
When the user wears the head-mounted device, the camera device can photograph the user's head to form a spatial image of where the head is located, while an inertial sensor mounted on the head-mounted device measures the 3DOF data of the head. The inertial sensor transmits the head's 3DOF data, and the camera device transmits the spatial image, to the virtual reality host. The virtual reality host computes on the 3DOF data and the spatial image to generate the user's head tracking information, and then transmits the head tracking information to the server.
In addition, when the user does not wear the head-mounted device, the camera device captures a picture of the user's head motion and transmits it to the virtual reality host; the virtual reality host recognizes the head motion picture to generate the head tracking information and transmits the head tracking information to the server.
(3) Tracking information of a user's control handle;
specifically, the tracking information of the control handle of the user refers to that when the user uses the virtual reality device and realizes virtual interaction by tracking the movement of the control handle, the spatial positioning information of the control handle is the tracking information of the control handle.
Specifically, the spatial positioning information of the control handle may be obtained as follows: the camera device (when the virtual reality host is a mobile terminal, the camera device may be a rear camera on the mobile terminal) photographs the space where the control handle is located to form a spatial picture of the control handle, and an inertial sensor mounted on the control handle detects the 3DOF data of the control handle. The camera device sends the spatial picture to the virtual reality host, and the inertial sensor likewise transmits the 3DOF data of the control handle to the virtual reality host. The virtual reality host computes on the spatial picture and the 3DOF data to generate the tracking information of the control handle, and then transmits that tracking information to the server.
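Both the head and the control handle follow the same pattern: the inertial sensor contributes 3DOF data while the camera's spatial picture contributes position. The patent only says the host "calculates" on both inputs and does not specify a fusion algorithm, so the sketch below is a purely hypothetical combination of the two sources:

```python
def fuse_pose(imu_3dof, camera_position):
    """Hypothetical fusion of a tracked object's pose: the inertial sensor's
    3DOF data supplies orientation (yaw, pitch, roll), and the position
    recovered from the camera's spatial picture supplies (x, y, z)."""
    yaw, pitch, roll = imu_3dof
    x, y, z = camera_position
    return {"orientation": (yaw, pitch, roll), "position": (x, y, z)}
```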
When the user motion tracking information includes one or more of the above (1), (2), and (3), as shown in fig. 9, the storage unit 205 includes:
the first storage module 2051, the first storage module 2051 is in communication connection with the virtual reality host 104, and the first storage module 2051 is configured to receive eyeball tracking information in a tracking information group transmitted by the virtual reality host 104 and store the eyeball tracking information; and
the second storage module 2052, the second storage module 2052 is in communication connection with the virtual reality host 104, and the second storage module 2052 is configured to receive the gesture tracking information of the user in the tracking information group transmitted by the virtual reality host 104 and store the gesture tracking information of the user; and/or
A third storage module 2053, which is in communication connection with the virtual reality host and is configured to receive the head tracking information of the user in the tracking information group transmitted by the virtual reality host and store the head tracking information of the user; and/or
And a fourth storage module 2054, which is in communication connection with the virtual reality host and is used for receiving the tracking information of the control handle in the tracking information group transmitted by the virtual reality host and storing the tracking information of the control handle.
The rendering unit 201 includes:
the first rendering module 2011, the first rendering module 2011 is in communication connection with the first storage module 2051, and the first rendering module 2011 is configured to receive the eyeball tracking information transmitted by the first storage module 2051, render the eyeball tracking information in the tracking information group, and generate first rendering information; and/or
The second rendering module 2012 is in communication connection with the second storage module 2052, and the second rendering module 2012 is configured to receive the gesture tracking information of the user transmitted by the second storage module 2052, and render the gesture tracking information of the user to generate second rendering information; and/or
The third rendering module 2013, the third rendering module 2013 is in communication connection with the third storage module 2053, and the third rendering module 2013 is configured to receive the head tracking information of the user transmitted by the third storage module 2053, and render the head tracking information of the user to generate third rendering information;
the fourth rendering module 2014 is in communication connection with the fourth storage module 2054, and the fourth rendering module 2014 is used for receiving the tracking information of the control handle transmitted by the fourth storage module 2054, rendering the tracking information of the control handle and generating fourth rendering information; and
an integration module 2015, the integration module 2015 being communicatively connected to the calibration unit 203, the first rendering module 2011, the second rendering module 2012 and/or the third rendering module 2013 and/or the fourth rendering module 2014, respectively, the integration module being configured to integrate the first rendering information, the second rendering information and/or the third rendering information and/or the fourth rendering information to generate the screen information.
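Because the second, third, and fourth modules are joined by "and/or", any subset of modalities may be present in a given tracking information group. The integration step can be sketched as follows; the modality keys and function names are illustrative assumptions:

```python
def integrate_renderings(group, renderers, integrate):
    """Each rendering module handles one modality of tracking information;
    the integration module combines whichever partial renderings exist."""
    parts = {}
    for modality, render in renderers.items():
        if modality in group:             # optional modules: 'and/or' semantics
            parts[modality] = render(group[modality])
    return integrate(parts)               # -> picture information
```

Absent modalities simply contribute nothing, so the same integration code serves every combination of installed modules.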
In this case, as shown in fig. 10, step S13 (the storage unit 205 receives the tracking information group transmitted by the virtual reality host and stores the tracking information group) includes:
step S131: the first storage module 2051 receives the eyeball tracking information in the tracking information group transmitted by the virtual reality host 104, stores the eyeball tracking information, and transmits the eyeball tracking information to the first rendering module 2011; and
step S132: the second storage module 2052 receives the gesture tracking information of the user in the tracking information group transmitted by the virtual reality host 104, stores the gesture tracking information of the user, and transmits the gesture tracking information to the second rendering module 2012; and/or
Step S133: the third storage module 2053 receives the head tracking information of the user in the tracking information group transmitted by the virtual reality host 104, stores the head tracking information of the user, and transmits the head tracking information to the third rendering module 2013; and/or
Step S134: the fourth storage module 2054 receives the tracking information of the joystick in the tracking information group transmitted by the virtual reality host 104, stores the tracking information of the joystick, and transmits the tracking information of the joystick to the fourth rendering module 2014.
Step S110 (the rendering unit renders the eyeball tracking information and the user motion tracking information in the tracking information group transmitted by the virtual reality host, generates the screen information, and transmits the screen information to the calibration unit) specifically includes the following steps:
step S1101: the first rendering module 2011 receives the eyeball tracking information in the tracking information group transmitted by the first storage module 2051, renders the eyeball tracking information, generates first rendering information, and transmits the first rendering information to the integrating module 2015;
step S1102: the second rendering module 2012 receives the gesture tracking information in the tracking information group transmitted by the second storage module 2052, renders the gesture tracking information, generates second rendering information, and transmits the second rendering information to the integrating module 2015; and/or
Step S1103: the third rendering module 2013 receives the head trace information in the trace information group transmitted by the third storage module 2053, renders the head trace information, generates third rendering information, and transmits the third rendering information to the integrating module 2015;
step S1104: the fourth rendering module 2014 receives the tracking information of the joystick of the tracking information group transmitted by the fourth storage module 2054, renders the tracking information of the joystick, generates fourth rendering information, and transmits the fourth rendering information to the integrating module 2015;
step S1105: the integration module 2015 integrates the first rendering information, the second rendering information, and/or the third rendering information and/or the fourth rendering information to generate screen information, and transmits the screen information to the calibration unit 203.
When the storage unit comprises a plurality of storage modules, the rendering unit comprises a plurality of rendering modules, with one storage module corresponding to one rendering module. Each storage module transmits the tracking information of one action (the eyeball tracking information or one kind of user action tracking information) transmitted by the virtual reality host to its corresponding rendering module; the rendering module renders that tracking information, and the integration module then integrates the plurality of rendering information to generate the picture information. Storing and rendering the different action tracking information separately and correspondingly improves rendering efficiency.
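The efficiency gain from pairing one storage module with one rendering module per action type lends itself to concurrent execution. The thread-pool strategy in this sketch is my assumption; the patent states only that separate modules improve rendering efficiency:

```python
from concurrent.futures import ThreadPoolExecutor

def render_parallel(group, renderers, integrate):
    """Render each available modality in its own worker, mirroring the
    one-storage-module-per-rendering-module pairing, then integrate."""
    present = {m: r for m, r in renderers.items() if m in group}
    if not present:
        return integrate({})
    with ThreadPoolExecutor(max_workers=len(present)) as pool:
        futures = {m: pool.submit(r, group[m]) for m, r in present.items()}
        parts = {m: f.result() for m, f in futures.items()}
    return integrate(parts)
```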
as a third aspect of the present invention, the present invention also provides an electronic device, as shown in fig. 11, an electronic device 900 includes one or more processors 901 and a memory 902.
The processor 901 may be a central processing unit (CPU) or another form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 900 to perform desired functions.
The memory 902 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory (cache). The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 901 to implement the virtual reality system-based distributed rendering method of the various embodiments of the present invention described above, or other desired functions.
In one example, the electronic device 900 may further include: an input device 903 and an output device 904, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input device 903 may include, for example, a keyboard, a mouse, and the like.
The output device 904 can output various information to the outside. The output device 904 may include, for example, a display, a communication network, a remote output device connected thereto, and so forth.
Of course, for simplicity, only some of the components of the electronic device 900 relevant to the present invention are shown in fig. 11, omitting components such as buses, input/output interfaces, and the like. In addition, electronic device 900 may include any other suitable components depending on the particular application.
In addition to the above methods and apparatus, embodiments of the invention may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps in the virtual reality system-based distributed rendering method according to various embodiments of the invention described in this specification.
The computer program product may write program code for carrying out operations of embodiments of the present invention in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present invention may also be a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the steps in the virtual reality system-based distributed rendering method of the present specification according to various embodiments of the present invention.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: a communication connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present invention have been described above with reference to specific embodiments, but it should be noted that the advantages, effects, etc. mentioned in the present invention are only examples and are not limiting, and the advantages, effects, etc. must not be considered to be possessed by various embodiments of the present invention. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the invention is not limited to the specific details described above.
The block diagrams of the devices, apparatuses, and systems involved in the present invention are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, or configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to."
It should also be noted that in the apparatus, devices and methods of the present invention, the components or steps may be broken down and/or re-combined. These decompositions and/or recombinations are to be regarded as equivalents of the present invention.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the invention, so that any modifications, equivalents and the like included in the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (15)

1. A virtual reality system with distributed rendering functionality, comprising:
at least one virtual reality device and a server;
wherein, the virtual reality equipment includes:
a head-mounted device having a display screen; an eyeball tracker for tracking eyeball tracking information of a user;
detecting means for tracking user motion tracking information of the user; and
the virtual reality host is in communication connection with the display screen, the eyeball tracker and the detection device respectively, and is used for acquiring a tracking information group of a user, and the tracking information group comprises a host number, eyeball tracking information and user action tracking information; and
the server is in communication connection with at least one virtual reality host, and is used for receiving eyeball tracking information and user action tracking information in a tracking information group transmitted by the virtual reality host to render, generating picture information and transmitting the picture information to the virtual reality host corresponding to the host number according to the host number in the tracking information group;
and the virtual reality host is also used for controlling the display screen to display the picture information according to the picture information.
2. The virtual reality system of claim 1, wherein the server comprises:
the rendering unit is in communication connection with the virtual reality host, and is used for receiving the eyeball tracking information and the user action tracking information in the tracking information group transmitted by the virtual reality host, rendering the eyeball tracking information and the user action tracking information and generating picture information; and
the data transmission unit is in communication connection with the rendering unit, receives the picture information transmitted by the rendering unit and the host number corresponding to the picture information, and sends the picture information to the virtual reality host corresponding to the host number according to the host number.
3. The virtual reality system of claim 2, wherein the server further comprises:
the calibration unit is in communication connection with the rendering unit and is used for receiving the picture information transmitted by the rendering unit and the host number corresponding to the picture information, calibrating the picture information and generating calibration picture information and a host number corresponding to the calibration picture information;
the virtual reality host is used for controlling the display screen to display the calibration picture information according to the calibration picture information.
4. The virtual reality system of claim 2, wherein the server further comprises:
the storage unit is respectively in communication connection with the rendering unit and the virtual reality host, and is used for receiving the tracking information group transmitted by the virtual reality host, storing the tracking information group and sending the tracking information group to the rendering unit.
5. The virtual reality system of claim 4, wherein the user motion tracking information comprises:
gesture tracking information of the user; and/or
Head tracking information of the user; and/or
Tracking information of the user's control handle.
6. The virtual reality system of claim 5,
the memory cell includes:
the first storage module is in communication connection with the virtual reality host, and is used for receiving eyeball tracking information in the tracking information group transmitted by the virtual reality host and storing the eyeball tracking information; and
the second storage module is in communication connection with the virtual reality host, and is used for receiving the gesture tracking information of the user in the tracking information group transmitted by the virtual reality host and storing the gesture tracking information of the user; and/or
The third storage module is in communication connection with the virtual reality host, and is used for receiving the head tracking information of the user in the tracking information group transmitted by the virtual reality host and storing the head tracking information of the user; and/or
And the fourth storage module is in communication connection with the virtual reality host, and is used for receiving the tracking information of the control handle in the tracking information group transmitted by the virtual reality host and storing the tracking information of the control handle.
7. The virtual reality system of claim 6,
the rendering unit includes:
the first rendering module is in communication connection with the first storage module, and is configured to receive the eyeball tracking information transmitted by the first storage module, render the eyeball tracking information in the eyeball tracking information group, and generate first rendering information; and/or
The second rendering module is in communication connection with the second storage module and is used for receiving the gesture tracking information of the user transmitted by the second storage module, rendering the gesture tracking information of the user and generating second rendering information; and/or
The third rendering module is in communication connection with the third storage module, and is configured to receive the head tracking information of the user transmitted by the third storage module, render the head tracking information of the user, and generate third rendering information;
the fourth rendering module is in communication connection with the fourth storage module, and is used for receiving the tracking information of the control handle transmitted by the fourth storage module, rendering the tracking information of the control handle and generating fourth rendering information; and
the integration module is in communication connection with the calibration unit, the first rendering module, the second rendering module and/or the third rendering module and/or the fourth rendering module respectively, and is used for integrating the first rendering information, the second rendering information and/or the third rendering information and/or the fourth rendering information to generate picture information.
8. The virtual reality system of claim 1,
the server is computer equipment; or
The server is a cloud server.
9. A distributed rendering method based on the virtual reality system of claim 1, comprising:
the virtual reality host acquires a tracking information group of a user, wherein the tracking information group comprises a host number, eyeball tracking information and user action tracking information;
the server renders the eyeball tracking information and the user action tracking information in a tracking information group transmitted by the virtual reality host to generate picture information, and transmits the picture information to the virtual reality host corresponding to the host number according to the host number in the tracking information group; and
and the virtual reality host controls the display screen to display the picture information according to the picture information.
10. The virtual reality system-based distributed rendering method of claim 9, wherein the server comprises a rendering unit and a data transmission unit;
wherein the step of the server rendering the eyeball tracking information and the user action tracking information in the tracking information group transmitted by the virtual reality host, generating picture information, and transmitting the picture information to the virtual reality host corresponding to the host number according to the host number in the tracking information group comprises:
the rendering unit renders the eyeball tracking information and the user action tracking information in a tracking information group transmitted by the virtual reality host to generate picture information;
and the data transmission unit transmits the picture information to the virtual reality host corresponding to the host number according to the host number in the tracking information group.
11. The virtual reality system-based distributed rendering method of claim 10, wherein the server further comprises a calibration unit; wherein the distributed rendering method further comprises:
the calibration unit receives the picture information sent by the rendering unit, calibrates the picture information and generates calibration picture information;
wherein, the data transmission unit transmits the image information to the virtual reality host corresponding to the host number according to the host number in the tracking information group, and the method comprises the following steps:
the data transmission unit transmits the calibration picture information to a virtual reality host corresponding to the host number according to the host number in the tracking information group;
the virtual reality host controls the display screen to display the picture information according to the picture information, and the method comprises the following steps:
and the virtual reality host controls the display screen to display the calibration picture information according to the calibration picture information.
12. The virtual reality system-based distributed rendering method of claim 10, wherein the server further comprises a storage unit, the distributed rendering method further comprising:
the storage unit receives the tracking information group transmitted by the virtual reality host and stores the tracking information group.
13. The virtual reality system-based distributed rendering method of claim 12, wherein the user action tracking information comprises:
gesture tracking information of the user; and/or
head tracking information of the user; and/or
tracking information of a control handle of the user.
14. The virtual reality system-based distributed rendering method of claim 13, wherein the storage unit comprises a first storage module, and/or a second storage module, and/or a third storage module, and/or a fourth storage module;
wherein the storage unit receiving the tracking information group transmitted by the virtual reality host and storing the tracking information group comprises:
the first storage module receives eyeball tracking information in the tracking information group transmitted by the virtual reality host and stores the eyeball tracking information; and
the second storage module receives the gesture tracking information of the user in the tracking information group transmitted by the virtual reality host and stores the gesture tracking information of the user; and/or
the third storage module receives the head tracking information of the user in the tracking information group transmitted by the virtual reality host and stores the head tracking information of the user; and/or
the fourth storage module receives the tracking information of the control handle in the tracking information group transmitted by the virtual reality host and stores the tracking information of the control handle.
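The per-modality storage of claims 12–14 can be sketched as a unit that always stores the eyeball tracking information and stores the optional modalities only when present, matching the claims' "and"/"and/or" structure. The dictionary keys and attribute names are hypothetical:

```python
class StorageUnit:
    """Illustrative: one storage module per tracking modality (claim 14)."""

    def __init__(self):
        self.eye_store = []      # first storage module (always used)
        self.gesture_store = []  # second storage module (optional)
        self.head_store = []     # third storage module (optional)
        self.handle_store = []   # fourth storage module (optional)

    def store(self, group):
        # Eyeball tracking information is stored unconditionally ("and" in
        # claim 14); the remaining modalities follow the "and/or" pattern.
        self.eye_store.append(group["eye"])
        if "gesture" in group:
            self.gesture_store.append(group["gesture"])
        if "head" in group:
            self.head_store.append(group["head"])
        if "handle" in group:
            self.handle_store.append(group["handle"])
```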
15. The virtual reality system-based distributed rendering method of claim 14, wherein the rendering unit comprises a first rendering module, and/or a second rendering module, and/or a third rendering module, and/or a fourth rendering module, and an integration module;
wherein the rendering unit rendering the eyeball tracking information and the user action tracking information in the tracking information group transmitted by the virtual reality host to generate the picture information comprises:
the first rendering module receives the eyeball tracking information in the tracking information group transmitted by the first storage module, renders the eyeball tracking information and generates first rendering information;
the second rendering module receives the gesture tracking information in the tracking information group transmitted by the second storage module, renders the gesture tracking information and generates second rendering information; and/or
the third rendering module receives the head tracking information in the tracking information group transmitted by the third storage module and renders the head tracking information to generate third rendering information;
the fourth rendering module receives the tracking information of the control handle of the tracking information group transmitted by the fourth storage module, renders the tracking information of the control handle and generates fourth rendering information; and
the integration module integrates the first rendering information, the second rendering information, and/or the third rendering information, and/or the fourth rendering information to generate the picture information.
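Claim 15's split-then-integrate structure can be sketched as follows. Each `render_*` helper stands in for one rendering module and `integrate` for the integration module; their bodies are placeholders, since the claims do not specify the rendering algorithms themselves:

```python
def render_eye(data):     return ("eye", data)      # first rendering module
def render_gesture(data): return ("gesture", data)  # second rendering module
def render_head(data):    return ("head", data)     # third rendering module
def render_handle(data):  return ("handle", data)   # fourth rendering module


def integrate(parts):
    # Integration module: compose the partial renders into one picture
    # information object (here, simply a tuple of the parts).
    return tuple(parts)


def render_frame(group):
    # First rendering information is always produced; the others only when
    # the corresponding tracking data is present ("and/or" in claim 15).
    parts = [render_eye(group["eye"])]
    if "gesture" in group:
        parts.append(render_gesture(group["gesture"]))
    if "head" in group:
        parts.append(render_head(group["head"]))
    if "handle" in group:
        parts.append(render_handle(group["handle"]))
    return integrate(parts)
```

Splitting the render by modality is what allows the work to be distributed across modules (or machines) before the integration module reassembles a single frame.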
CN202211212004.7A 2022-09-30 2022-09-30 Virtual reality system with distributed rendering function and distributed rendering method Pending CN115543084A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211212004.7A CN115543084A (en) 2022-09-30 2022-09-30 Virtual reality system with distributed rendering function and distributed rendering method

Publications (1)

Publication Number Publication Date
CN115543084A true CN115543084A (en) 2022-12-30

Family

ID=84732057

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211212004.7A Pending CN115543084A (en) 2022-09-30 2022-09-30 Virtual reality system with distributed rendering function and distributed rendering method

Country Status (1)

Country Link
CN (1) CN115543084A (en)

Similar Documents

Publication Publication Date Title
US10394314B2 (en) Dynamic adjustment of user interface
KR101890459B1 (en) Method and system for responding to user's selection gesture of object displayed in three dimensions
CN111949111B (en) Interaction control method and device, electronic equipment and storage medium
US20150138086A1 (en) Calibrating control device for use with spatial operating system
JP2017531227A (en) Interface providing method and apparatus for recognizing operation in consideration of user's viewpoint
US11449196B2 (en) Menu processing method, device and storage medium in virtual scene
US20210041942A1 (en) Sensing and control method based on virtual reality, smart terminal, and device having storage function
US20150355811A1 (en) Methods and systems for controlling a virtual interactive surface and interactive display systems
US11099630B2 (en) Drift cancelation for portable object detection and tracking
CN114706489A (en) Virtual method, device, equipment and storage medium of input equipment
CN112465971B (en) Method and device for guiding point positions in model, storage medium and electronic equipment
CN113689508A (en) Point cloud marking method and device, storage medium and electronic equipment
CN104704449A (en) User interface device and user interface method
CN115543084A (en) Virtual reality system with distributed rendering function and distributed rendering method
CN115512046B (en) Panorama display method and device for points outside model, equipment and medium
CN111429519B (en) Three-dimensional scene display method and device, readable storage medium and electronic equipment
CN113438463A (en) Method and device for simulating orthogonal camera image, storage medium and electronic equipment
US20240033614A1 (en) Positioning method and locator of combined controller, controller handle and virtual system
CN115167685A (en) Interaction method and controller for sharing virtual scene, handheld device and virtual system
CN115463409A (en) Frame supplementing method for picture information of handheld device and handheld device
US20240036637A1 (en) Spatial positioning method of separate virtual system, separate virtual system
JP7480408B1 (en) Information processing system, information processing device, program, and information processing method
CN113112613B (en) Model display method and device, electronic equipment and storage medium
US20240168566A1 (en) Finger Orientation Touch Detection
CN115253275A (en) Intelligent terminal, palm machine, virtual system and space positioning method of intelligent terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination