CN114527872A - Virtual reality interaction system, method and computer storage medium


Info

Publication number
CN114527872A
CN114527872A (application CN202210083807.0A; granted as CN114527872B)
Authority
CN
China
Prior art keywords: virtual scene, capture data, server, data acquisition, dynamic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210083807.0A
Other languages
Chinese (zh)
Other versions
CN114527872B (en)
Inventor
崔永太
谢冰
肖乐天
陈明洋
许秋子
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Realis Multimedia Technology Co Ltd
Original Assignee
Shenzhen Realis Multimedia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Realis Multimedia Technology Co Ltd filed Critical Shenzhen Realis Multimedia Technology Co Ltd
Priority to CN202210083807.0A priority Critical patent/CN114527872B/en
Publication of CN114527872A publication Critical patent/CN114527872A/en
Application granted granted Critical
Publication of CN114527872B publication Critical patent/CN114527872B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/20 Input arrangements for video game devices
    • A63F 13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F 13/213 Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Information Transfer Between Computers (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A virtual reality interaction system, a virtual reality interaction method and a server are provided. The method comprises the following steps: a virtual scene server receives the motion capture data sent by each motion capture data acquisition system and the operation commands from each virtual scene client, wherein there are at least two motion capture data acquisition systems and each motion capture data acquisition system corresponds to at least one local virtual scene client; the virtual scene server responds to the operation commands according to the received motion capture data and synchronizes the response result to each virtual scene client; and each virtual scene client adjusts its virtual scene according to the response result, the motion capture data acquired by its local motion capture data acquisition system, and the motion capture data from the other motion capture data acquisition systems forwarded by the local system, and displays the adjusted virtual scene to the user. The invention enables users located in different scenes to interact within the same virtual scene.

Description

Virtual reality interaction system, method and computer storage medium
Technical Field
The invention belongs to the technical field of virtual reality interaction, and particularly relates to a virtual interaction system and method for a remote scene and a computer storage medium.
Background
Currently, the general flow of virtual reality interaction is as follows: the motion capture data (three-dimensional spatial position) of the user is acquired and transmitted to the server of the virtual scene. The server determines the user's position in the virtual scene from the motion capture data, makes the corresponding interactive response, and synchronously displays the response result to the user. In this process, the motion capture data can be acquired in various ways, such as inertial, laser or optical motion capture.
In virtual reality interaction based on the optical motion capture technology, the multiple motion capture cameras in the optical motion capture system identify the optical marker points attached to the observed object, the image acquisition system of the motion capture cameras processes and calculates the coordinate position information (motion capture data) of the marker points, and that information is then transmitted over a network (wired, wireless, USB, etc.) to the camera server. The camera server receives the coordinate position information from the motion capture cameras, identifies the observed object from it, obtains the user's position in the physical scene, and then sends that position information to the server and the clients of the virtual scene. The virtual scene server maps the position information into the virtual scene, thereby determining the user's position in the virtual scene, which is displayed to the user through the virtual scene client.
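To make this pipeline concrete, the following minimal Python sketch traces the same data flow; every name in it (MarkerFrame, CameraServer, identify, update_position) is an illustrative assumption, not part of the patent or of any real motion capture SDK:

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class MarkerFrame:
        # Coordinates of the optical marker points, as computed by the
        # image acquisition system of the motion capture cameras.
        timestamp: float
        markers: List[Tuple[float, float, float]]  # one (x, y, z) per marker

    class CameraServer:
        # Identifies the observed object from marker coordinates and forwards
        # the physical-scene position to the virtual scene server and clients.
        def __init__(self, vs_server, vs_clients):
            self.vs_server = vs_server
            self.vs_clients = vs_clients

        def identify(self, frame: MarkerFrame):
            # Match the marker constellation to a known object and estimate
            # its position (the details depend on the capture system).
            user_id = "user-1"            # placeholder identification
            position = frame.markers[0]   # placeholder position estimate
            return user_id, position

        def on_frame(self, frame: MarkerFrame) -> None:
            user_id, position = self.identify(frame)
            self.vs_server.update_position(user_id, position)
            for client in self.vs_clients:
                client.update_position(user_id, position)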
Currently, in the virtual reality interaction process, as shown in fig. 3, the flow of the motion capture data is as follows: the virtual scene server 31 and the virtual scene client 32 both acquire their motion capture data from the optical motion capture system 33. Because the communication and synchronization mechanisms among the virtual scene server 31, the virtual scene client 32 and the optical motion capture system 33 were developed for a local area network, the current system can only realize virtual reality interaction within the same physical space.
With the further application of virtual reality interaction technology, there is a need for users in different places to interact in the same virtual scene, but at present no good solution exists.
Disclosure of Invention
In view of this, the present invention provides a virtual reality interaction system which can implement interaction of users in different scenes within the same virtual scene.
A first aspect of an embodiment of the present invention provides a virtual reality interaction system, where the system includes: at least two interactive subsystems and a virtual scene server; the virtual scene server operates in a wide area network; the interaction subsystem comprises: the system comprises a dynamic capture data acquisition system and at least one virtual scene client;
the dynamic capture data acquisition system is used for acquiring local dynamic capture data and sending the dynamic capture data to the local virtual scene client, the virtual scene server, and the dynamic capture data acquisition systems in the other interactive subsystems;
the virtual scene client is used for receiving an operation command of a local corresponding user and transmitting the operation command to the virtual scene server; receiving the dynamic capture data sent by the local dynamic capture data acquisition system and the dynamic capture data from other dynamic capture data acquisition systems sent by the local dynamic capture data acquisition system;
the virtual scene server is used for making a corresponding response according to the received operation commands transmitted by all the virtual scene clients and the dynamic capture data transmitted by all the dynamic capture data acquisition systems, and synchronizing the response result to each virtual scene client;
and the virtual scene client is used for adjusting the corresponding virtual scene according to the response result, the dynamic capture data acquired by the local dynamic capture data acquisition system and the dynamic capture data from other dynamic capture data acquisition systems transmitted by the local dynamic capture data acquisition system, and displaying the adjusted virtual scene to the user.
Wherein the kinetic capture data acquisition system is also used for establishing P2P communication with the kinetic capture data acquisition system in other interactive subsystems.
Wherein, the dynamic capture data acquisition system is an optical dynamic capture acquisition system, and comprises: a plurality of motion capture cameras and camera servers;
the motion capture camera is used for acquiring dynamic capture data of a local target object and transmitting the data to the camera server;
the camera server is specifically configured to establish P2P communication with camera servers in other motion capture data acquisition systems, synchronize the motion capture data to a local virtual scene client, and upload the motion capture data to the virtual scene server and camera servers in other motion capture data acquisition systems.
When the camera server establishes P2P communication with camera servers in other interactive subsystems, the camera server is specifically configured to:
sending a link request to the virtual scene server; the link request carries IP information of the camera server; so that the virtual scene server synchronizes the received IP information of all the camera servers to each camera server in the network;
the camera server is further used for receiving the IP information of all the camera servers transmitted by the virtual scene server and establishing P2P communication with other camera servers according to the IP information.
Wherein the kinetic capture data comprises: rigid body name, rigid body data, and rigid body identification number.
The dynamic capture data comprises dynamic capture data at a plurality of moments, and the dynamic capture data acquisition system is specifically used for:
and sending the motion capture data of part of the moments to the virtual scene server and motion capture data acquisition systems in other interactive subsystems according to a preset time interval.
The virtual scene server is specifically used for determining position information of a user in a virtual scene according to the motion capture data received at the current moment, and taking the position information as end point information; and using the position information of the user recorded in the virtual scene server as starting point information; performing linear interpolation processing according to the starting point position, the end point position, the obtained interpolation time interval and the preset time interval to simulate other position information of the user between the starting point position and the end point position so as to perform corresponding response;
the camera server is specifically used for determining the position information of the user in the virtual scene according to the motion capture data received at the current moment, and taking the position information as end point information; using the current position information of the user recorded in the camera server as starting point information; and performing linear interpolation processing according to the starting point position, the end point position, the obtained interpolation time interval and the preset time interval to simulate other position information of the user between the starting point position and the end point position, and synchronizing the position information to the local virtual scene client.
A second aspect of an embodiment of the present invention provides a virtual reality interaction method, including:
the virtual scene server receives the dynamic capture data sent by each dynamic capture data acquisition system and an operation command from each virtual scene client; wherein there are at least two dynamic capture data acquisition systems, and each dynamic capture data acquisition system corresponds to at least one local virtual scene client;
the virtual scene server responds to the operation command according to the received dynamic capture data and synchronizes a response result to each virtual scene client; and the virtual scene client can adjust the corresponding virtual scene according to the response result, the dynamic capture data acquired by the local dynamic capture data acquisition system and the dynamic capture data from other dynamic capture data acquisition systems transmitted by the local dynamic capture data acquisition system, and display the adjusted virtual scene to the user.
Before the virtual scene server receives the kinetic capture data sent by each kinetic capture data acquisition system, the method further comprises the following steps:
the virtual scene server establishes P2P communication between the kinetic capture data acquisition systems.
Wherein, the dynamic capture data acquisition system is an optical dynamic capture acquisition system, and comprises: a plurality of motion capture cameras and camera servers; the virtual scene server receives the dynamic capture data from each dynamic capture data acquisition system, and the method specifically comprises the following steps:
the virtual scene server receives the motion capture data from the camera server; the motion capture data is motion capture data of a local target object acquired by the motion capture camera.
The virtual scene server establishes P2P communication between the kinetic capture data acquisition systems, and specifically includes:
the virtual scene server receives a link request sent by each camera server;
the virtual scene server extracts the IP information of the camera server from the link request;
the virtual scene server synchronizes the extracted IP information of all the camera servers to each of the camera servers in the network so that each of the camera servers can establish P2P communication with the other camera servers according to the received IP information.
Wherein the kinetic capture data comprises: rigid body name, rigid body data, and rigid body identification number.
A third aspect of the embodiments of the present invention provides a server, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of any one of the above virtual reality interaction methods when executing the computer program.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the virtual reality interaction method described in any one of the above.
Compared with the prior art, the invention has the following beneficial effects:
according to the technical scheme provided by the invention, after the virtual scene client receives the operation commands of the user, the operation commands are all uploaded to the virtual scene server. The virtual scene server is used as a control center, and is used for correspondingly transmitting the operation commands of the users and transmitting the response result to each virtual scene client according to the received operation commands of all the users and the position information (motion capture data) of all the users. And each virtual scene client renders the corresponding virtual scene according to the received response result and the view angle information of each user and the user corresponding to the client, and displays the rendered virtual scene to the user, so that the virtual reality interaction of multiple users of the remote scene in the same virtual scene is realized.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic block diagram of a virtual reality interaction system according to a first embodiment of the present invention;
FIG. 2 is a schematic block diagram of a virtual reality interaction system according to a second embodiment of the present invention;
FIG. 3 is a schematic diagram of the flow of motion capture data in virtual reality interaction in the prior art;
FIG. 4 is a flowchart illustrating a virtual reality interaction method according to a first embodiment of the present invention;
FIG. 5 is a flowchart illustrating a virtual reality interaction method according to a second embodiment of the present invention;
fig. 6 is a schematic block diagram of an embodiment of a server provided by the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In order to illustrate the technical means of the present invention, the following description is given by way of specific examples.
The virtual reality interaction scheme of the embodiments of the present invention is suitable for virtual reality interaction across different scenes, that is, users in different physical areas can interact within the same virtual scene. Users in different physical areas are also referred to herein as users in different places, users in different scenes, or users under different motion capture systems. Detailed descriptions are given below by way of specific examples.
The virtual reality interaction system provided by the invention is used for realizing interaction of users located in different physical areas in the same virtual scene, and comprises: at least two interactive subsystems, and a virtual scene server. Each interactive subsystem comprises a motion capture data acquisition system and at least one virtual scene client. In the following description, the interaction system 100 includes two interactive subsystems, each of which includes one virtual scene client. One virtual scene client corresponds to one user, and each virtual scene client can receive the operation commands input by its corresponding user.
Fig. 1 is a block diagram of a virtual reality interaction system according to a first embodiment of the present invention. The interaction system 100 is used for implementing interaction of users located in different physical areas in the same virtual scene, and includes: a motion capture data acquisition system 1011 located in the first area 11, and a virtual scene client 1012 located in the same area as the motion capture data acquisition system 1011; a motion capture data acquisition system 1021 located in the second area 12, and a virtual scene client 1022 located in the same area as the motion capture data acquisition system 1021; and a virtual scene server 103 operating on a wide area network. The first area 11 and the second area 12 are different physical areas.
The motion capture data acquisition system 1011 acquires first motion capture data of a local target object (the target object may be a user or another object such as a game gun; in the following description the target object is taken to be a user), synchronizes the first motion capture data to the virtual scene client 1012, and at the same time transmits it to the motion capture data acquisition system 1021 and the virtual scene server 103. Similarly, the motion capture data acquisition system 1021 acquires second motion capture data of its local target object, synchronizes it to the virtual scene client 1022, and at the same time transmits it to the motion capture data acquisition system 1011 and the virtual scene server 103.
The dynamic capture data specifically includes: rigid body name, rigid body data and rigid body identification number. The terminal equipment receiving the dynamic capturing data can identify the rigid body according to the rigid body name and the rigid body identification number, determine the user to which the rigid body belongs, and simultaneously determine the position information of the user according to the rigid body data.
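As an illustration, one frame of such motion capture data could be represented as follows; in this hedged Python sketch the field names and types are assumptions, and only the three constituents named above (rigid body name, rigid body data, rigid body identification number) come from the description:

    from dataclasses import dataclass
    from typing import Dict, Tuple

    @dataclass
    class RigidBodyFrame:
        rigid_body_name: str                          # rigid body name
        rigid_body_id: int                            # rigid body identification number
        position: Tuple[float, float, float]          # rigid body data: location
        rotation: Tuple[float, float, float, float]   # rigid body data: orientation

    def resolve_user(frame: RigidBodyFrame,
                     registry: Dict[Tuple[str, int], str]):
        # Identify the rigid body, determine the user it belongs to, and read
        # off that user's position, as the receiving terminal device does.
        user = registry[(frame.rigid_body_name, frame.rigid_body_id)]
        return user, frame.position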
After receiving the first kinetic capture data acquired by the kinetic capture data acquisition system 1011 and the second kinetic capture data acquired by the kinetic capture data acquisition system 1021, the virtual scene server 103 may acquire the position information of all users in the virtual scene.
For the virtual scene client 1012, on one hand it receives the first motion capture data from the local motion capture data acquisition system 1011; on the other hand, it also receives, via the local system 1011, the second motion capture data originating from the motion capture data acquisition system 1021. That is, the virtual scene client 1012 can know the location information of all users across the different physical areas. Likewise, the virtual scene client 1022 receives the second motion capture data from its local motion capture data acquisition system 1021 and also receives, via the local system 1021, the first motion capture data originating from the motion capture data acquisition system 1011. In other words, even though the users are in different physical areas, every virtual scene client can know the location information of all users in the virtual scene.
Therefore, the purpose of having each motion capture data acquisition system synchronize its acquired user motion capture data to the local client, the motion capture data acquisition systems in other areas, and the virtual scene server is data sharing: the motion capture data of all users in the virtual scene is shared among the different motion capture data acquisition systems and the virtual scene server, achieving a data sharing effect similar to that within a single local area network. By sharing the motion capture data, the position of each user in the virtual scene can be determined during the virtual interaction, which guarantees the normal logic of remote virtual reality interaction and preserves the sense of immersion. Because each motion capture data acquisition system sends the acquired motion capture data to the local virtual scene client, the other motion capture data acquisition systems and the virtual scene server in parallel, the synchronization time is shortened, the data sharing efficiency is improved, the interaction delay is further reduced, and the interaction experience is improved.
When the motion capture data acquisition system 1011 and the motion capture data acquisition system 1021 communicate with each other, a P2P communication mode can be selected. To establish P2P communication between the motion capture data acquisition systems 1011 and 1021, each of them actively sends a link request to the virtual scene server 103, and the link request carries its own IP information. After receiving the link requests, the virtual scene server 103 extracts the IP information and synchronizes all the extracted IP information to the currently online motion capture data acquisition systems (1011, 1021). Once each motion capture data acquisition system has received all the IP information, it can establish a P2P communication connection with every other motion capture data acquisition system.
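This relay handshake could look roughly like the following sketch; the JSON message format, the server address and the port are assumptions made for illustration, not details specified by the patent:

    import json
    import socket

    SERVER_ADDR = ("vs-server.example.com", 9000)  # hypothetical virtual scene server
    PEER_PORT = 9001                               # hypothetical peer listening port

    def send_link_request(own_ip: str) -> list:
        # Register with the virtual scene server and receive the IP list of
        # all currently online motion capture data acquisition systems.
        with socket.create_connection(SERVER_ADDR) as sock:
            sock.sendall(json.dumps({"type": "link", "ip": own_ip}).encode())
            reply = json.loads(sock.recv(4096).decode())
        return [ip for ip in reply["ips"] if ip != own_ip]

    def connect_peers(own_ip: str) -> dict:
        # Establish a direct (P2P) connection to every other system.
        return {ip: socket.create_connection((ip, PEER_PORT))
                for ip in send_link_request(own_ip)}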
It is understood that the kinetic capture data acquisition system of embodiments of the present invention may be an inertial kinetic capture data acquisition system, a laser kinetic capture data acquisition system, an optical kinetic capture data acquisition system, or other types of kinetic capture data acquisition systems.
After the motion capture data acquisition systems have acquired and synchronized the motion capture data, the interaction enters its next stage.
The virtual scene client 1012 may also receive an operation command input by a local corresponding user (i.e., a user corresponding to the virtual scene client 1012) and forward the operation command to the virtual scene server 103. Likewise, the virtual scene client 1022 may also receive an operation command input by a local corresponding user (a user corresponding to the virtual scene client 1022), and forward the operation command to the virtual scene server 103. The operation command is an operation instruction of a user on a person or an object in the virtual scene.
Specifically, the user may input an operation command to the virtual scene client by means of a handle or an inertial gesture. After receiving the operation command, each virtual scene client converts it into a form that can be recognized by the virtual scene server 103 and transmits the converted command to the virtual scene server 103. That is, the virtual scene server 103 can know the operation commands of all users in different physical areas in the virtual scene.
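As a hedged illustration of this conversion step, an operation command might be serialized as below; the message schema is an assumption made for the example only:

    import json
    import time

    def make_operation_command(user_id: str, action: str, target_id: str) -> bytes:
        # Serialize an operation on a person or object in the virtual scene
        # into a form a server could parse.
        command = {
            "user": user_id,      # which user issued the command
            "action": action,     # e.g. "grab", "shoot", "open"
            "target": target_id,  # the person or object acted upon
            "ts": time.time(),    # client-side timestamp
        }
        return json.dumps(command).encode()

    # A client would forward, e.g., make_operation_command("u1", "grab", "door_3")
    # to the virtual scene server over its existing connection.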
The main roles of the virtual scene server 103 are: controlling the normal operation of the interaction logic. In order to realize normal interaction of users in different physical areas in the same virtual scene, the virtual scene server 103 needs to acquire position information of all users in the virtual scene and operation commands of all users. In view of the fact that the two conditions are already fulfilled in the foregoing description, the virtual scene server 103 may respond accordingly according to the received operation commands of all users and the position information of all users in the virtual scene. And synchronizes the response results to each virtual scene client, such as virtual scene client 1012 and virtual scene client 1022.
After synchronizing the response results to the virtual scene client 1012 and the virtual scene client 1022, for each virtual scene client that receives the response results, it needs to perform adjustment of the corresponding virtual scene according to the response results. The specific adjustment mode is as follows: the virtual scene client adjusts the virtual scene according to the response result, the position information (namely the first dynamic capture data and the second dynamic capture data) of all users in the virtual scene, and the visual angle information of the local user (the user corresponding to the client), and displays the adjusted virtual scene to the user. For example, the adjusted virtual scene may be displayed to the user through a helmet worn by the user. Therefore, the interaction of users in different physical areas in the same virtual scene is completed.
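A minimal sketch of this client-side adjustment loop follows; all class and method names (VirtualSceneClient, renderer, headset, and so on) are assumed for illustration:

    class VirtualSceneClient:
        def __init__(self, local_user_id, renderer, headset):
            self.local_user_id = local_user_id
            self.renderer = renderer   # scene renderer (assumed interface)
            self.headset = headset     # HMD worn by the local user
            self.positions = {}        # user_id -> position, from shared motion capture data

        def on_mocap(self, user_id, position):
            # Called for local frames and for frames forwarded from remote systems.
            self.positions[user_id] = position

        def on_response(self, response):
            # Adjust the scene from the server's response result and all user
            # positions, rendered from the local user's view angle.
            view = self.headset.view_angle()
            frame = self.renderer.render(response, self.positions, view)
            self.headset.display(frame)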
It is understood that the dynamic capture data acquisition system is of a variety of types, such as laser, inertial, or optical, and in the following embodiments, the dynamic capture data acquisition system will be described in detail as an example of an optical dynamic capture data acquisition system.
Fig. 2 is a block diagram of a virtual reality interaction system according to a second embodiment of the present invention. The difference between this embodiment and the embodiment shown in fig. 1 is that this embodiment specifies the structure of the motion capture data acquisition system. Here the motion capture data acquisition system is specifically an optical motion capture acquisition system, and each optical motion capture acquisition system comprises a camera server and a plurality of motion capture cameras. This is described in detail below.
As shown in fig. 2, the motion capture data acquisition system 1011 specifically includes: a plurality of motion capture cameras 1011a and a camera server 1011b. Similarly, the motion capture data acquisition system 1021 specifically includes: a plurality of motion capture cameras 1021a and a camera server 1021b.
The motion capture cameras are used for acquiring the motion capture data of local users and transmitting it to the corresponding camera server. Specifically, the plurality of motion capture cameras 1011a are configured to acquire the first motion capture data of local users and transmit it to the camera server 1011b, and the plurality of motion capture cameras 1021a are configured to acquire the second motion capture data of local users and transmit it to the camera server 1021b.
The functions of the camera server are: establishing P2P communication with the camera servers of the other motion capture data acquisition systems, and sharing the local motion capture data with the local virtual scene client, the virtual scene server and the camera servers in the other motion capture data acquisition systems. Specifically, the camera server 1011b is configured to establish P2P communication with the camera server 1021b, and to transmit the first motion capture data acquired by the motion capture cameras 1011a to the local virtual scene client 1012, the virtual scene server 103 and the camera server 1021b. Similarly, the camera server 1021b is configured to establish P2P communication with the camera server 1011b, and to transmit the second motion capture data acquired by the motion capture cameras 1021a to the local virtual scene client 1022, the virtual scene server 103 and the camera server 1011b.
Note that the method of establishing P2P communication between the camera server 1021b and the camera server 1011b may be:
the camera server 1021b sends a link request to the virtual scene server 103, where the link request carries IP information of the camera server 1021 b; the camera server 1011b also sends a link request to the virtual scene server 103, where the link request carries the IP information of the camera server 1011 b. The virtual scene server 103 synchronizes the received IP information of the camera server 1021b and the IP information of the camera server 1011b to the online camera servers (i.e., the camera server 1021b and the camera server 1011b) in the network.
For the camera server 1021b, after receiving the IP information of the camera server 1021b and the IP information of the camera server 1011b, a connection request is initiated to the camera server 1011b in accordance with the IP information of the camera server 1011b to establish P2P communication. Likewise, for the camera server 1011b, after receiving the IP information of the camera server 1021b and the IP information of the camera server 1011b, it initiates a connection request to the camera server 1021b according to the IP information of the camera server 1021b to establish P2P communication.
After P2P communication is established between the camera servers, the motion capture data collected by the motion capture cameras (1011a, 1021a) can be shared between them: the camera server 1021b can obtain the motion capture data of the camera server 1011b and synchronize it to the local virtual scene client 1022 when the interaction requires it; similarly, the camera server 1011b can obtain the motion capture data of the camera server 1021b and synchronize it to the local virtual scene client 1012. In this way, all virtual scene clients in different physical areas can obtain the motion capture data of all users in the virtual scene interaction.
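Once the P2P links exist, the sharing step itself might look like the sketch below; the send targets are assumed to be callables wrapping the connections from the earlier handshake sketch, and the concurrent dispatch mirrors the parallel synchronization described for the first embodiment:

    import json
    from concurrent.futures import ThreadPoolExecutor

    def share_frame(frame: dict, send_targets: list) -> None:
        # Send one motion capture frame to the local client, the virtual
        # scene server and the peer camera servers concurrently.
        payload = json.dumps(frame).encode()
        with ThreadPoolExecutor(max_workers=max(1, len(send_targets))) as pool:
            list(pool.map(lambda send: send(payload), send_targets))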
The virtual reality interaction system of the embodiment of the invention collects the motion capture data of users with a plurality of motion capture cameras combined with a camera server, and establishes P2P communication between the camera servers with the virtual scene server as a relay so as to share the motion capture data, which ensures that the transmission of the motion capture data is not disturbed by the external network. Meanwhile, after a virtual scene client receives an operation command of a user, the operation command is uploaded to the virtual scene server. Acting as the interaction control center, the virtual scene server responds to the operation commands of the users according to the received operation commands of all the users and the position information (motion capture data) of all the users, and sends the response result to each virtual scene client. Each virtual scene client renders its virtual scene according to the received response result, the position information of each user and the view angle information of the user corresponding to that client, and displays the rendered virtual scene to the user, so that multi-user virtual reality interaction of different scenes in the same virtual scene is realized.
It is understood that, during the virtual reality interaction process, the user is moving in real time, and the motion capture data collected by the motion capture data collection system is continuous, that is, the motion capture data includes motion capture data at a plurality of time instants. After the dynamic capture data acquisition system acquires dynamic capture data, the acquired dynamic capture data at multiple moments are generally required to be synchronized to ensure the integrity of the dynamic capture data.
However, if the data volume of the acquired motion capture data is large, sharing all of it imposes a serious load on the network bandwidth, and when the network environment is poor this causes response delays that defeat the real-time effect of virtual reality interaction. It can therefore be considered not to share all of the data, i.e., to share only part of it, for example by selecting a portion of the motion capture data acquired at multiple moments according to preset time intervals. In that case, because the motion capture data received by the virtual scene server and the camera servers is incomplete, the interactive picture may momentarily jump or stall during the interactive response. Therefore, when the interaction system of the first or second embodiment operates and the data volume of the motion capture data is too large for all of it to be shared, the sharing method of the motion capture data can be optimized; after the optimization, the network load is reduced and the momentary jumping or stalling of the picture during virtual reality interaction is avoided. The optimization is described in detail next.
Specifically, when motion capture data is shared, for example between the camera servers (1011b, 1021b) or from the camera servers (1011b, 1021b) to the virtual scene server 103, the motion capture data acquisition system does not share all of the motion capture data acquired by the motion capture cameras, but only part of it. For example, suppose a motion capture camera acquires five frames of motion capture data at times T1, T2, T3, T4 and T5. A part of these five frames (e.g. the frames at times T2 and T5) can be selected at preset time intervals, and only the selected frames are shared, so as to relieve the network load. The preset time intervals may be equal or unequal.
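A small sketch of this frame selection, assuming timestamped (time, data) pairs and an illustrative 0.1 s preset interval:

    def select_frames(frames, preset_interval=0.1):
        # Keep only frames spaced at least preset_interval seconds apart;
        # everything else is dropped before sharing to relieve the network.
        shared, last_t = [], None
        for t, data in frames:
            if last_t is None or t - last_t >= preset_interval:
                shared.append((t, data))
                last_t = t
        return shared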
Sharing only part of the motion capture data relieves the network burden, but it introduces the problem of the interactive picture stalling or jumping. To solve this problem, the virtual scene server and the camera servers can perform linear interpolation on the received motion capture data to simulate the motion capture data at the moments that were not uploaded, and render the picture according to the simulated data, thereby avoiding stalls or jumps of the interactive picture during the interaction.
When the virtual scene server 103 receives the uploaded selected part of the motion capture data, it needs to perform linear interpolation processing, which specifically works as follows:
for example, when receiving the motion capture data at time T2, the virtual scene server 103 determines that the user's position information in the virtual scene is point B based on the motion capture data at time T2. Meanwhile, the virtual scene server also checks the position information of the user in the virtual scene recorded by the virtual scene server as the point A. Then, linear interpolation processing is performed according to the points a and B, the obtained interpolation time interval, and a preset time interval (time difference between T2 and T5) to ensure that the virtual scene server 103 can just receive the moving capture data at the time T5 when the user moves from the point a to the point B.
Specifically, the virtual scene server 103 may perform the linear interpolation processing according to the points A and B, the obtained interpolation time interval and the preset time interval (the time difference between T2 and T5) as follows:
Taking point A as the start position, point B as the end position, and the time difference between T2 and T5 as the preset time interval, together with the obtained interpolation time interval, the interpolated positions between point A and point B, i.e. the simulated position information of the user, are calculated according to the following formulas:
x_n = x_{n-1} + (X_v × T_{n-1,n}) / T_0
y_n = y_{n-1} + (Y_v × T_{n-1,n}) / T_0
z_n = z_{n-1} + (Z_v × T_{n-1,n}) / T_0
where (x_n, y_n, z_n) are the coordinates of the n-th interpolated position in the three-dimensional coordinate system, n = 1, 2, 3, ...; for n = 1, (x_0, y_0, z_0) are the coordinates of the start position; (X_v, Y_v, Z_v) is the vector in the three-dimensional coordinate system from the start position A to the end position B, which can be obtained from the coordinates of A and B; T_{n-1,n} is the time required to move from the (n-1)-th interpolated position to the n-th interpolated position (the interpolation time interval), which can be set or obtained from the running platform; and T_0 is the preset time interval.
After the position information of the user between point A and point B has been simulated with the above formulas, the corresponding response can be made according to the simulated position information.
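The formulas can be exercised with a short sketch (a hypothetical helper, not taken from the patent):

    def interpolate(a, b, t0, interp_interval):
        # Simulate positions between the recorded start A and the newly
        # received end B: x_n = x_{n-1} + (X_v * T_{n-1,n}) / T_0, likewise y, z.
        vx, vy, vz = (b[i] - a[i] for i in range(3))  # vector from A to B
        steps = round(t0 / interp_interval)           # number of interpolated points
        points, (x, y, z) = [], a
        for _ in range(steps):
            x += vx * interp_interval / t0
            y += vy * interp_interval / t0
            z += vz * interp_interval / t0
            points.append((x, y, z))
        return points

    # Example: A=(0,0,0), B=(1,0,0), T_0=0.1 s, interpolating every 0.02 s
    # yields five evenly spaced points that end at B.
    print(interpolate((0, 0, 0), (1, 0, 0), 0.1, 0.02))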
Similarly, the camera servers (1011b, 1021b) can simulate the user position information in the same manner as the virtual scene server, and synchronize the simulated position information to their local virtual scene clients (1012, 1022), so that the virtual scene clients can render their virtual scenes according to the simulated position information, which further ensures the fluency of the interactive picture.
The virtual reality interaction system for the remote scene is described in detail with reference to fig. 1 to 2, and a virtual reality interaction method and a computer-readable storage medium for the remote scene using the interaction system are described in detail with reference to the drawings. To avoid redundancy, the terms already described above may not be repeated below.
Fig. 4 is a schematic flowchart of a virtual reality interaction method applied to remote scenes according to a first embodiment of the present invention. The virtual reality interaction method can run on the interaction systems shown in fig. 1 and fig. 2; in this embodiment, the method is described from the virtual scene server side. The virtual reality interaction method comprises the following steps:
in step 401, the virtual scene server receives the kinetic capture data sent by each kinetic capture data acquisition system and the operation command from each virtual scene client.
The virtual reality interaction method runs on the interaction systems shown in fig. 1 and fig. 2, so the virtual scene server can receive motion capture data sent by at least two motion capture data acquisition systems. The at least two motion capture data acquisition systems are located in different acquisition regions, i.e., in different physical areas or places, and each motion capture data acquisition system corresponds to at least one local virtual scene client. The motion capture data is the data of local users acquired by each motion capture data acquisition system. After acquiring its local motion capture data, each system synchronizes the data to its local virtual scene client and at the same time transmits it to the other motion capture data acquisition systems and the virtual scene server.
The dynamic capture data specifically includes: rigid body name, rigid body data and rigid body identification number. The terminal equipment receiving the dynamic capturing data can identify the rigid body according to the rigid body name and the rigid body identification number, determine the user to which the rigid body belongs, and simultaneously determine the position information of the user according to the rigid body data.
After the virtual scene server receives the dynamic capture data acquired by the two dynamic capture data acquisition systems, the position information of all users in the virtual scene can be acquired.
Each virtual scene client receives the motion capture data from its local motion capture data acquisition system and, via that local system, also receives the motion capture data from the other motion capture data acquisition systems. Thus, even though the users are in different physical areas, every virtual scene client can know the location information of all users in the virtual scene.
Therefore, the purpose of having the motion capture data acquisition system synchronize the acquired local motion capture data to the local client, the motion capture data acquisition systems in other areas and the virtual scene server is data sharing: the motion capture data of all users in the virtual scene is shared among the different motion capture data acquisition systems and the virtual scene server, achieving a data sharing effect similar to that within a single local area network. By sharing the motion capture data, the position of each user in the virtual scene can be determined during the virtual interaction, which guarantees the normal logic of remote virtual reality interaction and preserves the sense of immersion. Because the motion capture data acquisition system sends the acquired data to the local virtual scene client, the other motion capture data acquisition systems and the virtual scene server in parallel, the synchronization time is shortened, the data sharing efficiency is improved, the interaction delay is further reduced, and the interaction experience is improved.
When the motion capture data acquisition systems communicate with each other, P2P communication between them can be established through the virtual scene server. That is, before the motion capture data and the operation commands are received, P2P communication between the motion capture data acquisition systems may be established through the virtual scene server. Furthermore, the motion capture data acquisition system of the embodiment of the invention can be an inertial, laser, optical or other type of motion capture data acquisition system.
Step 402, the virtual scene server responds to the operation command according to the received dynamic capture data, and synchronizes a response result to each virtual scene client.
After the motion capture data acquisition systems have acquired and synchronized the motion capture data, the interaction enters its next stage.
Each virtual scene client may also receive an operation command input by its local corresponding user (i.e., the user corresponding to that virtual scene client) and forward the operation command to the virtual scene server. The operation command is an operation instruction of a user on a person or an object in the virtual scene. Specifically, the user may input an operation command to the virtual scene client by means of a handle or an inertial gesture. After receiving the operation command, each virtual scene client converts it into a form that can be recognized by the virtual scene server and transmits it to the virtual scene server. That is, the virtual scene server can know the operation commands of all users in different physical areas in the virtual scene.
The main functions of the virtual scene server are as follows: controlling the normal operation of the interaction logic. In order to realize normal interaction of users in different physical areas in the same virtual scene, the virtual scene server needs to acquire position information of all users in the virtual scene and operation commands of all users. In view of the fact that the two conditions are implemented in the foregoing description, the virtual scene server may respond accordingly according to the received operation commands of all users and the position information of all users in the virtual scene. And synchronizing the response result to each virtual scene client.
After synchronizing the response result to each virtual scene client, for each virtual scene client that receives the response result, it needs to adjust the corresponding virtual scene according to the response result. The specific adjustment mode is as follows: and the virtual scene client adjusts the virtual scene according to the response result, the position information of all the users in the virtual scene and the visual angle information of the local user (the user corresponding to the client), and displays the adjusted virtual scene to the user. For example, the adjusted virtual scene may be displayed to the user through a helmet worn by the user. Therefore, the interaction of users in different physical areas in the same virtual scene is completed.
It is understood that the dynamic capture data acquisition system is of a variety of types, such as laser, inertial, or optical, and in the following embodiments, the dynamic capture data acquisition system will be described in detail as an example of an optical dynamic capture data acquisition system.
Fig. 5 is a schematic flowchart of a virtual reality interaction method applied to remote scenes according to a second embodiment of the present invention. The virtual reality interaction method can run on the interaction systems shown in fig. 1 and fig. 2. In the embodiment of the invention, the method is described from the virtual scene server side. The difference between this embodiment and the embodiment shown in fig. 4 is that the motion capture data acquisition system is an optical motion capture acquisition system comprising a plurality of motion capture cameras and a camera server. Accordingly, the manner in which the virtual scene server establishes P2P communication and the manner in which the motion capture data is received are described in detail below.
In step 501, the virtual scene server receives a link request from each camera server.
Step 502, the virtual scene server extracts the IP information of the camera server from the link request.
Step 503, the virtual scene server synchronizes the extracted IP information of all the camera servers to each camera server in the network; so that each of the camera servers can establish P2P communication with the other camera servers based on the received IP information.
In step 504, the virtual scene server receives the kinetic capture data from each kinetic capture data acquisition system and the operation command from each virtual scene client.
And 505, the virtual scene server responds to the operation command according to the received dynamic capture data, and synchronizes a response result to each virtual scene client.
As can be seen from the above steps, the function of the motion capture camera is to collect and transmit the motion capture data of the local user to the corresponding camera server. And the camera server has the following functions: establishing P2P communication with camera servers of other motion capture data acquisition systems, sharing local motion capture data to local virtual scene clients, virtual scene servers, and camera servers in other motion capture data acquisition systems.
It should be noted that, the way for establishing P2P communication between the camera servers may be:
each camera server sends a link request to the virtual scene server, wherein the link request carries the IP information of the camera server. The virtual scene server extracts the IP information of the camera servers from the received link request and synchronizes the extracted IP information of all the camera servers to each online camera server in the network.
For each camera server, after receiving the IP information of all the camera servers from the virtual scene server, a link request is sent to the other camera servers according to the received IP information to establish P2P communication.
After P2P communication is established between the camera servers, the dynamic capture data acquired by the dynamic capture cameras can be shared between the camera servers, so that the camera servers can acquire the dynamic capture data of other camera servers and synchronize to the local virtual scene client when interaction is required; therefore, all virtual scene clients in different physical areas can be guaranteed to be capable of acquiring the motion capture data of all users in virtual scene interaction.
According to the virtual reality interaction method, the motion capture data of users is collected with a plurality of motion capture cameras combined with the camera server, and P2P communication between the camera servers is established with the virtual scene server as a relay so as to share the motion capture data, which ensures that the transmission of the motion capture data is not disturbed by the external network. Meanwhile, after a virtual scene client receives an operation command of a user, the operation command is uploaded to the virtual scene server. Acting as the interaction control center, the virtual scene server responds to the operation commands of the users according to the received operation commands of all the users and the position information (motion capture data) of all the users, and sends the response result to each virtual scene client. Each virtual scene client renders its virtual scene according to the received response result, the position information of each user and the view angle information of the user corresponding to that client, and displays the rendered virtual scene to the user, so that multi-user virtual reality interaction of different scenes in the same virtual scene is realized.
Fig. 6 is a schematic block diagram of a server provided in an embodiment of the present invention. As shown in fig. 6, the server 6 of this embodiment includes: one or more processors 60, a memory 61, and a computer program 62 stored in the memory 61 and executable on the processors 60. When executing the computer program 62, the processor 60 implements the steps in the above-described data synchronization method embodiments, for example steps S401 to S402, or steps S501 to S505.
Illustratively, the computer program 62 may be partitioned into one or more modules/units that are stored in the memory 61 and executed by the processor 60 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 62 in the server 6.
The server includes, but is not limited to, the processor 60 and the memory 61. Those skilled in the art will appreciate that fig. 6 is merely an example of the server 6 and does not constitute a limitation on it; the server may include more or fewer components than shown, combine certain components, or use different components. For example, the server may also include input devices, output devices, network access devices, buses, and the like.
The processor 60 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 61 may be an internal storage unit of the server 6, such as a hard disk or memory of the server 6. The memory 61 may also be an external storage device of the server 6, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the server 6. Further, the memory 61 may include both an internal storage unit and an external storage device of the server 6. The memory 61 is used to store the computer program and other programs and data required by the server, and may also be used to temporarily store data that has been output or is to be output.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The technical solutions of the embodiments of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device or processor to execute all or part of the steps of the methods described in the embodiments of the present invention.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A virtual reality interaction system, characterized in that the system comprises: at least two interaction subsystems and a virtual scene server; each interaction subsystem comprises: a motion capture data acquisition system and at least one virtual scene client; one virtual scene client corresponds to one user;
the motion capture data acquisition system is configured to acquire motion capture data of a local target object and send the motion capture data to the local virtual scene client, the virtual scene server, and the motion capture data acquisition systems in the other interaction subsystems; the motion capture data acquisition system shares, at a preset time interval, the motion capture data of a part of the capture moments with the virtual scene server and the motion capture data acquisition systems in the other interaction subsystems;
the virtual scene client is configured to receive an operation command of the locally corresponding user and transmit the operation command to the virtual scene server; and to receive the motion capture data sent by the local motion capture data acquisition system, including the motion capture data from the other motion capture data acquisition systems forwarded by the local one, whereby the virtual scene client can obtain the position information of all users in the different physical areas;
the virtual scene server is configured to determine the position information of a user in the virtual scene according to the received operation command and the received motion capture data of the part of the moments, and to take this position information as end point information, with the position information of the user currently recorded in the virtual scene server taken as start point information; to perform linear interpolation according to the start point position, the end point position, an obtained interpolation time interval, and the preset time interval, so as to simulate the position information of the user between the start point position and the end point position (a worked sketch of this interpolation is given after the claims); to respond accordingly based on the position information; and to synchronize the response result to each virtual scene client;
and the virtual scene client is configured to adjust the corresponding virtual scene according to the response result, the position information of all users in the virtual scene, and the view angle information of the local user, and to display the adjusted virtual scene to the user, thereby completing the interaction of users in different physical areas within the same virtual scene.
2. The virtual reality interaction system of claim 1, wherein the motion capture data acquisition system is further configured to establish P2P communication with the motion capture data acquisition systems in the other interaction subsystems.
3. The virtual reality interaction system of claim 2, wherein the motion capture data acquisition system is an optical motion capture acquisition system comprising: a plurality of motion capture cameras and a camera server;
the motion capture cameras are configured to acquire motion capture data of a local target object and transmit the data to the camera server;
the camera server is specifically configured to establish P2P communication with the camera servers in the other motion capture data acquisition systems, synchronize the motion capture data to the local virtual scene client, and upload the motion capture data to the virtual scene server and the camera servers in the other motion capture data acquisition systems.
4. The virtual reality interaction system of claim 3, wherein, when establishing P2P communication with the camera servers in the other interaction subsystems, the camera server is specifically configured to:
send a link request to the virtual scene server, the link request carrying the IP information of the camera server, so that the virtual scene server synchronizes the received IP information of all camera servers to each camera server in the network;
the camera server is further configured to receive the IP information of all camera servers transmitted by the virtual scene server and to establish P2P communication with the other camera servers according to the IP information.
5. The virtual reality interaction system of claim 1, wherein the motion capture data comprises: a rigid body name, rigid body data, and a rigid body identification number.
6. The virtual reality interaction system of claim 1, wherein the preset time intervals may be equal or unequal.
7. The virtual reality interaction system of any one of claims 1 to 6, wherein the camera server is specifically configured to determine the position information of the user in the virtual scene according to the motion capture data received at the current moment and take it as end point information, with the position information of the user currently recorded in the virtual scene server taken as start point information; and to perform linear interpolation according to the start point position, the end point position, an obtained interpolation time interval, and the preset time interval, so as to simulate the position information of the user between the start point position and the end point position, and to synchronize this position information to the local virtual scene client.
8. A virtual reality interaction method, applied to a virtual reality interaction system, the method comprising the following steps:
the virtual scene server receives the motion capture data sent by each motion capture data acquisition system and receives an operation command from each virtual scene client; the system comprises at least two motion capture data acquisition systems, each motion capture data acquisition system corresponds to at least one local virtual scene client, and the virtual scene clients can obtain the position information of all users in different physical areas; each motion capture data acquisition system shares, at a preset time interval, the motion capture data of a part of the capture moments with the virtual scene server and with the other motion capture data acquisition systems;
the virtual scene server determines the position information of a user in the virtual scene according to the received operation command and the received motion capture data of the part of the moments, and takes this position information as end point information, with the position information of the user currently recorded in the virtual scene server taken as start point information; performs linear interpolation according to the start point position, the end point position, an obtained interpolation time interval, and the preset time interval, so as to simulate the position information of the user between the start point position and the end point position; responds accordingly based on the position information; and synchronizes the response result to each virtual scene client; and each virtual scene client adjusts the corresponding virtual scene according to the response result, the position information of all users in the virtual scene, and the view angle information of the local user, and displays the adjusted virtual scene to the user, so as to complete the interaction of users in different physical areas within the same virtual scene.
9. A computer-readable storage medium in which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method as claimed in claim 8.
10. A server comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method as claimed in claim 8 when executing the computer program.
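Purely for illustration of the linear interpolation recited in claims 1, 7, and 8: a minimal sketch in which the currently recorded position is the start point, the newly shared position is the end point, and the interpolation time interval subdivides the preset sharing interval. The function name and list-based pose format are assumptions of the sketch.

```python
def interpolate_positions(start, end, preset_interval, interp_interval):
    """Simulate the positions of a user between the start point (the
    position currently recorded by the virtual scene server) and the end
    point (the newly received motion capture position).

    With a preset sharing interval T and an interpolation step dt, each
    sample at fraction k*dt/T lies on the straight line from start to end.
    """
    steps = max(1, round(preset_interval / interp_interval))
    t_values = [k * interp_interval / preset_interval
                for k in range(1, steps + 1)]
    return [[s + t * (e - s) for s, e in zip(start, end)] for t in t_values]

# Example: data shared every 100 ms, interpolated every 20 ms -> 5 samples
# sliding from the old position toward the newly shared one.
print(interpolate_positions([0.0, 0.0, 0.0], [1.0, 0.5, 0.0], 0.1, 0.02))
```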
CN202210083807.0A 2017-08-25 2017-08-25 Virtual reality interaction system, method and computer storage medium Active CN114527872B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210083807.0A CN114527872B (en) 2017-08-25 2017-08-25 Virtual reality interaction system, method and computer storage medium

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
PCT/CN2017/099011 WO2019037074A1 (en) 2017-08-25 2017-08-25 Virtual reality interaction system and method, and computer storage medium
CN202210083807.0A CN114527872B (en) 2017-08-25 2017-08-25 Virtual reality interaction system, method and computer storage medium
CN201780000973.7A CN109313484B (en) 2017-08-25 2017-08-25 Virtual reality interaction system, method and computer storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201780000973.7A Division CN109313484B (en) 2017-08-25 2017-08-25 Virtual reality interaction system, method and computer storage medium

Publications (2)

Publication Number Publication Date
CN114527872A true CN114527872A (en) 2022-05-24
CN114527872B CN114527872B (en) 2024-03-08

Family

ID=65205393

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210083807.0A Active CN114527872B (en) 2017-08-25 2017-08-25 Virtual reality interaction system, method and computer storage medium
CN201780000973.7A Active CN109313484B (en) 2017-08-25 2017-08-25 Virtual reality interaction system, method and computer storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201780000973.7A Active CN109313484B (en) 2017-08-25 2017-08-25 Virtual reality interaction system, method and computer storage medium

Country Status (2)

Country Link
CN (2) CN114527872B (en)
WO (1) WO2019037074A1 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110108159B (en) * 2019-06-03 2024-05-17 武汉灏存科技有限公司 Simulation system and method for large-space multi-person interaction
CN110471772B (en) * 2019-08-19 2022-03-15 上海云绅智能科技有限公司 Distributed system, rendering method thereof and client
CN110610547B (en) * 2019-09-18 2024-02-13 瑞立视多媒体科技(北京)有限公司 Cabin practical training method, system and storage medium based on virtual reality
CN110609622A (en) * 2019-09-18 2019-12-24 深圳市瑞立视多媒体科技有限公司 Method, system and medium for realizing multi-person interaction by combining 3D and virtual reality technology
CN110989837B (en) * 2019-11-29 2023-03-24 上海海事大学 Virtual reality system for passenger liner experience
CN111047710B (en) * 2019-12-03 2023-12-26 深圳市未来感知科技有限公司 Virtual reality system, interactive device display method, and computer-readable storage medium
CN111338481B (en) * 2020-02-28 2023-06-23 武汉灏存科技有限公司 Data interaction system and method based on whole body dynamic capture
CN111381792B (en) * 2020-03-12 2023-06-02 上海曼恒数字技术股份有限公司 Virtual reality data transmission method and system supporting multi-user cooperation
CN112423020B (en) * 2020-05-07 2022-12-27 上海哔哩哔哩科技有限公司 Motion capture data distribution and acquisition method and system
CN111796670A (en) * 2020-05-19 2020-10-20 北京北建大科技有限公司 Large-space multi-person virtual reality interaction system and method
CN111988375B (en) * 2020-08-04 2023-10-27 瑞立视多媒体科技(北京)有限公司 Terminal positioning method, device, equipment and storage medium
CN112130660B (en) 2020-08-14 2024-03-15 青岛小鸟看看科技有限公司 Interaction method and system based on virtual reality all-in-one machine
CN112150246A (en) * 2020-09-25 2020-12-29 刘伟 3D data acquisition system and application thereof
CN112256125B (en) * 2020-10-19 2022-09-13 中国电子科技集团公司第二十八研究所 Laser-based large-space positioning and optical-inertial-motion complementary motion capture system and method
CN114051148A (en) * 2021-11-10 2022-02-15 拓胜(北京)科技发展有限公司 Virtual anchor generation method and device and electronic equipment
CN115114537B (en) * 2022-08-29 2022-11-22 成都航空职业技术学院 Interactive virtual teaching aid implementation method based on file content identification

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090043192A (en) * 2007-10-29 2009-05-06 (주)인텔리안시스템즈 Remote controlling system and method of operating the system
KR20130095904A (en) * 2012-02-21 2013-08-29 (주)드리밍텍 Virtual environment management system and server thereof
CN105450736A (en) * 2015-11-12 2016-03-30 小米科技有限责任公司 Method and device for establishing connection with virtual reality
US20160098601A1 (en) * 2014-04-10 2016-04-07 Huizhou Tcl Mobile Communication Co., Ltd. Method and system for a mobile terminal to achieve user interaction by simulating a real scene
US20160140752A1 (en) * 2014-11-13 2016-05-19 Utherverse Digital Inc. System, method and apparatus of simulating physics in a virtual environment
US20160192029A1 (en) * 2014-12-26 2016-06-30 Mattias Bergstrom Method and system for adaptive virtual broadcasting of digital content
CN105892686A (en) * 2016-05-05 2016-08-24 刘昊 3D virtual-real broadcast interaction method and 3D virtual-real broadcast interaction system
CN105915849A (en) * 2016-05-09 2016-08-31 惠州Tcl移动通信有限公司 Virtual reality sports event play method and system
CN106383578A (en) * 2016-09-13 2017-02-08 网易(杭州)网络有限公司 Virtual reality system, and virtual reality interaction apparatus and method
CN106598229A (en) * 2016-11-11 2017-04-26 歌尔科技有限公司 Virtual reality scene generation method and equipment, and virtual reality system
CN106843532A (en) * 2017-02-08 2017-06-13 北京小鸟看看科技有限公司 The implementation method and device of a kind of virtual reality scenario
CN106843460A (en) * 2016-12-13 2017-06-13 西北大学 The capture of multiple target position alignment system and method based on multi-cam
CN107024995A (en) * 2017-06-05 2017-08-08 河北玛雅影视有限公司 Many people's virtual reality interactive systems and its control method

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8730156B2 (en) * 2010-03-05 2014-05-20 Sony Computer Entertainment America Llc Maintaining multiple views on a shared stable virtual space
CA2684487C (en) * 2007-04-17 2017-10-24 Bell Helicopter Textron Inc. Collaborative virtual reality system using multiple motion capture systems and multiple interactive clients
CN104469442A (en) * 2014-11-21 2015-03-25 天津思博科科技发展有限公司 Device for achieving collective singing through intelligent terminal
CN104866101B (en) * 2015-05-27 2018-04-27 世优(北京)科技有限公司 The real-time interactive control method and device of virtual objects
CN105323129B (en) * 2015-12-04 2019-02-12 上海弥山多媒体科技有限公司 A kind of family's virtual reality entertainment systems
CN106125903B (en) * 2016-04-24 2021-11-16 林云帆 Multi-person interaction system and method
CN106534125B (en) * 2016-11-11 2021-05-04 厦门汇鑫元软件有限公司 Method for realizing VR multi-person interactive system based on local area network
CN106774949A (en) * 2017-03-09 2017-05-31 北京神州四达科技有限公司 Collaborative simulation exchange method, device and system
CN106843507B (en) * 2017-03-24 2024-01-05 苏州创捷传媒展览股份有限公司 Virtual reality multi-person interaction method and system


Also Published As

Publication number Publication date
CN109313484A (en) 2019-02-05
CN109313484B (en) 2022-02-01
WO2019037074A1 (en) 2019-02-28
CN114527872B (en) 2024-03-08

Similar Documents

Publication Publication Date Title
CN109313484B (en) Virtual reality interaction system, method and computer storage medium
US9947139B2 (en) Method and apparatus for providing hybrid reality environment
CN108986189B (en) Method and system for capturing and live broadcasting of real-time multi-person actions based on three-dimensional animation
US11380078B2 (en) 3-D reconstruction using augmented reality frameworks
JP2018036955A (en) Image processor, image processing method, and program
CN105429989A (en) Simulative tourism method and system for virtual reality equipment
CN110728739B (en) Virtual human control and interaction method based on video stream
CN111627116A (en) Image rendering control method and device and server
US20200097732A1 (en) Markerless Human Movement Tracking in Virtual Simulation
WO2018103233A1 (en) Virtual reality-based viewing method, device, and system
CN102939139A (en) Calibration of portable devices in shared virtual space
CN109126121B (en) AR terminal interconnection method, system, device and computer readable storage medium
WO2019085829A1 (en) Method and apparatus for processing control system, and storage medium and electronic apparatus
CN107479701B (en) Virtual reality interaction method, device and system
CN113515187B (en) Virtual reality scene generation method and network side equipment
CN109840948B (en) Target object throwing method and device based on augmented reality
CN113454685A (en) Cloud-based camera calibration
CN115987806A (en) Model-driven cloud edge collaborative immersive content reproduction method and device
CN111562841B (en) Off-site online method, device, equipment and storage medium of virtual reality system
CN113093915A (en) Multi-person interaction control method, device, equipment and storage medium
CN214851294U (en) Virtual reality equipment management system based on WIFI6 router
US12033355B2 (en) Client/server distributed camera calibration
KR102308347B1 (en) Synchronization device for camera and synchronization method for camera
US20240193218A1 (en) Offloading slam processing to a remote device
CN116980556A (en) Virtual image display method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant