CN114527872B - Virtual reality interaction system, method and computer storage medium


Info

Publication number
CN114527872B
CN114527872B
Authority
CN
China
Prior art keywords
virtual scene
capture data
dynamic capture
server
dynamic
Prior art date
Legal status
Active
Application number
CN202210083807.0A
Other languages
Chinese (zh)
Other versions
CN114527872A (en)
Inventor
崔永太
谢冰
肖乐天
陈明洋
许秋子
Current Assignee
Shenzhen Realis Multimedia Technology Co Ltd
Original Assignee
Shenzhen Realis Multimedia Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Realis Multimedia Technology Co Ltd
Priority to CN202210083807.0A
Publication of CN114527872A
Application granted
Publication of CN114527872B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/20: Input arrangements for video game devices
    • A63F 13/21: Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F 13/213: Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Information Transfer Between Computers (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A virtual reality interaction system, method and server. The method comprises the following steps: a virtual scene server receives the dynamic capture (motion capture) data sent by each dynamic capture data acquisition system and the operation commands from each virtual scene client; there are at least two dynamic capture data acquisition systems, each corresponding to at least one local virtual scene client. The virtual scene server responds to the operation commands according to the received dynamic capture data and synchronizes the response results to each virtual scene client, so that each virtual scene client can adjust its virtual scene according to the response result, the dynamic capture data acquired by its local dynamic capture data acquisition system, and the dynamic capture data that the local system forwards from the other acquisition systems, and display the adjusted virtual scene to the user. The invention enables users located in different physical places to interact in the same virtual scene.

Description

Virtual reality interaction system, method and computer storage medium
Technical Field
The invention belongs to the technical field of virtual reality interaction, and particularly relates to a virtual reality interaction system, method and computer storage medium for remote (multi-site) scenes.
Background
Currently, the general flow of virtual reality interaction is: dynamic capture data (three-dimensional spatial positions) of the user are acquired and transmitted to the server of the virtual scene. The server determines the user's position in the virtual scene from the dynamic capture data, performs the corresponding interactive response, and synchronously displays the response result to the user. Various acquisition modes can be used for the dynamic capture data, such as inertial, laser or optical motion capture.
In virtual reality interaction based on optical motion capture technology, the multiple dynamic capture cameras of an optical system identify optical marker points attached to the observed object; the image acquisition system of the cameras processes and calculates the coordinate position information of the marker points (i.e., the dynamic capture data), which is then transmitted over a network (wired, wireless, USB, etc.) to the camera server. The camera server receives the coordinate position information from the dynamic capture cameras, identifies the observed object from it, obtains the user's position in the physical scene, and sends that position to the server and clients of the virtual scene. The virtual scene server maps the position information into the virtual scene, thereby determining the user's position in the virtual scene, which is displayed to the user through the virtual scene client.
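The patent does not specify how physical-scene positions are mapped into the virtual scene; as a minimal illustrative sketch (all names hypothetical), the mapping can be modeled as a translation against a calibrated scene origin plus a scale factor:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float

def map_to_virtual(physical: Pose, origin: Pose, scale: float = 1.0) -> Pose:
    # Translate relative to the calibrated scene origin, then scale
    # physical metres into virtual-scene units.
    return Pose(
        (physical.x - origin.x) * scale,
        (physical.y - origin.y) * scale,
        (physical.z - origin.z) * scale,
    )

# A user 2 m in front of the calibrated origin lands 2 scene units
# from the virtual origin when the scale factor is 1.0.
p = map_to_virtual(Pose(2.0, 0.0, 0.0), Pose(0.0, 0.0, 0.0))
```

A real system would also handle rotation and per-site calibration; the sketch only illustrates the coordinate hand-off from the camera server to the virtual scene.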
In the above interaction flow, as shown in fig. 3, the dynamic capture data travels as follows: the virtual scene server 31 and the virtual scene client 32 both acquire their dynamic capture data from the optical dynamic capture system 33. Because the communication and synchronization between the virtual scene server 31, the client 32 and the optical dynamic capture system 33 were developed for a local area network, the current system can only realize virtual reality interaction within the same physical space.
With the further application of virtual reality interaction technology, there is a need for users in different places to interact in the same virtual scene, but no good solution currently exists.
Disclosure of Invention
In view of the above, the invention provides a virtual reality interaction system that enables users in different places to interact in the same virtual scene.
A first aspect of an embodiment of the present invention provides a virtual reality interaction system, the system including: at least two interaction subsystems and a virtual scene server; the virtual scene server operates on a wide area network; the interaction subsystem comprises: the dynamic capture data acquisition system and the at least one virtual scene client;
The dynamic capture data acquisition system is used for acquiring local dynamic capture data and transmitting it to the local virtual scene client, the virtual scene server, and the dynamic capture data acquisition systems in the other interaction subsystems;
the virtual scene client is used for receiving operation commands of its local user and transmitting them to the virtual scene server, and for receiving the dynamic capture data transmitted by the local dynamic capture data acquisition system, including the dynamic capture data that the local system has received from the other dynamic capture data acquisition systems;
the virtual scene server is used for responding correspondingly according to the received operation commands transmitted by all the virtual scene clients and the dynamic capture data transmitted by all the dynamic capture data acquisition systems, and synchronizing the response results to each virtual scene client;
the virtual scene client is used for adjusting the corresponding virtual scene according to the response result, the dynamic capture data acquired by the local dynamic capture data acquisition system and the dynamic capture data transmitted by the local dynamic capture data acquisition system from other dynamic capture data acquisition systems, and displaying the adjusted virtual scene to the user.
The dynamic capture data acquisition system is also used for establishing P2P communication with dynamic capture data acquisition systems in other interaction subsystems.
The dynamic capture data acquisition system may be an optical dynamic capture acquisition system, comprising: a plurality of dynamic capture cameras and a camera server;
the dynamic capture camera is used for collecting dynamic capture data of a local target object and transmitting the dynamic capture data to the camera server;
the camera server is specifically configured to establish P2P communication with camera servers in other dynamic capture data acquisition systems, synchronize the dynamic capture data to a local virtual scene client, and upload the dynamic capture data to the virtual scene server and the camera servers in other dynamic capture data acquisition systems.
Wherein, when the camera server establishes P2P communication with the camera server in the other interaction subsystem, the camera server is specifically configured to:
sending a link request to the virtual scene server; the link request carries IP information of the camera server; so that the virtual scene server synchronizes the received IP information of all camera servers to each camera server in the network;
the camera server is also used for receiving the IP information of all camera servers transmitted by the virtual scene server and establishing P2P communication with other camera servers according to the IP information.
Wherein, the dynamic capture data comprises: rigid body name, rigid body data, and rigid body identification number.
The dynamic capture data comprises dynamic capture data of a plurality of moments, and the dynamic capture data acquisition system is specifically used for:
and transmitting the dynamic capture data of part of the moments to the dynamic capture data acquisition system in the virtual scene server and other interaction subsystems according to a preset time interval.
The virtual scene server is specifically configured to determine the position of the user in the virtual scene from the dynamic capture data received at the current moment and take it as the end point information, and to take the user's position previously recorded in the virtual scene server as the start point information; linear interpolation is then performed according to the start point, the end point, the acquired interpolation time interval and the preset time interval, so as to simulate the user's other positions between the start point and the end point and respond accordingly;
the camera server is specifically configured to determine the position of the user in the virtual scene from the dynamic capture data received at the current moment and take it as the end point information, to take the user's currently recorded position as the start point information, to perform linear interpolation according to the start point, the end point, the acquired interpolation time interval and the preset time interval so as to simulate the user's other positions between the start point and the end point, and to synchronize this information to the local virtual scene client.
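The linear interpolation step can be sketched as follows. This is an illustrative reading of the claim (hypothetical names), assuming positions are 3-tuples and the preset interval is an integer multiple of the interpolation interval:

```python
def lerp_positions(start, end, interp_dt, preset_dt):
    # Simulate intermediate positions between the last recorded position
    # (start) and the newly received position (end): one point every
    # `interp_dt` across the `preset_dt` gap between network updates.
    steps = int(preset_dt / interp_dt)
    return [
        tuple(s + (e - s) * (i / steps) for s, e in zip(start, end))
        for i in range(1, steps + 1)
    ]

# With updates every 40 ms and a 10 ms interpolation interval, four
# points are generated, the last one being the received end position.
points = lerp_positions((0.0, 0.0, 0.0), (4.0, 0.0, 0.0), 10, 40)
```

Interpolating between the sparsely transmitted samples keeps motion smooth at the client despite the reduced transmission rate.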
A second aspect of an embodiment of the present invention provides a virtual reality interaction method, the method comprising:
the virtual scene server receives the dynamic capture data sent by each dynamic capture data acquisition system and the operation commands from each virtual scene client; there are at least two dynamic capture data acquisition systems, each corresponding to at least one local virtual scene client;
the virtual scene server responds to the operation commands according to the received dynamic capture data and synchronizes the response results to each virtual scene client, so that each virtual scene client can adjust its virtual scene according to the response result, the dynamic capture data acquired by its local dynamic capture data acquisition system, and the dynamic capture data that the local system forwards from the other acquisition systems, and display the adjusted virtual scene to the user.
Before the virtual scene server receives the dynamic capture data sent by each dynamic capture data acquisition system, the method further comprises the following steps:
and the virtual scene server establishes P2P communication between the dynamic capture data acquisition systems.
The dynamic capture data acquisition system may be an optical dynamic capture acquisition system comprising a plurality of dynamic capture cameras and a camera server. The virtual scene server receiving dynamic capture data from each dynamic capture data acquisition system specifically comprises:
the virtual scene server receives dynamic capture data from the camera server; the dynamic capturing data are the dynamic capturing data of the local target object collected by the dynamic capturing camera.
The virtual scene server establishes P2P communication between the dynamic capture data acquisition systems, and specifically comprises the following steps:
the virtual scene server receives a link request sent by each camera server;
the virtual scene server extracts IP information of the camera server from the link request;
the virtual scene server synchronizes the extracted IP information of all camera servers to each of the camera servers in the network so that each of the camera servers can establish P2P communication with other camera servers according to the received IP information.
Wherein, the dynamic capture data comprises: rigid body name, rigid body data, and rigid body identification number.
A third aspect of an embodiment of the present invention provides a server, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of any one of the virtual reality interaction methods described above when executing the computer program.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any one of the virtual reality interaction methods described above.
Compared with the prior art, the invention has the beneficial effects that:
according to the technical scheme provided by the invention, after the virtual scene client receives the operation command of the user, the operation command is uploaded to the virtual scene server. The virtual scene server serves as a control center, and corresponds to the operation commands of the users according to the received operation commands of all the users and the received position information (dynamic capture data) of all the users, and transmits response results to each virtual scene client. And each virtual scene client renders the corresponding virtual scene according to the received response result and the visual angle information of each user and the user corresponding to the client, and displays the visual angle information to the user, so that the virtual reality interaction of multiple users of different places in the same virtual scene is realized.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a block diagram of a first embodiment of a virtual reality interaction system provided by the present invention;
FIG. 2 is a block diagram of a second embodiment of a virtual reality interaction system provided by the present invention;
FIG. 3 is a flow diagram of dynamic capture data in prior art virtual reality interactions;
fig. 4 is a schematic flow chart of a first embodiment of a virtual reality interaction method provided by the present invention;
fig. 5 is a schematic flow chart of a second embodiment of a virtual reality interaction method provided by the present invention;
fig. 6 is a schematic block diagram of an embodiment of a server provided by the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted as "when..once" or "in response to a determination" or "in response to detection" depending on the context. Similarly, the phrase "if a determination" or "if a [ described condition or event ] is detected" may be interpreted in the context of meaning "upon determination" or "in response to determination" or "upon detection of a [ described condition or event ]" or "in response to detection of a [ described condition or event ]".
In order to illustrate the technical scheme of the invention, the following description is made by specific examples.
The virtual reality interaction scheme provided by the embodiments of the invention is suitable for virtual reality interaction across different places, i.e., users in different physical areas can interact in the same virtual scene. Users in different physical areas are also referred to herein as: users in different places, users in different remote scenes, or users under different dynamic capture systems. Detailed descriptions follow by way of specific examples.
The virtual reality interaction system provided by the invention is used to enable users in different physical areas to interact in the same virtual scene, and comprises: at least two interaction subsystems and a virtual scene server. Each interaction subsystem comprises a dynamic capture data acquisition system and at least one virtual scene client. In the following, the interaction system 100 is described taking two interaction subsystems as an example, each including one virtual scene client. Each virtual scene client corresponds to one user and can receive operation commands input by that user.
Fig. 1 is a block diagram of a first embodiment of the virtual reality interaction system provided by the present invention. The interaction system 100 is configured to enable users located in different physical areas to interact in the same virtual scene, and includes: a dynamic capture data acquisition system 1011 located in the first region 11 and a virtual scene client 1012 located in the same region; a dynamic capture data acquisition system 1021 located in the second region 12 and a virtual scene client 1022 located in the same region; and a virtual scene server 103 running on a wide area network. The first region 11 and the second region 12 are different physical regions.
The dynamic capture data acquisition system 1011 acquires first dynamic capture data of a local target object (the target object may be a user or another object such as a game gun; hereinafter the user is taken as the example) and synchronizes it to the virtual scene client 1012; the system 1011 also transmits the acquired first dynamic capture data to the dynamic capture data acquisition system 1021 and the virtual scene server 103. Similarly, the dynamic capture data acquisition system 1021 acquires second dynamic capture data of its local target object and synchronizes it to the virtual scene client 1022, and also transmits the acquired second dynamic capture data to the dynamic capture data acquisition system 1011 and the virtual scene server 103.
The dynamic capturing data specifically may include: rigid body name, rigid body data, and rigid body identification number. The terminal equipment receiving the dynamic capture data can identify the rigid body according to the name and the identification number of the rigid body, determine the user to which the rigid body belongs, and determine the position information of the user according to the rigid body data.
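As an illustrative sketch of such a record and the look-up it enables (field names and the registry are hypothetical; the patent only names the three components):

```python
from dataclasses import dataclass

@dataclass
class RigidBodyFrame:
    name: str              # rigid body name, e.g. "helmet_A"
    rigid_body_id: int     # rigid body identification number
    position: tuple        # rigid body data: (x, y, z) in metres
    rotation: tuple        # rigid body data: quaternion (qx, qy, qz, qw)

def owner_of(frame: RigidBodyFrame, registry: dict) -> str:
    # Identify which user a rigid body belongs to via its ID, as the
    # receiving terminal is described to do.
    return registry.get(frame.rigid_body_id, "unknown")

registry = {7: "user-A", 9: "user-B"}
frame = RigidBodyFrame("helmet_A", 7, (1.2, 1.7, 0.4), (0.0, 0.0, 0.0, 1.0))
```

The name and ID together resolve the rigid body to a user, while the pose data gives that user's position and orientation.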
The virtual scene server 103 receives the first dynamic capture data collected by the dynamic capture data collection system 1011 and the second dynamic capture data collected by the dynamic capture data collection system 1021, so as to obtain the position information of all users in the virtual scene.
For the virtual scene client 1012: on the one hand it receives the first dynamic capture data transmitted by the local dynamic capture data acquisition system 1011; on the other hand it receives the second dynamic capture data that the local system 1011 has obtained from the dynamic capture data acquisition system 1021. That is, the virtual scene client 1012 learns the position information of all users across the different physical areas. Likewise, the virtual scene client 1022 receives the second dynamic capture data from its local dynamic capture data acquisition system 1021, together with the first dynamic capture data that the system 1021 has obtained from the dynamic capture data acquisition system 1011. In short, every virtual scene client learns the position information of all users in the virtual scene, even though the users are in different physical areas.
Therefore, each dynamic capture data acquisition system synchronizes the acquired user dynamic capture data to the local client, the acquisition systems in the other areas, and the virtual scene server, with the aim of sharing the dynamic capture data: the dynamic capture data of all users in the virtual scene is shared among the different acquisition systems and the virtual scene server, achieving a data-sharing effect similar to that of a single local area network. Through this sharing, the position of every user in the virtual scene can be determined, the normal logic of remote virtual reality interaction is guaranteed, and the sense of immersion is preserved. Because each acquisition system synchronizes its data to the local virtual scene client, the other acquisition systems, and the virtual scene server in parallel, the synchronization time is shortened, data-sharing efficiency is improved, interaction latency is reduced, and the interaction experience improves.
The dynamic capture data acquisition system 1011 and the dynamic capture data acquisition system 1021 may communicate via P2P. To establish P2P communication, each of the systems 1011 and 1021 actively sends a link request carrying its own IP information to the virtual scene server 103. After receiving the link requests, the virtual scene server 103 extracts the IP information and synchronizes all extracted IP information to the currently online dynamic capture data acquisition systems (1011, 1021). Once each acquisition system has received all the IP information, it can establish P2P communication connections with the other acquisition systems.
It can be appreciated that the dynamic capture data acquisition system of the embodiments of the present invention may be an inertial dynamic capture data acquisition system, a laser dynamic capture data acquisition system, an optical dynamic capture data acquisition system, or other types of dynamic capture data acquisition systems.
After the dynamic capture data acquisition system acquires and synchronizes the dynamic capture data, the next stage of interaction process is started.
The virtual scene client 1012 may also receive an operation command input by a local corresponding user (i.e., a user corresponding to the virtual scene client 1012), and forward the operation command to the virtual scene server 103. Similarly, the virtual scene client 1022 may also receive an operation command input by a local corresponding user (a user corresponding to the virtual scene client 1022), and forward the operation command to the virtual scene server 103. The operation command is an operation instruction of a user on a person or object in the virtual scene.
Specifically, the user may input an operation command to the virtual scene client by means of a handle or an inertial gesture. Each virtual scene client, after receiving the operation command, converts the operation command into a form that can be recognized by the virtual scene server 103 and transmits the operation command to the virtual scene server 103. That is, the virtual scene server 103 can learn the operation commands of all users in different physical areas in the virtual scene.
The main function of the virtual scene server 103 is to control the normal running of the interaction logic. To achieve normal interaction of users in different physical areas in the same virtual scene, the virtual scene server 103 needs the position information of all users in the virtual scene and the operation commands of all users. Since both have been obtained as described above, the virtual scene server 103 can respond according to the received operation commands and positions of all users, and synchronize the response results to each virtual scene client, e.g. to the virtual scene client 1012 and the virtual scene client 1022.
After synchronizing the response results to the virtual scene client 1012 and the virtual scene client 1022, for each virtual scene client that receives the response results, it needs to make an adjustment of the corresponding virtual scene according to the response results. The specific adjustment mode is as follows: the virtual scene client adjusts the virtual scene according to the response result, the position information (namely the first dynamic capture data and the second dynamic capture data) of all users in the virtual scene and the visual angle information of the local user (the user corresponding to the client), and displays the adjusted virtual scene to the user. For example, the adjusted virtual scene may be displayed to the user through a helmet worn by the user. Thus, the interaction of the users in different physical areas in the same virtual scene is completed.
It will be appreciated that there are several types of dynamic capture data acquisition system, such as laser, inertial and optical. In the following embodiments, an optical dynamic capture data acquisition system is taken as an example and described in detail.
Referring to fig. 2, a block diagram of a second embodiment of the virtual reality interaction system provided by the present invention, this embodiment differs from the embodiment shown in fig. 1 in that it specifies the structure of the dynamic capture data acquisition system. In this embodiment the dynamic capture data acquisition system is specifically an optical dynamic capture acquisition system, and each optical dynamic capture acquisition system comprises a camera server and a plurality of dynamic capture cameras. A specific description follows.
As shown in fig. 2, the dynamic capture data acquisition system 1011 specifically includes: a plurality of motion capture cameras 1011a and a camera server 1011b. Similarly, the dynamic capture data acquisition system 1021 specifically includes: a plurality of motion capture cameras 1021a and a camera server 1021b.
The dynamic capture cameras are used to collect dynamic capture data of the local user and transmit it to the corresponding camera server. Specifically, the plurality of dynamic capture cameras 1011a collect the first dynamic capture data of the local user and transmit it to the camera server 1011b, and the plurality of dynamic capture cameras 1021a collect the second dynamic capture data of the local user and transmit it to the camera server 1021b.
The functions of the camera server are: establishing P2P communication with the camera servers of the other dynamic capture data acquisition systems, and sharing the local dynamic capture data with the local virtual scene client, the virtual scene server, and the camera servers in the other dynamic capture data acquisition systems. Specifically, the camera server 1011b establishes P2P communication with the camera server 1021b and transmits the first dynamic capture data collected by the dynamic capture cameras 1011a to the local virtual scene client 1012, the virtual scene server 103, and the camera server 1021b. Similarly, the camera server 1021b establishes P2P communication with the camera server 1011b and transmits the second dynamic capture data collected by the dynamic capture cameras 1021a to the local virtual scene client 1022, the virtual scene server 103, and the camera server 1011b.
Note that the camera server 1021b and the camera server 1011b may establish P2P communication as follows:
the camera server 1021b sends a link request to the virtual scene server 103, wherein the link request carries the IP information of the camera server 1021 b; the camera server 1011b also transmits a link request to the virtual scene server 103, the link request carrying IP information of the camera server 1011b. The virtual scene server 103 synchronizes the received IP information of the camera server 1021b and the received IP information of the camera server 1011b to the online camera servers (i.e., the camera server 1021b and the camera server 1011 b) in the network.
For the camera server 1021b, after receiving the IP information of the camera server 1021b and the IP information of the camera server 1011b, a connection request is initiated to the camera server 1011b according to the IP information of the camera server 1011b to establish P2P communication. Also, for the camera server 1011b, it initiates a connection request to the camera server 1021b according to the IP information of the camera server 1021b after receiving the IP information of the camera server 1021b and the IP information of the camera server 1011b to establish P2P communication.
After P2P communication is established between the camera servers, the dynamic capture data collected by the dynamic capture cameras (1011a, 1021a) can be shared between the camera servers. The camera server 1021b can thus obtain the dynamic capture data of the camera server 1011b and synchronize it to the local virtual scene client 1022 when interaction requires; likewise, the camera server 1011b can obtain the dynamic capture data of the camera server 1021b and synchronize it to the local virtual scene client 1012 when interaction requires. This ensures that each virtual scene client in the different physical areas can obtain the dynamic capture data of all users in the virtual scene interaction.
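As a simplified illustration of the relay step described above (all class and variable names are hypothetical assumptions, and the dictionary updates stand in for real network messages), the virtual scene server only forwards IP information; the camera servers then connect to each other directly:

```python
class VirtualSceneServer:
    """Relays camera-server addresses so the peers can connect directly."""
    def __init__(self):
        self.online = {}  # camera-server name -> IP address

    def handle_link_request(self, name, ip):
        # Record the requester and return the current address book,
        # standing in for synchronizing it to every online camera server.
        self.online[name] = ip
        return dict(self.online)

class CameraServer:
    def __init__(self, name, ip):
        self.name, self.ip = name, ip
        self.peers = {}  # established P2P links: peer name -> peer IP

    def receive_address_book(self, book):
        # Initiate a connection to every other camera server in the book.
        for peer, peer_ip in book.items():
            if peer != self.name:
                self.peers[peer] = peer_ip  # stand-in for a real socket connect

relay = VirtualSceneServer()
a = CameraServer("1011b", "10.0.0.11")
b = CameraServer("1021b", "10.0.0.21")
relay.handle_link_request(a.name, a.ip)
book = relay.handle_link_request(b.name, b.ip)
a.receive_address_book(book)
b.receive_address_book(book)
```

After this exchange each camera server holds the other's address and the relay is no longer needed for dynamic capture data traffic.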
In the virtual reality interaction system provided by this embodiment of the invention, the dynamic capture data of the users are collected by combining a plurality of dynamic capture cameras with camera servers, and P2P communication between the camera servers is established with the virtual scene server acting as a relay, so that the dynamic capture data are shared and their transmission is not interfered with by the external network. Meanwhile, after a virtual scene client receives a user's operation command, it uploads the command to the virtual scene server. The virtual scene server, acting as the interaction control center, responds to the operation commands according to the received operation commands of all users and the received position information (dynamic capture data) of all users, and transmits the response results to each virtual scene client. Each virtual scene client renders its virtual scene according to the received response results, the position information of each user, and the view angle information of the user corresponding to that client, and displays the scene to the user, thereby realizing virtual reality interaction of multiple users in different places within the same virtual scene.
It can be appreciated that during the virtual reality interaction the users move in real time, so the dynamic capture data collected by the dynamic capture data acquisition system is continuous; that is, it comprises dynamic capture data at a plurality of moments. After collection, the dynamic capture data at the plurality of moments generally all need to be synchronized to ensure the integrity of the dynamic capture data.
There is, however, an exceptional case: if the volume of the collected dynamic capture data is large, sharing all of it places a serious load on the network bandwidth, and in a poor network environment causes response delay that defeats real-time virtual reality interaction. It is therefore conceivable not to share all of the data, that is, to share only part of it, for example by selecting part of the data collected at the plurality of moments according to a preset time interval and sharing only the selection. In that case, because the dynamic capture data received by the virtual scene server and the camera servers is incomplete, the interactive picture jumps or stutters during the interactive response. Therefore, when the interaction system of the first or second embodiment of the present invention is working and the volume of dynamic capture data is too large for full sharing, the sharing mode can be optimized; after optimization the network load is reduced while jumping or stuttering of the picture during the virtual reality interaction is avoided. The optimization is described in detail next.
Specifically, when the dynamic capture data acquisition systems share dynamic capture data, for example when the camera servers (1011b, 1021b) share dynamic capture data with each other and when they share it with the virtual scene server 103, not all of the dynamic capture data collected by the dynamic capture cameras is shared; only part of it is. For example, suppose the dynamic capture cameras collect 5 frames of dynamic capture data at times T1, T2, T3, T4 and T5. Part of those 5 frames (such as the frames at times T2 and T5) can be selected according to the preset time interval, and only the selected frames are shared, relieving the network load. The preset time intervals may be equal or unequal.
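A minimal sketch of the frame-selection idea, assuming integer millisecond timestamps (the function name and greedy rule are illustrative assumptions, not the patented method). The rule keeps roughly one frame per preset interval, so exactly which frames survive depends on where selection starts:

```python
def select_frames(frames, preset_interval):
    """Keep only frames whose timestamps are at least preset_interval apart."""
    selected, last_t = [], None
    for t, data in frames:
        if last_t is None or t - last_t >= preset_interval:
            selected.append((t, data))
            last_t = t
    return selected

# Five frames captured 10 ms apart at times T1..T5; share about one in three.
frames = [(0, "T1"), (10, "T2"), (20, "T3"), (30, "T4"), (40, "T5")]
selected = select_frames(frames, 30)
```

With these inputs only the frames at 0 ms and 30 ms are shared; the receivers must reconstruct the skipped moments, which motivates the interpolation described below.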
Sharing only part of the dynamic capture data relieves the network load, but it introduces stuttering or jumping of the interactive picture. To solve this problem, the virtual scene server and the camera servers can apply linear interpolation to the received dynamic capture data in order to simulate the dynamic capture data at the moments that were not uploaded, and render the picture according to the simulated data, thereby avoiding stuttering or jumping of the interactive picture during the interaction.
When receiving the uploaded selection of partial dynamic capture data, the virtual scene server 103 needs to perform linear interpolation processing, specifically as follows:
For example, on receiving the dynamic capture data at time T2, the virtual scene server 103 determines from it that the position of the user in the virtual scene is point B. Meanwhile, the virtual scene server looks up the position of the user in the virtual scene recorded by itself, point A. Linear interpolation is then performed according to point A, point B, the acquired interpolation time interval, and the preset time interval (the time difference between T2 and T5), so that the simulated user arrives at point B from point A exactly when the virtual scene server 103 receives the dynamic capture data at time T5.
Specifically, the virtual scene server 103 may perform the linear interpolation according to point A, point B, the acquired interpolation time interval, and the preset time interval (the time difference between T2 and T5) as follows:
taking point A as the start position and point B as the end position, with the time difference between T2 and T5 as the preset time interval, the interpolation data between point A and point B, namely the simulated position information of the user, is calculated from the acquired interpolation time interval by the following formulas:
x_n = x_{n-1} + (X_vec × T_{n-1,n}) / T_0

y_n = y_{n-1} + (Y_vec × T_{n-1,n}) / T_0

z_n = z_{n-1} + (Z_vec × T_{n-1,n}) / T_0

wherein (x_n, y_n, z_n) are the coordinates of the n-th interpolation position in the three-dimensional coordinate system, n = 1, 2, 3, …; when n = 1, (x_0, y_0, z_0) are the position coordinates of the start point; (X_vec, Y_vec, Z_vec) is the vector from the start position A to the end position B in the three-dimensional coordinate system, and can be obtained from the coordinates of A and B; T_{n-1,n} is the time required to move from the (n-1)-th interpolation position to the n-th interpolation position (the interpolation time interval), which can be set or acquired from the running platform; and T_0 is the preset time interval.
After the position information of the user between the point A and the point B is simulated by adopting the formula, corresponding response can be carried out according to the simulated position information.
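The three formulas above can be exercised with a short sketch (illustrative only; the function and variable names are assumptions, not part of the patent). Each interpolation step advances the simulated position by the fraction T_{n-1,n}/T_0 of the vector from A to B per axis, so the final step lands on point B just as the next shared frame is due:

```python
def interpolate_positions(start, end, interp_dt, preset_interval):
    """Simulate user positions between start point A and end point B.

    Per axis each step adds (vector_AB * interp_dt) / preset_interval,
    matching x_n = x_{n-1} + (X_vec * T_{n-1,n}) / T_0 and its y, z analogues.
    """
    ax, ay, az = start
    vx, vy, vz = end[0] - ax, end[1] - ay, end[2] - az  # vector from A to B
    n_steps = round(preset_interval / interp_dt)
    points, x, y, z = [], ax, ay, az
    for _ in range(n_steps):
        x += vx * interp_dt / preset_interval
        y += vy * interp_dt / preset_interval
        z += vz * interp_dt / preset_interval
        points.append((x, y, z))
    return points

# A = (0, 0, 0), B = (3, 0, 0); preset interval 30 ms, interpolation step 10 ms.
pts = interpolate_positions((0.0, 0.0, 0.0), (3.0, 0.0, 0.0), 10, 30)
```

Here three intermediate positions are produced, the last one coinciding with point B, so the rendered motion stays smooth until the frame at T5 arrives.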
Similarly, the camera servers (1011b, 1021b) can simulate the user position information in the same manner as the virtual scene server and synchronize the simulated position information to the local virtual scene clients (1012, 1022), so that the virtual scene clients (1012, 1022) can render their virtual scenes according to the simulated position information, thereby ensuring the smoothness of the interactive picture.
Figs. 1 and 2 above describe in detail the virtual reality interaction system for remote scenes. The virtual reality interaction method and the computer-readable storage medium for remote scenes, applying the above interaction system, are described in detail below with reference to the accompanying drawings. To avoid redundancy, terms already described above may not be repeated hereinafter.
Fig. 4 is a schematic flow chart of a first embodiment of a virtual reality interaction method applied to a remote scene according to an embodiment of the present invention. The virtual reality interaction method can run on the interaction system shown in fig. 1 and fig. 2; in this embodiment of the invention, the method is described from the virtual scene server side. The virtual reality interaction method comprises the following steps:
In step 401, the virtual scene server receives the dynamic capture data sent by each dynamic capture data acquisition system and the operation command from each virtual scene client.
Because the virtual reality interaction method runs on the interaction system shown in figs. 1 and 2, the virtual scene server can receive dynamic capture data sent by at least two dynamic capture data acquisition systems. The at least two dynamic capture data acquisition systems are located in different dynamic capture data acquisition areas, that is, in different physical areas or different places, and each dynamic capture data acquisition system corresponds to at least one local virtual scene client. The dynamic capture data is the local user's dynamic capture data collected by the dynamic capture data acquisition system. After collecting the local dynamic capture data, each dynamic capture data acquisition system needs to synchronize it to the local virtual scene client and transmit it to the other dynamic capture data acquisition systems and the virtual scene server.
The dynamic capture data may specifically include: a rigid body name, rigid body data, and a rigid body identification number. A terminal device receiving the dynamic capture data can identify the rigid body according to the rigid body name and identification number, determine the user to whom the rigid body belongs, and determine that user's position information from the rigid body data.
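For illustration, a receiving terminal might represent one dynamic capture record as below; the field and function names are hypothetical, since the description only specifies that a record carries a rigid body name, rigid body data, and an identification number:

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class RigidBodyRecord:
    """One dynamic capture sample: name, identification number and pose data."""
    name: str
    rigid_id: int
    position: Tuple[float, float, float]  # rigid body data: x, y, z

def resolve_user(record: RigidBodyRecord,
                 id_to_user: Dict[int, str]) -> Tuple[str, Tuple[float, float, float]]:
    """Identify the user the rigid body belongs to and return that user's position."""
    return id_to_user[record.rigid_id], record.position

# Example: rigid body 7 (say, a helmet marker set) is registered to user_A.
sample = RigidBodyRecord(name="helmet", rigid_id=7, position=(1.0, 1.7, 0.5))
owner, pos = resolve_user(sample, {7: "user_A"})
```

The lookup table mapping identification numbers to users would be agreed upon when the interaction session is set up.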
After receiving the dynamic capture data collected by the at least two dynamic capture data acquisition systems, the virtual scene server can obtain the position information of all users in the virtual scene.
Each virtual scene client receives the dynamic capture data collected by the local dynamic capture data acquisition system, and also receives, via the local dynamic capture data acquisition system, the dynamic capture data from the other dynamic capture data acquisition systems. That is, each virtual scene client can learn the position information of all users in the virtual scene, even though the users are in different physical areas.
The purpose of having the dynamic capture data acquisition system synchronize the collected local dynamic capture data to the local client, the dynamic capture data acquisition systems of the other areas, and the virtual scene server is to share the dynamic capture data, so that the dynamic capture data of all users in the virtual scene is shared between the different dynamic capture data acquisition systems and between those systems and the virtual scene server, achieving a data-sharing effect similar to that of a single local area network. Through this sharing, the position of each user in the virtual scene can be determined during the virtual interaction, the normal logic of remote virtual reality interaction is guaranteed, and the immersion of the virtual reality interaction is preserved. Because the dynamic capture data acquisition system transmits the collected dynamic capture data in parallel to the local virtual scene client, the other dynamic capture data acquisition systems, and the virtual scene server, the synchronization time is shortened, the data-sharing efficiency is improved, the interaction delay is further reduced, and the interaction experience is improved.
When the dynamic capture data acquisition systems communicate, P2P communication between them can be established through the virtual scene server. That is, before the dynamic capture data and the operation commands are received, P2P communication between the dynamic capture data acquisition systems may be established via the virtual scene server. Moreover, the dynamic capture data acquisition system of this embodiment may be an inertial, laser, optical, or other type of dynamic capture data acquisition system.
In step 402, the virtual scene server responds to the operation commands according to the received dynamic capture data, and synchronizes the response results to each virtual scene client.
After the dynamic capture data acquisition system acquires and synchronizes the dynamic capture data, the next stage of interaction process is started.
Each virtual scene client can also receive an operation command input by a local corresponding user (namely, a user corresponding to the virtual scene client), and forward the operation command to the virtual scene server. The operation command is an operation instruction of a user on a person or object in the virtual scene. Specifically, the user may input an operation command to the virtual scene client by means of a handle or an inertial gesture. Each virtual scene client, after receiving the operation command, converts the operation command into a form which can be identified by the virtual scene server and transmits the operation command to the virtual scene server. That is, the virtual scene server may learn the operation commands of all users in different physical areas in the virtual scene.
The main function of the virtual scene server is to keep the interaction logic running normally. To achieve normal interaction of users in different physical areas in the same virtual scene, the virtual scene server needs to obtain the position information of all users in the virtual scene and the operation commands of all users. Since both conditions have been satisfied as described above, the virtual scene server can respond accordingly to the received operation commands of all users and the position information of all users in the virtual scene, and synchronize the response results to each virtual scene client.
After synchronizing the response result to each virtual scene client, for each virtual scene client that receives the response result, it needs to adjust the corresponding virtual scene according to the response result. The specific adjustment mode is as follows: the virtual scene client adjusts the virtual scene according to the response result, the position information of all users in the virtual scene and the visual angle information of the local user (the user corresponding to the client), and displays the adjusted virtual scene to the user. For example, the adjusted virtual scene may be displayed to the user through a helmet worn by the user. Thus, the interaction of the users in different physical areas in the same virtual scene is completed.
It will be appreciated that there are several types of dynamic capture data acquisition system, such as laser, inertial and optical. In the following embodiment, an optical dynamic capture data acquisition system is taken as an example and described in detail.
Fig. 5 is a schematic flow chart of a second embodiment of a virtual reality interaction method applied to a remote scene according to an embodiment of the present invention. The virtual reality interaction method can run on the interaction system shown in fig. 1 and fig. 2. In this embodiment of the invention, the method is described from the virtual scene server side. This embodiment differs from the embodiment shown in fig. 4 in that the dynamic capture data acquisition system is an optical dynamic capture acquisition system comprising a plurality of dynamic capture cameras and a camera server. Accordingly, the manner of establishing P2P communication through the virtual scene server and the manner of receiving the dynamic capture data are specifically described below.
In step 501, the virtual scene server receives a link request sent by each camera server.
Step 502, the virtual scene server extracts the IP information of the camera server from the link request.
Step 503, the virtual scene server synchronizes the extracted IP information of all the camera servers to each camera server in the network, so that each camera server can establish P2P communication with the other camera servers according to the received IP information.
In step 504, the virtual scene server receives the dynamic capture data sent by each dynamic capture data acquisition system and the operation command from each virtual scene client.
In step 505, the virtual scene server responds to the operation commands according to the received dynamic capture data, and synchronizes the response results to each virtual scene client.
As can be seen from the above steps, the function of the dynamic capture cameras is to collect the dynamic capture data of the local user and transmit it to the corresponding camera server. The functions of the camera server are: establishing P2P communication with the camera servers of the other dynamic capture data acquisition systems, and sharing the local dynamic capture data with the local virtual scene client, the virtual scene server, and the camera servers in the other dynamic capture data acquisition systems.
It should be noted that the camera servers may establish P2P communication as follows:
each camera server sends a link request to the virtual scene server, wherein the link request carries IP information of the camera server. The virtual scene server extracts the IP information of the camera servers from the received link request and synchronizes the extracted IP information of all the camera servers to each online camera server in the network.
For each camera server, after receiving the IP information of all camera servers from the virtual scene server, a link request is sent to other camera servers according to the received IP information to establish P2P communication.
After P2P communication is established between the camera servers, the dynamic capture data collected by the dynamic capture cameras can be shared among the camera servers, so that a camera server can obtain the dynamic capture data of the other camera servers and synchronize it to its local virtual scene client when interaction requires. This ensures that each virtual scene client in the different physical areas can obtain the dynamic capture data of all users in the virtual scene interaction.
In the virtual reality interaction method provided by this embodiment of the invention, the dynamic capture data of the users are collected by combining a plurality of dynamic capture cameras with camera servers, and P2P communication between the camera servers is established with the virtual scene server acting as a relay, so that the dynamic capture data are shared and their transmission is not interfered with by the external network. Meanwhile, after a virtual scene client receives a user's operation command, it uploads the command to the virtual scene server. The virtual scene server, acting as the interaction control center, responds to the operation commands according to the received operation commands of all users and the received position information (dynamic capture data) of all users, and transmits the response results to each virtual scene client. Each virtual scene client renders its virtual scene according to the received response results, the position information of each user, and the view angle information of the user corresponding to that client, and displays the scene to the user, thereby realizing virtual reality interaction of multiple users in different places within the same virtual scene.
Fig. 6 is a schematic block diagram of a server according to an embodiment of the present invention. As shown in fig. 6, the server 6 of this embodiment includes: one or more processors 60, a memory 61, and a computer program 62 stored in the memory 61 and executable on the processor 60. When executing the computer program 62, the processor 60 performs the steps of the virtual reality interaction method embodiments described above, for example steps 401 to 402 or steps 501 to 505.
Illustratively, the computer program 62 may be partitioned into one or more modules/units that are stored in the memory 61 and executed by the processor 60 to complete the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing the specified functions, which instruction segments describe the execution of the computer program 62 in the server 6.
The server 6 may include, but is not limited to, the processor 60 and the memory 61. It will be appreciated by those skilled in the art that fig. 6 is merely an example of the server 6 and does not limit it; the server 6 may include more or fewer components than shown, combine certain components, or use different components. For example, the server may further include input devices, output devices, network access devices, buses, and the like.
The processor 60 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 61 may be an internal storage unit of the server 6, such as a hard disk or memory of the server 6. The memory 61 may also be an external storage device of the server 6, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the server 6. Further, the memory 61 may include both an internal storage unit and an external storage device of the server 6. The memory 61 is used to store the computer program and other programs and data required by the server, and may also be used to temporarily store data that has been output or is to be output.
The description of each of the foregoing embodiments has its own emphasis; for parts not described or detailed in a particular embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The technical solutions of the embodiments of the present invention, or the parts thereof that contribute to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium, comprising instructions for causing a computer device or processor to perform all or part of the steps of the methods described in the embodiments of the present invention.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (8)

1. A virtual reality interaction system, the system comprising: at least two interaction subsystems and a virtual scene server; each of the interaction subsystems comprises: the dynamic capture data acquisition system and the at least one virtual scene client; wherein, a virtual scene client corresponds to a user;
the dynamic capture data acquisition system is used for acquiring dynamic capture data of a local target object and transmitting the dynamic capture data to the local virtual scene client, the virtual scene server, and the dynamic capture data acquisition systems in the other interaction subsystems; the dynamic capture data acquisition system can share, according to a preset time interval, the dynamic capture data at a part of a plurality of moments with the virtual scene server and the dynamic capture data acquisition systems in the other interaction subsystems;
The virtual scene client is used for receiving an operation command of a local corresponding user and transmitting the operation command to the virtual scene server; the virtual scene client can acquire the position information of all users in different physical areas;
the virtual scene server is used for determining the position information of the user in the virtual scene according to the received operation command and the received dynamic capture data at a part of time, and taking the position information as terminal information; the position information of the user in the virtual scene, recorded in the virtual scene server, is used as starting point information; performing linear interpolation processing according to a starting point position, an end point position, an acquired interpolation time interval and the preset time interval to simulate the position information of a user between the starting point position and the end point position, performing corresponding response according to the position information, and synchronizing a response result to each virtual scene client;
The virtual scene client is used for adjusting the corresponding virtual scene according to the response result, the position information of all users in the virtual scene and the visual angle information of the local user, and displaying the adjusted virtual scene to the users so as to complete the interaction of the users in different physical areas under the same virtual scene;
the dynamic capture data acquisition system is an optical dynamic capture acquisition system comprising a plurality of dynamic capture cameras and a camera server;
the dynamic capture camera is used for collecting the dynamic capture data of the local target object and transmitting the dynamic capture data to the camera server;
the camera server is specifically configured to establish P2P communication with the camera servers in the other dynamic capture data acquisition systems, synchronize the dynamic capture data to the local virtual scene client, and upload the dynamic capture data to the virtual scene server and to the camera servers in the other dynamic capture data acquisition systems;
when establishing P2P communication with the camera servers in the other interaction subsystems, the camera server is specifically configured to:
send a link request to the virtual scene server, the link request carrying the IP information of the camera server, so that the virtual scene server synchronizes the received IP information of all camera servers to each camera server in the network;
the camera server is further configured to receive the IP information of all camera servers transmitted by the virtual scene server and to establish P2P communication with the other camera servers according to that IP information.
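The IP-exchange handshake recited in claim 1 — a link request carrying the camera server's IP, followed by the virtual scene server synchronizing the full IP table back so the camera servers can link directly — can be sketched in Python. This is an illustrative in-memory model only, not the patented implementation; the class and method names (`VirtualSceneServer`, `CameraServer`, `link`, `receive_ip_table`) are hypothetical stand-ins for the real network transport:

```python
class VirtualSceneServer:
    """Collects camera-server IPs from link requests and pushes the
    complete IP table back to every registered camera server."""

    def __init__(self):
        self.camera_ips = {}      # camera-server name -> IP
        self.camera_servers = []  # registered camera servers

    def link(self, camera_server):
        # The link request carries the requesting camera server's IP.
        self.camera_ips[camera_server.name] = camera_server.ip
        self.camera_servers.append(camera_server)
        # Synchronize the received IP information of all camera
        # servers to each camera server in the network.
        for cs in self.camera_servers:
            cs.receive_ip_table(dict(self.camera_ips))


class CameraServer:
    """Learns its peers' IPs from the virtual scene server, then
    establishes direct P2P links to them."""

    def __init__(self, name, ip):
        self.name, self.ip = name, ip
        self.peers = {}  # peer name -> IP of every *other* camera server

    def receive_ip_table(self, table):
        # Keep every entry except our own; these are the P2P endpoints.
        self.peers = {n: addr for n, addr in table.items() if n != self.name}
```

After two link requests, each camera server holds the other's address and can open a direct connection, so subsequent dynamic capture traffic bypasses the virtual scene server.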
2. The virtual reality interaction system of claim 1, wherein the dynamic capture data acquisition system is further configured to establish P2P communication with dynamic capture data acquisition systems in other interaction subsystems.
3. The virtual reality interaction system of claim 1, wherein the dynamic capture data comprises: rigid body name, rigid body data, and rigid body identification number.
4. The virtual reality interaction system of claim 1, wherein the preset time intervals may be equal or unequal.
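Read together, claims 1 and 4 say the acquisition system forwards only the frames that fall on a schedule of preset intervals, and those intervals need not be equal. A minimal Python sketch of that thinning step, assuming frames arrive as `(timestamp, data)` pairs (the function name and data shape are hypothetical):

```python
def select_frames_to_share(frames, intervals):
    """Pick the subset of captured frames to share with the virtual scene
    server and the other subsystems, advancing through a cycle of preset
    (possibly unequal) intervals.  `frames` is a list of (timestamp, data)
    pairs sorted by timestamp."""
    shared = []
    if not frames:
        return shared
    next_share = frames[0][0]  # share the first frame immediately
    i = 0
    for ts, data in frames:
        if ts >= next_share:
            shared.append((ts, data))
            # Schedule the next share one preset interval later,
            # cycling through the interval list.
            next_share = ts + intervals[i % len(intervals)]
            i += 1
    return shared
```

With intervals `[2, 3]`, a 10-frame sequence timestamped 0–9 is thinned to the frames at t = 0, 2, 5 and 7 — unequal gaps, as claim 4 permits.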
5. The virtual reality interaction system of any of claims 1-4, wherein the camera server is specifically configured to determine the position information of the user in the virtual scene according to the dynamic capture data received at the current moment and take that position information as end-point information; take the current position information of the user recorded in the virtual scene server as start-point information; perform linear interpolation according to the start-point position, the end-point position, the acquired interpolation time interval and the preset time interval, so as to simulate the position information of the user between the start-point position and the end-point position; and synchronize the position information to the local virtual scene client.
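The linear interpolation recited in claims 1 and 5 — simulating intermediate user positions between the recorded start-point position and the newly received end-point position — can be sketched as follows. This assumes 3-D positions represented as tuples; `lerp_positions` is a hypothetical name, and deriving the step count as the preset interval divided by the interpolation interval is one plausible reading of the claim, not the authoritative one:

```python
def lerp_positions(start, end, interp_dt, preset_dt):
    """Linearly interpolate intermediate positions between the recorded
    start-point position and the received end-point position.  One step
    is emitted per interpolation interval within the preset interval."""
    steps = max(1, round(preset_dt / interp_dt))
    out = []
    for k in range(1, steps + 1):
        t = k / steps  # fraction of the way from start to end
        out.append(tuple(s + (e - s) * t for s, e in zip(start, end)))
    return out
```

Each synchronized tick then advances the avatar by one interpolated step, so rendered motion stays smooth even though fresh dynamic capture data arrives only once per preset interval.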
6. A virtual reality interaction method, characterized in that the method is applied to a virtual reality interaction system, the method comprising:
the virtual scene server receives the dynamic capture data sent by each dynamic capture data acquisition system and receives an operation command from each virtual scene client; there are at least two dynamic capture data acquisition systems, each corresponding to at least one local virtual scene client, and the virtual scene client can acquire the position information of all users in different physical areas; the dynamic capture data acquisition system shares, at a preset time interval, the dynamic capture data captured at a subset of the time points with the virtual scene server and the dynamic capture data acquisition systems in the other interaction subsystems;
the virtual scene server determines the position information of a user in the virtual scene according to the received operation command and the dynamic capture data received at the subset of time points, and takes that position information as end-point information; takes the position information of the user in the virtual scene recorded in the virtual scene server as start-point information; performs linear interpolation according to the start-point position, the end-point position, an acquired interpolation time interval and the preset time interval, so as to simulate the position information of the user between the start-point position and the end-point position; responds accordingly to the position information; and synchronizes the response result to each virtual scene client; the virtual scene client can adjust the corresponding virtual scene according to the response result, the position information of all users in the virtual scene and the viewing-angle information of the local user, and display the adjusted virtual scene to the users, so that users in different physical areas interact within the same virtual scene.
The dynamic capture data acquisition system is an optical dynamic capture acquisition system comprising a plurality of dynamic capture cameras and a camera server;
the dynamic capture camera is used for collecting the dynamic capture data of the local target object and transmitting the dynamic capture data to the camera server;
the camera server is specifically configured to establish P2P communication with the camera servers in the other dynamic capture data acquisition systems, synchronize the dynamic capture data to the local virtual scene client, and upload the dynamic capture data to the virtual scene server and to the camera servers in the other dynamic capture data acquisition systems;
when establishing P2P communication with the camera servers in the other interaction subsystems, the camera server is specifically configured to:
send a link request to the virtual scene server, the link request carrying the IP information of the camera server, so that the virtual scene server synchronizes the received IP information of all camera servers to each camera server in the network;
the camera server is further configured to receive the IP information of all camera servers transmitted by the virtual scene server and to establish P2P communication with the other camera servers according to that IP information.
7. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the method according to claim 6.
8. A server comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to claim 6 when executing the computer program.
CN202210083807.0A 2017-08-25 2017-08-25 Virtual reality interaction system, method and computer storage medium Active CN114527872B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210083807.0A CN114527872B (en) 2017-08-25 2017-08-25 Virtual reality interaction system, method and computer storage medium

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN202210083807.0A CN114527872B (en) 2017-08-25 2017-08-25 Virtual reality interaction system, method and computer storage medium
CN201780000973.7A CN109313484B (en) 2017-08-25 2017-08-25 Virtual reality interaction system, method and computer storage medium
PCT/CN2017/099011 WO2019037074A1 (en) 2017-08-25 2017-08-25 Virtual reality interaction system and method, and computer storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201780000973.7A Division CN109313484B (en) 2017-08-25 2017-08-25 Virtual reality interaction system, method and computer storage medium

Publications (2)

Publication Number Publication Date
CN114527872A CN114527872A (en) 2022-05-24
CN114527872B (en) 2024-03-08

Family

ID=65205393

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210083807.0A Active CN114527872B (en) 2017-08-25 2017-08-25 Virtual reality interaction system, method and computer storage medium
CN201780000973.7A Active CN109313484B (en) 2017-08-25 2017-08-25 Virtual reality interaction system, method and computer storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201780000973.7A Active CN109313484B (en) 2017-08-25 2017-08-25 Virtual reality interaction system, method and computer storage medium

Country Status (2)

Country Link
CN (2) CN114527872B (en)
WO (1) WO2019037074A1 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110108159B (en) * 2019-06-03 2024-05-17 武汉灏存科技有限公司 Simulation system and method for large-space multi-person interaction
CN110471772B (en) * 2019-08-19 2022-03-15 上海云绅智能科技有限公司 Distributed system, rendering method thereof and client
CN110609622A (en) * 2019-09-18 2019-12-24 深圳市瑞立视多媒体科技有限公司 Method, system and medium for realizing multi-person interaction by combining 3D and virtual reality technology
CN110610547B (en) * 2019-09-18 2024-02-13 瑞立视多媒体科技(北京)有限公司 Cabin practical training method, system and storage medium based on virtual reality
CN110989837B (en) * 2019-11-29 2023-03-24 上海海事大学 Virtual reality system for passenger liner experience
CN111047710B (en) * 2019-12-03 2023-12-26 深圳市未来感知科技有限公司 Virtual reality system, interactive device display method, and computer-readable storage medium
CN111338481B (en) * 2020-02-28 2023-06-23 武汉灏存科技有限公司 Data interaction system and method based on whole body dynamic capture
CN111381792B (en) * 2020-03-12 2023-06-02 上海曼恒数字技术股份有限公司 Virtual reality data transmission method and system supporting multi-user cooperation
CN112423020B (en) * 2020-05-07 2022-12-27 上海哔哩哔哩科技有限公司 Motion capture data distribution and acquisition method and system
CN111796670A (en) * 2020-05-19 2020-10-20 北京北建大科技有限公司 Large-space multi-person virtual reality interaction system and method
CN111988375B (en) * 2020-08-04 2023-10-27 瑞立视多媒体科技(北京)有限公司 Terminal positioning method, device, equipment and storage medium
CN112130660B (en) 2020-08-14 2024-03-15 青岛小鸟看看科技有限公司 Interaction method and system based on virtual reality all-in-one machine
CN112150246A (en) * 2020-09-25 2020-12-29 刘伟 3D data acquisition system and application thereof
CN112256125B (en) * 2020-10-19 2022-09-13 中国电子科技集团公司第二十八研究所 Laser-based large-space positioning and optical-inertial-motion complementary motion capture system and method
CN114051148A (en) * 2021-11-10 2022-02-15 拓胜(北京)科技发展有限公司 Virtual anchor generation method and device and electronic equipment
CN115114537B (en) * 2022-08-29 2022-11-22 成都航空职业技术学院 Interactive virtual teaching aid implementation method based on file content identification

Citations (10)

Publication number Priority date Publication date Assignee Title
KR20090043192A (en) * 2007-10-29 2009-05-06 (주)인텔리안시스템즈 Remote controlling system and method of operating the system
KR20130095904A (en) * 2012-02-21 2013-08-29 (주)드리밍텍 Virtual environment management system and server thereof
CN105450736A (en) * 2015-11-12 2016-03-30 小米科技有限责任公司 Method and device for establishing connection with virtual reality
CN105892686A (en) * 2016-05-05 2016-08-24 刘昊 3D virtual-real broadcast interaction method and 3D virtual-real broadcast interaction system
CN105915849A (en) * 2016-05-09 2016-08-31 惠州Tcl移动通信有限公司 Virtual reality sports event play method and system
CN106383578A (en) * 2016-09-13 2017-02-08 网易(杭州)网络有限公司 Virtual reality system, and virtual reality interaction apparatus and method
CN106598229A (en) * 2016-11-11 2017-04-26 歌尔科技有限公司 Virtual reality scene generation method and equipment, and virtual reality system
CN106843532A (en) * 2017-02-08 2017-06-13 北京小鸟看看科技有限公司 The implementation method and device of a kind of virtual reality scenario
CN106843460A (en) * 2016-12-13 2017-06-13 西北大学 The capture of multiple target position alignment system and method based on multi-cam
CN107024995A (en) * 2017-06-05 2017-08-08 河北玛雅影视有限公司 Many people's virtual reality interactive systems and its control method

Family Cites Families (12)

Publication number Priority date Publication date Assignee Title
US8730156B2 (en) * 2010-03-05 2014-05-20 Sony Computer Entertainment America Llc Maintaining multiple views on a shared stable virtual space
US20110035684A1 (en) * 2007-04-17 2011-02-10 Bell Helicopter Textron Inc. Collaborative Virtual Reality System Using Multiple Motion Capture Systems and Multiple Interactive Clients
CN103929479B (en) * 2014-04-10 2017-12-12 惠州Tcl移动通信有限公司 Mobile terminal simulation of real scenes realizes the method and system of user interaction
US10007334B2 (en) * 2014-11-13 2018-06-26 Utherverse Digital Inc. System, method and apparatus of simulating physics in a virtual environment
CN104469442A (en) * 2014-11-21 2015-03-25 天津思博科科技发展有限公司 Device for achieving collective singing through intelligent terminal
US9769536B2 (en) * 2014-12-26 2017-09-19 System73, Inc. Method and system for adaptive virtual broadcasting of digital content
CN104866101B (en) * 2015-05-27 2018-04-27 世优(北京)科技有限公司 The real-time interactive control method and device of virtual objects
CN105323129B (en) * 2015-12-04 2019-02-12 上海弥山多媒体科技有限公司 A kind of family's virtual reality entertainment systems
CN106125903B (en) * 2016-04-24 2021-11-16 林云帆 Multi-person interaction system and method
CN106534125B (en) * 2016-11-11 2021-05-04 厦门汇鑫元软件有限公司 Method for realizing VR multi-person interactive system based on local area network
CN106774949A (en) * 2017-03-09 2017-05-31 北京神州四达科技有限公司 Collaborative simulation exchange method, device and system
CN106843507B (en) * 2017-03-24 2024-01-05 苏州创捷传媒展览股份有限公司 Virtual reality multi-person interaction method and system

Patent Citations (10)

Publication number Priority date Publication date Assignee Title
KR20090043192A (en) * 2007-10-29 2009-05-06 (주)인텔리안시스템즈 Remote controlling system and method of operating the system
KR20130095904A (en) * 2012-02-21 2013-08-29 (주)드리밍텍 Virtual environment management system and server thereof
CN105450736A (en) * 2015-11-12 2016-03-30 小米科技有限责任公司 Method and device for establishing connection with virtual reality
CN105892686A (en) * 2016-05-05 2016-08-24 刘昊 3D virtual-real broadcast interaction method and 3D virtual-real broadcast interaction system
CN105915849A (en) * 2016-05-09 2016-08-31 惠州Tcl移动通信有限公司 Virtual reality sports event play method and system
CN106383578A (en) * 2016-09-13 2017-02-08 网易(杭州)网络有限公司 Virtual reality system, and virtual reality interaction apparatus and method
CN106598229A (en) * 2016-11-11 2017-04-26 歌尔科技有限公司 Virtual reality scene generation method and equipment, and virtual reality system
CN106843460A (en) * 2016-12-13 2017-06-13 西北大学 The capture of multiple target position alignment system and method based on multi-cam
CN106843532A (en) * 2017-02-08 2017-06-13 北京小鸟看看科技有限公司 The implementation method and device of a kind of virtual reality scenario
CN107024995A (en) * 2017-06-05 2017-08-08 河北玛雅影视有限公司 Many people's virtual reality interactive systems and its control method

Also Published As

Publication number Publication date
CN109313484A (en) 2019-02-05
CN109313484B (en) 2022-02-01
CN114527872A (en) 2022-05-24
WO2019037074A1 (en) 2019-02-28

Similar Documents

Publication Publication Date Title
CN114527872B (en) Virtual reality interaction system, method and computer storage medium
JP6918455B2 (en) Image processing equipment, image processing methods and programs
KR102105189B1 (en) Apparatus and Method for Selecting Multi-Camera Dynamically to Track Interested Object
US11380078B2 (en) 3-D reconstruction using augmented reality frameworks
US10977869B2 (en) Interactive method and augmented reality system
WO2019111817A1 (en) Generating device, generating method, and program
CN105429989A (en) Simulative tourism method and system for virtual reality equipment
KR20120086795A (en) Augmented reality system and method that share augmented reality service to remote
JP2013061937A (en) Combined stereo camera and stereo display interaction
CN111627116A (en) Image rendering control method and device and server
KR101329935B1 (en) Augmented reality system and method that share augmented reality service to remote using different marker
WO2019085829A1 (en) Method and apparatus for processing control system, and storage medium and electronic apparatus
US20220067974A1 (en) Cloud-Based Camera Calibration
CN104113748A (en) 3D shooting system and implementation method
Lan et al. Development of a virtual reality teleconference system using distributed depth sensors
CN107479701B (en) Virtual reality interaction method, device and system
GB2612418A (en) Rendering image content
CN109360277B (en) Virtual simulation display control method and device, storage medium and electronic device
JP2019103126A (en) Camera system, camera control device, camera control method, and program
CN111562841B (en) Off-site online method, device, equipment and storage medium of virtual reality system
KR101649754B1 (en) Control signal transmitting method in distributed system for multiview cameras and distributed system for multiview cameras
JP6149967B1 (en) Video distribution server, video output device, video distribution system, and video distribution method
CN113515187B (en) Virtual reality scene generation method and network side equipment
KR102308347B1 (en) Synchronization device for camera and synchronization method for camera
JP2022114626A (en) Information processing device, information processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant