WO2019037074A1 - Virtual reality interaction system and method, and computer storage medium - Google Patents

Virtual reality interaction system and method, and computer storage medium

Info

Publication number
WO2019037074A1
Authority
WO
WIPO (PCT)
Prior art keywords
motion capture
capture data
virtual scene
server
camera
Prior art date
Application number
PCT/CN2017/099011
Other languages
English (en)
French (fr)
Inventor
崔永太
谢冰
肖乐天
陈明洋
许秋子
Original Assignee
深圳市瑞立视多媒体科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市瑞立视多媒体科技有限公司
Priority to PCT/CN2017/099011
Priority to CN201780000973.7A (CN109313484B)
Priority to CN202210083807.0A (CN114527872B)
Publication of WO2019037074A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/20 Input arrangements for video game devices
    • A63F 13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F 13/213 Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality

Definitions

  • The present invention belongs to the field of virtual reality interaction technologies, and in particular relates to a virtual reality interaction system, method and computer storage medium for remote scenarios.
  • At present, the process of virtual reality interaction is generally: acquire the user's motion capture data (three-dimensional spatial position), and then transmit the motion capture data to the server of the virtual scene.
  • The server determines the user's location in the virtual scene according to the motion capture data, performs the corresponding interaction response, and synchronously displays the response result to the user.
  • During virtual reality interaction, the motion capture data can be collected in various ways, such as inertial motion capture, laser motion capture or optical motion capture.
  • In virtual reality interaction based on optical motion capture, multiple motion capture cameras in the optical motion capture system can identify the optical marker points attached to the observed object, and the cameras' image acquisition system computes the coordinate position information of the marker points (i.e., the motion capture data), which is then transmitted over a network (wired, wireless, USB, etc.) to the camera server.
  • The camera server receives the coordinate position information from the motion capture cameras, identifies the observed object according to that information, obtains the user's position in the physical scene, and then sends this position information to the server and the client of the virtual scene.
  • The server of the virtual scene maps the position information into the virtual scene, thereby determining the user's location in the virtual scene, and displays it to the user through the client of the virtual scene.
  • As shown in FIG. 3, in the above interaction flow the motion capture data travels as follows: the virtual scene server 31 and the virtual scene client 32 each obtain the corresponding motion capture data from the optical motion capture system 33. Because the communication and synchronization between the virtual scene server 31, the client 32 and the optical motion capture system 33 are all developed on a local area network, the current communication scheme can only realize virtual reality interaction within the same physical space.
  • As virtual reality interaction technology finds wider application, there is a demand for users located in different places to interact within the same virtual scene, but so far there has been no good solution. In view of this, the present invention provides a synchronized virtual reality interaction system that enables different users in remote scenarios to interact in the same virtual scene.
  • A first aspect of the embodiments of the present invention provides a virtual reality interaction system, the system including: at least two interaction subsystems and a virtual scene server; the virtual scene server runs on a wide area network; each interaction subsystem includes a motion capture data acquisition system and at least one virtual scene client;
  • the motion capture data acquisition system is configured to collect local motion capture data and send the motion capture data to the local virtual scene client, the virtual scene server, and the motion capture data acquisition systems in the other interaction subsystems;
  • the virtual scene client is configured to receive operation commands of the local corresponding user and pass the operation commands to the virtual scene server; and to receive the motion capture data sent by the local motion capture data acquisition system, as well as the motion capture data from other motion capture data acquisition systems relayed by the local motion capture data acquisition system;
  • the virtual scene server is configured to respond according to the received operation commands sent by all virtual scene clients and the motion capture data sent by all motion capture data acquisition systems, and to synchronize the response result to each virtual scene client;
  • the virtual scene client is configured to adjust the corresponding virtual scene according to the response result, the motion capture data collected by the local motion capture data acquisition system, and the motion capture data from other motion capture data acquisition systems relayed by the local motion capture data acquisition system, and to display the adjusted virtual scene to the user.
  • A second aspect of the embodiments of the present invention provides a virtual reality interaction method, the method including:
  • the virtual scene server receives the motion capture data sent by each motion capture data acquisition system, and the operation commands from each virtual scene client; there are at least two motion capture data acquisition systems, and each motion capture data acquisition system corresponds to at least one local virtual scene client;
  • the virtual scene server responds to the operation commands according to the received motion capture data, and synchronizes the response result to each virtual scene client, so that each virtual scene client can adjust the corresponding virtual scene according to the response result, the motion capture data collected by the local motion capture data acquisition system, and the motion capture data from other motion capture data acquisition systems relayed by the local system, and display the adjusted virtual scene to the user.
  • A third aspect of the embodiments of the present invention provides a server including a memory, a processor, and a computer program stored in the memory and operable on the processor, wherein the processor, when executing the computer program, implements the steps of any of the virtual reality interaction methods described above.
  • A fourth aspect of the embodiments of the present invention provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the virtual reality interaction methods described above.
  • In the technical solution provided by the present invention, after receiving a user's operation command, the virtual scene client uploads the operation command to the virtual scene server.
  • The virtual scene server, acting as the control center, responds to the users' operation commands according to the received operation commands of all users and the location information of all users (the motion capture data), and delivers the response result to each virtual scene client.
  • Each virtual scene client renders the corresponding virtual scene according to the received response result, the position information of each user, and the viewing-angle information of the user corresponding to that client, and displays it to the user, thereby achieving virtual reality interaction of multiple users in remote scenarios within the same virtual scene.
  • FIG. 1 is a schematic structural diagram of a first embodiment of the virtual reality interaction system provided by the present invention;
  • FIG. 2 is a schematic structural diagram of a second embodiment of the virtual reality interaction system provided by the present invention;
  • FIG. 3 is a schematic diagram of the flow of motion capture data in virtual reality interaction in the prior art;
  • FIG. 4 is a schematic flowchart of a first embodiment of the virtual reality interaction method provided by the present invention;
  • FIG. 5 is a schematic flowchart of a second embodiment of the virtual reality interaction method provided by the present invention;
  • FIG. 6 is a schematic block diagram of an embodiment of the server provided by the present invention.
  • Depending on the context, the term “if” may be interpreted as “when” or “once” or “in response to determining” or “in response to detecting”.
  • Similarly, the phrases “if it is determined” or “if [a described condition or event] is detected” may be interpreted, depending on the context, to mean “once determined” or “in response to determining” or “once [the described condition or event] is detected” or “in response to detecting [the described condition or event]”.
  • The virtual reality interaction solution of the embodiments of the present invention is applicable to virtual reality interaction in remote scenarios, that is, users located in different physical areas can interact in the same virtual scene.
  • Herein, users in different physical areas are also called: remote users, users in remote scenarios, or users under different motion capture systems.
  • The virtual reality interaction system provided by the present invention is used to realize interaction between users located in different physical areas in the same virtual scene, and includes: at least two interaction subsystems and a virtual scene server.
  • Each interaction subsystem includes: a motion capture data acquisition system and at least one virtual scene client.
  • The detailed description below takes as an example an interaction system 100 that includes two interaction subsystems, each containing one virtual scene client.
  • Each virtual scene client corresponds to one user, and a virtual scene client can receive operation commands input by its corresponding user.
  • FIG. 1 is a structural block diagram of the first embodiment of the virtual reality interaction system provided by the present invention.
  • The interaction system 100 is used to realize interaction between users located in different physical areas in the same virtual scene, and includes: a motion capture data acquisition system 1011 located in a first area 11 and a virtual scene client 1012 located in the same area as the acquisition system 1011; a motion capture data acquisition system 1021 located in a second area 12 and a virtual scene client 1022 located in the same area as the acquisition system 1021; and a virtual scene server 103 running on a wide area network.
  • The first area 11 and the second area 12 are different physical areas.
  • The motion capture data acquisition system 1011 collects the first motion capture data of the local target object (the target object includes the user or another item such as a game gun; hereinafter the description takes the user as the target object) and synchronizes the first motion capture data to the virtual scene client 1012; at the same time, the acquisition system 1011 also transmits the collected first motion capture data to the motion capture data acquisition system 1021 and the virtual scene server 103.
  • Similarly, the motion capture data acquisition system 1021 collects the second motion capture data of its local target object and synchronizes the second motion capture data to the client 1022; at the same time, the acquisition system 1021 also transmits the collected second motion capture data to the motion capture data acquisition system 1011 and the virtual scene server 103.
  • The motion capture data may specifically include: a rigid body name, rigid body data, and a rigid body identification number.
  • A terminal device that receives the motion capture data can identify the rigid body according to the rigid body name and the rigid body identification number, determine the user to which the rigid body belongs, and also determine that user's position information from the rigid body data.
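  • As an illustration only, a motion capture sample carrying these three fields might be modeled as in the sketch below; the Python field names and the (x, y, z) layout are assumptions made for the sketch, not something the patent specifies.

    from dataclasses import dataclass

    @dataclass
    class RigidBodySample:
        """One motion capture sample for a single rigid body (hypothetical layout)."""
        rigid_body_name: str    # e.g. "player1_head"; labels the tracked rigid body
        rigid_body_id: int      # rigid body identification number
        position: tuple         # (x, y, z) rigid body data in the physical capture area
        timestamp: float        # capture time, used later for selection/interpolation

    def user_of(sample: RigidBodySample, id_to_user: dict) -> str:
        # A receiving terminal resolves the owning user from the identification number.
        return id_to_user[sample.rigid_body_id]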
  • After receiving the first motion capture data collected by the motion capture data acquisition system 1011 and the second motion capture data collected by the motion capture data acquisition system 1021, the virtual scene server 103 can obtain the location information of all users in the virtual scene.
  • The virtual scene client 1012 receives the first motion capture data sent by the local motion capture data acquisition system 1011, and can also receive, relayed by the local acquisition system 1011, the second motion capture data coming from the motion capture data acquisition system 1021. That is to say, the virtual scene client 1012 can learn the location information of all users in the different physical areas.
  • Likewise, the virtual scene client 1022 receives the second motion capture data from the local motion capture data acquisition system 1021, and can also receive, relayed by the local acquisition system 1021, the first motion capture data coming from the acquisition system 1011. In other words, even though the users are in different physical areas, every client in the virtual scene can learn the location information of all users in that virtual scene.
  • It can be seen that the purpose of having each motion capture data acquisition system synchronize its collected motion capture data to the local client, the acquisition systems of other areas, and the virtual scene server is to share the motion capture data, so that the different motion capture data acquisition systems, and the acquisition systems and the virtual scene server, can share the motion capture data of all users in the virtual scene, achieving a data sharing effect similar to being on the same local area network.
  • Through this sharing, the location of every user in the virtual scene can be determined, the normal logic of remote virtual reality interaction is guaranteed, and the immersion of the interaction is preserved.
  • Because each motion capture data acquisition system synchronizes its collected data in parallel to the local virtual scene client, the other acquisition systems, and the virtual scene server at the same time, synchronization time is reduced and data sharing efficiency improves, which in turn reduces interaction latency and enhances the interactive experience.
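  • As a minimal sketch of the parallel synchronization described above, the sender below fans one encoded sample out to every destination at the same time; the use of TCP, the timeout and the endpoint list are illustrative assumptions, not details from the patent.

    import socket
    import threading

    def broadcast_sample(payload: bytes, endpoints: list) -> None:
        """Send one encoded motion capture sample to all peers in parallel.

        endpoints: (host, port) pairs for the local virtual scene client, the
        remote acquisition systems and the virtual scene server (all assumed)."""
        def send(addr):
            with socket.create_connection(addr, timeout=1.0) as conn:
                conn.sendall(payload)
        threads = [threading.Thread(target=send, args=(addr,)) for addr in endpoints]
        for t in threads:
            t.start()
        for t in threads:
            t.join()   # the sends overlap, so total sync time tracks the slowest link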
  • When the motion capture data acquisition system 1011 and the motion capture data acquisition system 1021 communicate with each other, a P2P communication mode may be chosen.
  • To establish P2P communication between them, the acquisition systems 1011 and 1021 each actively send a link request to the virtual scene server 103, carrying their own IP information in the link request.
  • After receiving the link requests, the virtual scene server 103 extracts the IP information from them and synchronizes all of the extracted IP information to the motion capture data acquisition systems currently online (1011, 1021).
  • Only after receiving all of the IP information can each motion capture data acquisition system establish a P2P communication connection with the other acquisition systems.
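  • The exchange just described is essentially a rendezvous protocol in which the wide-area virtual scene server only relays addresses. The sketch below shows one plausible shape of that handshake; the JSON message format, host name and port numbers are invented for illustration.

    import json
    import socket

    SERVER = ("vr-scene-server.example", 9000)   # hypothetical WAN address of the virtual scene server

    def register_and_get_peers(my_ip: str) -> list:
        """Send a link request carrying our own IP; receive the IPs of all online systems."""
        with socket.create_connection(SERVER) as conn:
            conn.sendall(json.dumps({"type": "link_request", "ip": my_ip}).encode())
            peers = json.loads(conn.recv(4096).decode())["ips"]
        return [ip for ip in peers if ip != my_ip]

    def connect_peers(peer_ips: list, port: int = 9001) -> list:
        # Once every address is known, each acquisition system dials the others directly (P2P).
        return [socket.create_connection((ip, port)) for ip in peer_ips]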
  • It can be understood that the motion capture data acquisition system of the embodiments of the present invention may be an inertial motion capture data acquisition system, a laser motion capture data acquisition system, an optical motion capture data acquisition system, or another type of motion capture data acquisition system.
  • After the motion capture data acquisition systems have collected and synchronized the motion capture data, the interaction enters its next phase.
  • The virtual scene client 1012 can also receive operation commands input by its local corresponding user (that is, the user corresponding to the virtual scene client 1012) and forward those commands to the virtual scene server 103.
  • Likewise, the virtual scene client 1022 can also receive operation commands input by its local corresponding user (the user corresponding to the virtual scene client 1022) and forward them to the virtual scene server 103.
  • An operation command is the user's instruction to operate on a person or object in the virtual scene.
  • Specifically, the user can input operation commands to the virtual scene client by means of a handle (controller) or inertial gestures.
  • After receiving an operation command, each virtual scene client converts the command into a form recognizable by the virtual scene server 103 and transmits it to the virtual scene server 103. That is to say, the virtual scene server 103 can learn the operation commands of all users in the different physical areas of the virtual scene.
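  • The patent does not fix a wire format for this conversion; purely as an assumed example, a client might serialize an operation command for the server like this.

    import json
    import time

    def encode_command(user_id: int, action: str, target: str) -> bytes:
        """Convert a raw input event into a server-recognizable message (format assumed)."""
        return json.dumps({
            "type": "operation_command",
            "user": user_id,    # which user issued the command
            "action": action,   # e.g. "grab" or "shoot"
            "target": target,   # the person or object in the virtual scene being operated on
            "t": time.time(),   # client-side timestamp
        }).encode()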
  • The main function of the virtual scene server 103 is to keep the interaction logic running normally. To achieve normal interaction of users in different physical areas within the same virtual scene, the virtual scene server 103 needs the location information of all users in the virtual scene and the operation commands of all users. Since both conditions have been satisfied in the foregoing description, the virtual scene server 103 can respond according to the received operation commands of all users and the location information of all users in the virtual scene, and synchronize the response result to every virtual scene client, e.g., to the virtual scene client 1012 and the virtual scene client 1022.
  • After the response result is synchronized to the virtual scene clients 1012 and 1022, each virtual scene client that receives the response result must adjust its virtual scene accordingly.
  • The specific adjustment is: the virtual scene client adjusts the virtual scene according to the response result, the location information of all users in the virtual scene (i.e., the first and second motion capture data), and the viewing-angle information of the local user (the user corresponding to that client), and then displays the adjusted virtual scene to the user. For example, the adjusted scene may be shown through the helmet worn by the user. This completes the interaction of users located in different physical areas within the same virtual scene.
  • It can be understood that there are many types of motion capture data acquisition systems, for example laser, inertial or optical. In the following embodiment, an optical motion capture data acquisition system is taken as the example.
  • FIG. 2 is a structural block diagram of the second embodiment of the virtual reality interaction system provided by the present invention.
  • The difference between this embodiment and the embodiment shown in FIG. 1 is that this embodiment gives a concrete structure for the motion capture data acquisition system.
  • The motion capture data acquisition system of this embodiment is specifically an optical motion capture acquisition system, and each optical motion capture acquisition system includes a camera server and a plurality of motion capture cameras, as described in detail below.
  • As shown in FIG. 2, the motion capture data acquisition system 1011 specifically includes a plurality of motion capture cameras 1011a and a camera server 1011b.
  • Likewise, the motion capture data acquisition system 1021 specifically includes a plurality of motion capture cameras 1021a and a camera server 1021b.
  • The function of the motion capture cameras is to collect the local users' motion capture data and transmit it to the corresponding camera server.
  • Specifically, the motion capture cameras 1011a are configured to collect the first motion capture data of the local users and transmit it to the camera server 1011b.
  • The motion capture cameras 1021a are configured to collect the second motion capture data of the local users and transmit it to the camera server 1021b.
  • The roles of the camera server are: establishing P2P communication with the camera servers of the other motion capture data acquisition systems, and sharing the local motion capture data with the local virtual scene client, the virtual scene server, and the camera servers of the other acquisition systems.
  • Specifically, the camera server 1011b is configured to establish P2P communication with the camera server 1021b, and also to transmit the first motion capture data collected by the cameras 1011a to the local virtual scene client 1012, the virtual scene server 103, and the camera server 1021b.
  • Likewise, the camera server 1021b is configured to establish P2P communication with the camera server 1011b, and also to transmit the second motion capture data collected by the cameras 1021a to the local virtual scene client 1022, the virtual scene server 103, and the camera server 1011b.
  • The P2P communication between the camera server 1021b and the camera server 1011b may be established as follows: the camera server 1021b sends a link request to the virtual scene server 103 carrying the IP information of the camera server 1021b; the camera server 1011b likewise sends a link request to the virtual scene server 103 carrying the IP information of the camera server 1011b.
  • The virtual scene server 103 synchronizes the received IP information of the camera server 1021b and of the camera server 1011b to the online camera servers in the network (i.e., the camera server 1021b and the camera server 1011b).
  • After receiving the IP information of both camera servers, the camera server 1021b initiates a connection request to the camera server 1011b based on the IP information of the camera server 1011b, so as to establish P2P communication.
  • Similarly, after receiving the IP information of both camera servers, the camera server 1011b initiates a connection request to the camera server 1021b based on the IP information of the camera server 1021b, so as to establish P2P communication.
  • Once P2P communication is established between the camera servers, the motion capture data collected by the cameras (1011a, 1021a) can be shared between the camera servers, so that the camera server 1021b can learn the motion capture data of the camera server 1011b and synchronize it to the local virtual scene client 1022 when the interaction requires it; likewise, the camera server 1011b can learn the motion capture data of the camera server 1021b and synchronize it to the local virtual scene client 1012 when needed. In this way, every virtual scene client in the different physical areas can obtain the motion capture data of all users in the virtual scene interaction.
  • The virtual reality interaction system of this embodiment uses a combination of multiple motion capture cameras and a camera server to collect the users' motion capture data, and, with the virtual scene server acting as a relay, establishes P2P communication between the camera servers to realize the sharing of motion capture data, which ensures that the transmission of the motion capture data is not disturbed by the external network.
  • Meanwhile, after a virtual scene client receives a user's operation command, it uploads the operation command to the virtual scene server.
  • The virtual scene server acts as the interaction control center: according to the received operation commands of all users and the location information of all users (the motion capture data), it responds to the users' operation commands and delivers the response result to each virtual scene client.
  • Each virtual scene client renders the corresponding virtual scene according to the received response result, the location information of each user, and the viewing-angle information of the user corresponding to that client, and displays it to the user, thereby achieving virtual reality interaction of multiple users in remote scenarios within the same virtual scene.
  • It can be understood that during virtual reality interaction the user moves in real time, so the motion capture data collected by the acquisition system is also continuous, i.e., the motion capture data comprises samples at multiple moments. After collecting them, the acquisition system usually needs to synchronize the samples of all of these moments to preserve the integrity of the motion capture data.
  • As an exception, when the amount of collected motion capture data is large, sharing all of it places a heavy load on the network bandwidth, and in a poor network environment this causes response delays that defeat the real-time interaction of virtual reality. In the interaction system of the first or second embodiment of the present invention, when the data volume is too large to share in full, the sharing of the motion capture data can be optimized; after optimization, the network load is reduced while teleportation or stuttering of the picture during the interaction is still avoided. The optimization is described below.
  • When the motion capture data acquisition systems share motion capture data, for example when the camera servers (1011b, 1021b) share data with each other and when they share data with the virtual scene server 103, not all of the motion capture data collected by the cameras is shared; only a part is.
  • For example, suppose the motion capture cameras collect five samples at times T1, T2, T3, T4 and T5. A subset of the five samples can be selected at a preset time interval (say, the samples at T2 and T5), and only the selected samples are shared, which eases the network load.
  • The preset time intervals may or may not be equal.
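  • A minimal sketch of this selection step, assuming each sample carries a timestamp (the function and field names are invented):

    def select_for_sharing(samples: list, interval: float) -> list:
        """Keep only samples spaced at least `interval` seconds apart.

        samples: (timestamp, data) pairs sorted by time. With five samples at
        T1..T5 and a suitable interval, only a subset such as the T2 and T5
        samples survives and is shared."""
        shared, last_t = [], None
        for t, data in samples:
            if last_t is None or t - last_t >= interval:
                shared.append((t, data))
                last_t = t
        return shared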
  • Sharing only part of the motion capture data eases the network burden, but it introduces stuttering or teleportation in the interactive picture.
  • To solve this, the virtual scene server and the camera servers can apply linear interpolation to the received motion capture data, simulating the samples for the moments that were not uploaded and rendering the picture from the simulated data, thereby avoiding stuttering or teleportation of the interactive picture during the interaction.
  • When the virtual scene server 103 receives the uploaded, selected part of the motion capture data, it needs to perform linear interpolation, specifically as follows:
  • For example, on receiving the sample for time T2, the virtual scene server 103 determines from it that the user's position in the virtual scene is point B. At the same time, the server looks up the user's currently recorded position in the virtual scene, point A. Linear interpolation is then performed over points A and B, the obtained interpolation time interval, and the preset time interval (the time difference between T2 and T5), so that by the time the user has moved from point A to point B the virtual scene server 103 is just receiving the sample for time T5.
  • Specifically, the virtual scene server 103 may perform this linear interpolation as follows: taking point A as the start position, point B as the end position, the time difference between T2 and T5 as the preset time interval, and using the obtained interpolation time interval, the interpolated data between A and B, i.e., the simulated position of the user, is computed with the formulas:
  • x_n = x_{n-1} + (X × T_{n-1,n}) / T_0
  • y_n = y_{n-1} + (Y × T_{n-1,n}) / T_0
  • z_n = z_{n-1} + (Z × T_{n-1,n}) / T_0
  • Here (x_n, y_n, z_n) are the coordinates of the n-th interpolated position in the three-dimensional coordinate system, n = 1, 2, 3, ...; for n = 1, (x_0, y_0, z_0) is the start position; (X, Y, Z) is the vector from the start position A to the end position B, obtainable from the coordinates of A and B; T_{n-1,n} is the time needed to go from the (n-1)-th interpolated position to the n-th (the interpolation time interval), which can be set or obtained from the runtime platform; and T_0 is the preset time interval.
  • After the user's positions between point A and point B have been simulated with the above formulas, the corresponding response can be made according to the simulated position information.
  • Similarly, the camera servers (1011b, 1021b) can simulate the user position information in the same way as the virtual scene server, and synchronize the simulated position information to their local virtual scene clients (1012, 1022), so that the clients can render the corresponding virtual scene from the simulated positions, ensuring the smoothness of the interactive picture.
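  • For illustration, the formulas above transcribe directly into code; the function below assumes a constant interpolation time interval, and all names are invented rather than taken from the patent.

    def interpolate_positions(a, b, t0: float, dt: float) -> list:
        """Simulate the user positions between start point A and end point B.

        a, b: (x, y, z) coordinates of points A and B; t0: preset time interval
        (the time difference between T2 and T5); dt: interpolation time interval
        T_{n-1,n}, assumed constant here."""
        X, Y, Z = b[0] - a[0], b[1] - a[1], b[2] - a[2]   # vector from A to B
        x, y, z = a
        points = []
        for _ in range(int(t0 / dt)):
            # x_n = x_{n-1} + (X × T_{n-1,n}) / T_0, and likewise for y and z
            x += (X * dt) / t0
            y += (Y * dt) / t0
            z += (Z * dt) / t0
            points.append((x, y, z))
        return points

  • For example, with a = (0, 0, 0), b = (1, 0, 0), t0 = 0.1 and dt = 0.025, this yields four evenly spaced simulated positions ending exactly at point B.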
  • FIGS. 1 and 2 above describe the virtual reality interaction system for remote scenarios in detail.
  • Below, with reference to the accompanying drawings, the virtual reality interaction method for remote scenarios that uses the above interaction system, and the computer readable storage medium, are described in detail. To avoid repetition, terms already described above may not be explained again below.
  • FIG. 4 is a schematic flowchart of the first embodiment of the virtual reality interaction method applied to remote scenarios according to an embodiment of the present invention.
  • The virtual reality interaction method can run on the interaction system shown in FIG. 1 and FIG. 2.
  • In this embodiment, the virtual reality interaction method is described from the virtual scene server side.
  • The virtual reality interaction method includes the following steps:
  • Step 401: The virtual scene server receives the motion capture data sent by each motion capture data acquisition system, and the operation commands from each virtual scene client.
  • Since this virtual reality interaction method runs on the interaction system shown in FIGS. 1 and 2, the virtual scene server can receive motion capture data sent by at least two motion capture data acquisition systems.
  • The at least two acquisition systems are located in different motion capture areas, that is, in different physical areas or different places, and each acquisition system corresponds to at least one local virtual scene client.
  • The motion capture data is the data of the local users collected by each acquisition system. Each acquisition system, after collecting its local motion capture data, needs to synchronize that data to the local virtual scene client, and also transmits the collected data to the other motion capture data acquisition systems and to the virtual scene server.
  • The motion capture data may specifically include: a rigid body name, rigid body data, and a rigid body identification number.
  • A terminal device that receives the motion capture data can identify the rigid body according to the rigid body name and the rigid body identification number, determine the user to which the rigid body belongs, and also determine that user's position information from the rigid body data.
  • After receiving the motion capture data collected by the acquisition systems, the virtual scene server can thus obtain the location information of all users in the virtual scene.
  • Each virtual scene client receives the motion capture data from its local acquisition system, and also receives, relayed by the local system, the motion capture data from the other acquisition systems. That is to say, the virtual scene client can learn the location information of all users in the different physical areas: even though the users are in different physical areas, every client in the virtual scene can learn the location information of all users in that virtual scene.
  • It can be seen that the purpose of having each acquisition system synchronize the collected local motion capture data to the local client, the acquisition systems of other areas, and the virtual scene server is to share the motion capture data, so that the different acquisition systems, and the acquisition systems and the virtual scene server, can share the motion capture data of all users in the virtual scene, achieving a data sharing effect similar to being on the same local area network.
  • Through this sharing, the location of every user in the virtual scene can be determined, the normal logic of remote virtual reality interaction is guaranteed, and the immersion of the interaction is preserved.
  • Because each acquisition system synchronizes its collected data in parallel and simultaneously to the local virtual scene client, the other acquisition systems, and the virtual scene server, synchronization time is reduced and data sharing efficiency improves, which reduces interaction latency and enhances the interactive experience.
  • When the motion capture data acquisition systems communicate with each other, the P2P communication between them can be established through the virtual scene server; that is, before the motion capture data and operation commands are received, P2P communication between the acquisition systems can be set up via the virtual scene server.
  • Moreover, the motion capture data acquisition system of the embodiments of the present invention may be an inertial, laser, optical, or other type of motion capture data acquisition system.
  • Step 402: The virtual scene server responds to the operation commands according to the received motion capture data, and synchronizes the response result to each virtual scene client.
  • After the acquisition systems have collected and synchronized the motion capture data, the interaction enters its next phase.
  • Each virtual scene client can also receive operation commands input by its local corresponding user (i.e., the user corresponding to that client) and forward them to the virtual scene server.
  • An operation command is the user's instruction to operate on a person or object in the virtual scene.
  • Specifically, the user can input operation commands to the virtual scene client by means of a handle (controller) or inertial gestures.
  • After receiving an operation command, each virtual scene client converts it into a form the virtual scene server can recognize and transmits it to the virtual scene server. That is to say, the virtual scene server can learn the operation commands of all users in the different physical areas of the virtual scene.
  • The main function of the virtual scene server is to keep the interaction logic running normally. To achieve normal interaction of users in different physical areas within the same virtual scene, the virtual scene server needs the location information of all users in the virtual scene and the operation commands of all users. Since both conditions have been satisfied in the foregoing description, the virtual scene server can respond according to the received operation commands of all users and the location information of all users in the virtual scene, and synchronize the response result to every virtual scene client.
  • After the response result is synchronized to each virtual scene client, every client that receives the result must adjust its virtual scene accordingly. The specific adjustment is: the virtual scene client adjusts the virtual scene according to the response result, the location information of all users in the virtual scene, and the viewing-angle information of the local user (the user corresponding to that client), and displays the adjusted virtual scene to the user.
  • For example, the adjusted virtual scene may be shown to the user through the helmet the user wears. This completes the interaction of users located in different physical areas within the same virtual scene.
  • It can be understood that there are many types of motion capture data acquisition systems, for example laser, inertial or optical. In the following embodiment, an optical motion capture data acquisition system is taken as the example.
  • FIG. 5 is a schematic flowchart of the second embodiment of the virtual reality interaction method applied to remote scenarios according to an embodiment of the present invention.
  • The virtual reality interaction method can run on the interaction system shown in FIG. 1 and FIG. 2.
  • In this embodiment, the virtual reality interaction method is described from the virtual scene server side.
  • The difference between this embodiment and the embodiment shown in FIG. 4 is that the motion capture data acquisition system is an optical motion capture acquisition system that includes a plurality of motion capture cameras and a camera server. This embodiment therefore spells out the way P2P communication is established through the virtual scene server and the way the motion capture data is received, as detailed below.
  • Step 501: The virtual scene server receives a link request sent by each camera server.
  • Step 502: The virtual scene server extracts the IP information of the camera server from each link request.
  • Step 503: The virtual scene server synchronizes the extracted IP information of all camera servers to each camera server in the network, so that each camera server can establish P2P communication with the other camera servers according to the received IP information.
  • Step 504: The virtual scene server receives the motion capture data sent by each motion capture data acquisition system, and the operation commands from each virtual scene client.
  • Step 505: The virtual scene server responds to the operation commands according to the received motion capture data, and synchronizes the response result to each virtual scene client.
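  • Purely as an illustration of how steps 501 to 505 might fit together on the server side, the sketch below uses an assumed message format and stand-in `transport` and `respond` callables; none of these names are prescribed by the patent.

    def run_virtual_scene_server(transport, respond):
        """Sketch of the server loop for steps 501-505 (all details assumed)."""
        camera_server_ips = set()
        positions = {}                                # latest motion capture data per user
        while True:
            msg = transport.receive()
            if msg["type"] == "link_request":         # steps 501-502: collect camera server IPs
                camera_server_ips.add(msg["ip"])
                transport.broadcast({"type": "peer_list",                 # step 503: sync the IP
                                     "ips": sorted(camera_server_ips)},   # list to every camera
                                    to=camera_server_ips)                 # server in the network
            elif msg["type"] == "motion_capture":     # step 504: motion capture data arrives
                positions[msg["user"]] = msg["position"]
            elif msg["type"] == "operation_command":  # step 504: client operation commands
                result = respond(msg, positions)      # step 505: respond using all positions
                transport.broadcast(result, to="all_clients")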
  • As can be seen from the steps above, the function of the motion capture cameras is to collect the local users' motion capture data and transmit it to the corresponding camera server.
  • The roles of the camera server are: establishing P2P communication with the camera servers of the other motion capture data acquisition systems, and sharing the local motion capture data with the local virtual scene client, the virtual scene server, and the camera servers of the other acquisition systems.
  • It should be noted that the P2P communication between camera servers may be established as follows: each camera server sends a link request to the virtual scene server, the link request carrying the IP information of that camera server.
  • The virtual scene server extracts the IP information of the camera servers from the received link requests and synchronizes the extracted IP information of all camera servers to every online camera server in the network.
  • After receiving the IP information of all camera servers from the virtual scene server, each camera server sends link requests to the other camera servers based on the received IP information, so as to establish P2P communication.
  • Once P2P communication is established between the camera servers, the motion capture data collected by the cameras can be shared between them, so that each camera server can learn the motion capture data of the remaining camera servers and synchronize it to its local virtual scene client when the interaction requires it. In this way, every virtual scene client in the different physical areas can obtain the motion capture data of all users in the virtual scene interaction.
  • The virtual reality interaction method of this embodiment uses a combination of multiple motion capture cameras and a camera server to collect the users' motion capture data, and, with the virtual scene server acting as a relay, establishes P2P communication between the camera servers to realize the sharing of motion capture data, which ensures that the transmission of the motion capture data is not disturbed by the external network.
  • Meanwhile, after a virtual scene client receives a user's operation command, it uploads the operation command to the virtual scene server.
  • The virtual scene server acts as the interaction control center: according to the received operation commands of all users and the location information of all users (the motion capture data), it responds to the users' operation commands and delivers the response result to each virtual scene client.
  • Each virtual scene client renders the corresponding virtual scene according to the received response result, the location information of each user, and the viewing-angle information of the user corresponding to that client, and displays it to the user, thereby achieving virtual reality interaction of multiple users in remote scenarios within the same virtual scene.
  • FIG. 6 is a schematic block diagram of a server provided by an embodiment of the present invention.
  • As shown in FIG. 6, the server 6 of this embodiment includes one or more processors 60, a memory 61, and a computer program 62 stored in the memory 61 and operable on the processors 60.
  • When the processor 60 executes the computer program 62, the steps in the method embodiments above are implemented, for example steps S401 to S402 shown in FIG. 4, or steps S501 to S505 shown in FIG. 5.
  • Illustratively, the computer program 62 can be partitioned into one or more modules/units that are stored in the memory 61 and executed by the processor 60 to implement the present invention.
  • The one or more modules/units may be a series of computer program instruction segments capable of performing particular functions, the instruction segments describing the execution of the computer program 62 in the server 6.
  • The server includes, but is not limited to, the processor 60 and the memory 61. Those skilled in the art will understand that FIG. 6 is only an example of the server 6 and does not constitute a limitation on it; the server may include more or fewer components than illustrated, combine certain components, or use different components. For example, the server may also include input devices, output devices, network access devices, a bus, and the like.
  • The processor 60 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • The memory 61 may be an internal storage unit of the server 6, such as a hard disk or memory of the server 6.
  • The memory 61 may also be an external storage device of the server 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card fitted on the server 6.
  • Further, the memory 61 may include both an internal storage unit of the server 6 and an external storage device.
  • The memory 61 is used to store the computer program and the other programs and data needed by the server.
  • The memory 61 may also be used to temporarily store data that has been output or is about to be output.
  • The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • The technical solution of the embodiments of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device or processor to perform all or part of the steps of the methods described in the various embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Information Transfer Between Computers (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A virtual reality interaction system and method, and a server. The method comprises: a virtual scene server receives motion capture data sent by each motion capture data acquisition system, and operation commands from each virtual scene client; there are at least two motion capture data acquisition systems and each corresponds to at least one local virtual scene client; the virtual scene server responds to the operation commands according to the received motion capture data, and synchronizes the response result to each virtual scene client, so that each virtual scene client can adjust the corresponding virtual scene according to the response result, the motion capture data collected by the local motion capture data acquisition system, and the motion capture data from other acquisition systems relayed by the local acquisition system, and display the adjusted virtual scene to the user. Interaction of different users located in remote scenarios within the same virtual scene can thus be achieved.

Description

Virtual reality interaction system and method, and computer storage medium

Technical Field

The present invention belongs to the field of virtual reality interaction technologies, and in particular relates to a virtual reality interaction system, method and computer storage medium for remote scenarios.
Background

At present, the process of virtual reality interaction is generally: acquire the user's motion capture data (three-dimensional spatial position), and then transmit the motion capture data to the server of the virtual scene. The server determines the user's location in the virtual scene according to the motion capture data, performs the corresponding interaction response, and synchronously displays the response result to the user. During virtual reality interaction, the motion capture data can be collected in various ways, for example inertial, laser or optical motion capture.

In virtual reality interaction based on optical motion capture, multiple motion capture cameras in the optical motion capture system can identify the optical marker points attached to the observed object, and the cameras' image acquisition system computes the coordinate position information of the marker points (i.e., the motion capture data), which is then transmitted over a network (wired, wireless, USB, etc.) to the camera server. The camera server receives the coordinate position information from the motion capture cameras, identifies the observed object according to that information, obtains the user's position in the physical scene, and then sends this position information to the server and the client of the virtual scene. The server of the virtual scene maps the position information into the virtual scene, thereby determining the user's location in the virtual scene, and displays it to the user through the client of the virtual scene.

At present, in the above interaction flow, as shown in FIG. 3, the motion capture data travels as follows: the virtual scene server 31 and the virtual scene client 32 each obtain the corresponding motion capture data from the optical motion capture system 33. Because the communication and synchronization between the virtual scene server 31, the client 32 and the optical motion capture system 33 are all developed on a local area network, the current communication scheme can only realize virtual reality interaction within the same physical space.

As virtual reality interaction technology finds wider application, there is a demand for users located in different places to realize virtual reality interaction in the same virtual scene, but so far there has been no good solution.
Summary

In view of this, the present invention provides a synchronized virtual reality interaction system that enables different users located in remote scenarios to interact in the same virtual scene.

A first aspect of the embodiments of the present invention provides a virtual reality interaction system, the system including: at least two interaction subsystems and a virtual scene server; the virtual scene server runs on a wide area network; each interaction subsystem includes: a motion capture data acquisition system and at least one virtual scene client;

the motion capture data acquisition system is configured to collect local motion capture data and send the motion capture data to the local virtual scene client, the virtual scene server, and the motion capture data acquisition systems in the other interaction subsystems;

the virtual scene client is configured to receive operation commands of the local corresponding user and pass the operation commands to the virtual scene server, and to receive the motion capture data sent by the local motion capture data acquisition system as well as the motion capture data from other acquisition systems relayed by the local acquisition system;

the virtual scene server is configured to respond according to the received operation commands from all virtual scene clients and the motion capture data from all acquisition systems, and to synchronize the response result to each virtual scene client;

the virtual scene client is configured to adjust the corresponding virtual scene according to the response result, the motion capture data collected by the local acquisition system, and the motion capture data from other acquisition systems relayed by the local acquisition system, and to display the adjusted virtual scene to the user.

A second aspect of the embodiments of the present invention provides a virtual reality interaction method, the method including:

a virtual scene server receives the motion capture data sent by each motion capture data acquisition system, and the operation commands from each virtual scene client; there are at least two motion capture data acquisition systems and each corresponds to at least one local virtual scene client;

the virtual scene server responds to the operation commands according to the received motion capture data and synchronizes the response result to each virtual scene client, so that each virtual scene client can adjust the corresponding virtual scene according to the response result, the motion capture data collected by the local acquisition system, and the motion capture data from other acquisition systems relayed by the local acquisition system, and display the adjusted virtual scene to the user.

A third aspect of the embodiments of the present invention provides a server including a memory, a processor, and a computer program stored in the memory and operable on the processor, wherein the processor, when executing the computer program, implements the steps of any of the virtual reality interaction methods described above.

A fourth aspect of the embodiments of the present invention provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the virtual reality interaction methods described above.

Compared with the prior art, the present invention has the following beneficial effects:

In the technical solution provided by the present invention, after the virtual scene clients receive the users' operation commands, they upload the operation commands to the virtual scene server. The virtual scene server, acting as the control center, responds to the users' operation commands according to the received operation commands of all users and the location information of all users (the motion capture data), and delivers the response result to each virtual scene client. Each virtual scene client renders the corresponding virtual scene according to the received response result, the position information of each user and the viewing-angle information of the user corresponding to that client, and displays it to the user, thereby achieving virtual reality interaction of multiple users in remote scenarios within the same virtual scene.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below.

FIG. 1 is a schematic structural diagram of a first embodiment of the virtual reality interaction system provided by the present invention;

FIG. 2 is a schematic structural diagram of a second embodiment of the virtual reality interaction system provided by the present invention;

FIG. 3 is a schematic diagram of the flow of motion capture data in virtual reality interaction in the prior art;

FIG. 4 is a schematic flowchart of a first embodiment of the virtual reality interaction method provided by the present invention;

FIG. 5 is a schematic flowchart of a second embodiment of the virtual reality interaction method provided by the present invention;

FIG. 6 is a schematic block diagram of an embodiment of the server provided by the present invention.
Detailed Description

In the following description, for the purpose of illustration rather than limitation, specific details such as particular system structures and techniques are set forth in order to provide a thorough understanding of the embodiments of the present invention. However, it will be apparent to those skilled in the art that the present invention may also be practiced in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits and methods are omitted so that unnecessary detail does not obscure the description of the present invention.

It should be understood that, when used in this specification and the appended claims, the term "comprising" indicates the presence of the described features, wholes, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components and/or collections thereof.

It should also be understood that the terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit the present invention. As used in this specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.

It should further be understood that the term "and/or" used in this specification and the appended claims refers to and encompasses any and all possible combinations of one or more of the associated listed items.

As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when" or "once" or "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, to mean "once determined" or "in response to determining" or "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".

To explain the technical solutions described in the present invention, specific embodiments are used below.
The virtual reality interaction solution of the embodiments of the present invention is applicable to virtual reality interaction in remote scenarios, that is, users located in different physical areas can interact in the same virtual scene. Herein, users in different physical areas are also called: remote users, users in remote scenarios, or users under different motion capture systems. Detailed descriptions follow through specific embodiments.

The virtual reality interaction system provided by the present invention is used to realize interaction between users located in different physical areas in the same virtual scene, and includes: at least two interaction subsystems and a virtual scene server. Each interaction subsystem includes: a motion capture data acquisition system and at least one virtual scene client. The detailed description below takes as an example an interaction system 100 that includes two interaction subsystems, each containing one virtual scene client. Each virtual scene client corresponds to one user, and a virtual scene client can receive operation commands input by its corresponding user.
As shown in FIG. 1, which is a structural block diagram of the first embodiment of the virtual reality interaction system provided by the present invention, the interaction system 100 is used to realize interaction between users located in different physical areas in the same virtual scene, and includes: a motion capture data acquisition system 1011 located in a first area 11 and a virtual scene client 1012 located in the same area as the acquisition system 1011; a motion capture data acquisition system 1021 located in a second area 12 and a virtual scene client 1022 located in the same area as the acquisition system 1021; and a virtual scene server 103 running on a wide area network. The first area 11 and the second area 12 are different physical areas.

The motion capture data acquisition system 1011 collects the first motion capture data of the local target object (the target object includes the user or another item such as a game gun; hereinafter the description takes the user as the target object) and synchronizes the first motion capture data to the virtual scene client 1012; at the same time, the acquisition system 1011 also transmits the collected first motion capture data to the motion capture data acquisition system 1021 and the virtual scene server 103. Similarly, the motion capture data acquisition system 1021 collects the second motion capture data of its local target object and synchronizes the second motion capture data to the client 1022; at the same time, the acquisition system 1021 also transmits the collected second motion capture data to the motion capture data acquisition system 1011 and the virtual scene server 103.

The motion capture data may specifically include: a rigid body name, rigid body data, and a rigid body identification number. A terminal device that receives the motion capture data can identify the rigid body according to the rigid body name and the rigid body identification number, determine the user to which the rigid body belongs, and also determine that user's position information from the rigid body data.

After receiving the first motion capture data collected by the acquisition system 1011 and the second motion capture data collected by the acquisition system 1021, the virtual scene server 103 can thus obtain the location information of all users in the virtual scene.

The virtual scene client 1012, for its part, receives the first motion capture data sent by the local acquisition system 1011, and can also receive, relayed by the local acquisition system 1011, the second motion capture data coming from the acquisition system 1021. That is to say, the virtual scene client 1012 can learn the location information of all users in the different physical areas. The virtual scene client 1022 likewise receives the second motion capture data from the local acquisition system 1021, and can also receive, relayed by the local acquisition system 1021, the first motion capture data coming from the acquisition system 1011. In other words, even though the users are in different physical areas, every client in the virtual scene can learn the location information of all users in that virtual scene.

It can be seen that the purpose of having each acquisition system synchronize the collected users' motion capture data to the local client, the acquisition systems of other areas and the virtual scene server is to share the motion capture data, so that the different acquisition systems, and the acquisition systems and the virtual scene server, can share the motion capture data of all users in the virtual scene, achieving a data sharing effect similar to being on the same local area network. Through this sharing, the location of every user in the virtual scene can be determined, the normal logic of remote virtual reality interaction is guaranteed, and the immersion of the interaction is preserved. Because each acquisition system synchronizes its collected data in parallel to the local virtual scene client, the other acquisition systems and the virtual scene server at the same time, synchronization time is reduced and data sharing efficiency improves, which in turn reduces interaction latency and enhances the interactive experience.
When the motion capture data acquisition system 1011 and the motion capture data acquisition system 1021 communicate with each other, a P2P communication mode may be chosen. To establish P2P communication between them, the acquisition systems 1011 and 1021 each actively send a link request to the virtual scene server 103, carrying their own IP information in the link request. After receiving the link requests, the virtual scene server 103 extracts the IP information from them and synchronizes all of the extracted IP information to the currently online acquisition systems (1011, 1021). Only after receiving all of the IP information can each acquisition system establish a P2P communication connection with the other acquisition systems.

It can be understood that the motion capture data acquisition system of the embodiments of the present invention may be an inertial, laser, optical, or other type of motion capture data acquisition system.

After the acquisition systems have collected and synchronized the motion capture data, the interaction enters its next phase.
The virtual scene client 1012 can also receive operation commands input by its local corresponding user (i.e., the user corresponding to the virtual scene client 1012) and forward them to the virtual scene server 103. Likewise, the virtual scene client 1022 can also receive operation commands input by its local corresponding user (the user corresponding to the virtual scene client 1022) and forward them to the virtual scene server 103. An operation command is the user's instruction to operate on a person or object in the virtual scene.

Specifically, the user can input operation commands to the virtual scene client by means of a handle (controller) or inertial gestures. After receiving an operation command, each virtual scene client converts it into a form recognizable by the virtual scene server 103 and transmits it to the virtual scene server 103. That is to say, the virtual scene server 103 can learn the operation commands of all users in the different physical areas of the virtual scene.

The main function of the virtual scene server 103 is to keep the interaction logic running normally. To achieve normal interaction of users in different physical areas within the same virtual scene, the virtual scene server 103 needs the location information of all users in the virtual scene and the operation commands of all users. Since both conditions have been satisfied in the foregoing description, the virtual scene server 103 can respond according to the received operation commands of all users and the location information of all users in the virtual scene, and synchronize the response result to every virtual scene client, for example to the virtual scene client 1012 and the virtual scene client 1022.

After the response result is synchronized to the virtual scene clients 1012 and 1022, each virtual scene client that receives the response result must adjust its virtual scene accordingly. The specific adjustment is: the virtual scene client adjusts the virtual scene according to the response result, the location information of all users in the virtual scene (i.e., the first and second motion capture data), and the viewing-angle information of the local user (the user corresponding to that client), and displays the adjusted virtual scene to the user. For example, the adjusted virtual scene may be shown to the user through the helmet the user wears. This completes the interaction of users located in different physical areas within the same virtual scene.
It can be understood that there are many types of motion capture data acquisition systems, for example laser, inertial or optical; in the following embodiment, an optical motion capture data acquisition system is taken as the example and described in detail.

Referring to FIG. 2, which is a structural block diagram of the second embodiment of the virtual reality interaction system provided by the present invention, the difference between this embodiment and the embodiment shown in FIG. 1 is that this embodiment gives a concrete structure for the motion capture data acquisition system. The motion capture data acquisition system of this embodiment is specifically an optical motion capture acquisition system, and each optical motion capture acquisition system includes: a camera server and a plurality of motion capture cameras, as described below.

As shown in FIG. 2, the motion capture data acquisition system 1011 specifically includes: a plurality of motion capture cameras 1011a and a camera server 1011b. Likewise, the motion capture data acquisition system 1021 specifically includes: a plurality of motion capture cameras 1021a and a camera server 1021b.

The function of the motion capture cameras is to collect the local users' motion capture data and transmit it to the corresponding camera server. Specifically, the motion capture cameras 1011a are configured to collect the first motion capture data of the local users and transmit it to the camera server 1011b; the motion capture cameras 1021a are configured to collect the second motion capture data of the local users and transmit it to the camera server 1021b.

The roles of the camera server are: establishing P2P communication with the camera servers of the other acquisition systems, and sharing the local motion capture data with the local virtual scene client, the virtual scene server and the camera servers of the other acquisition systems. Specifically, the camera server 1011b is configured to establish P2P communication with the camera server 1021b, and also to transmit the first motion capture data collected by the cameras 1011a to the local virtual scene client 1012, the virtual scene server 103 and the camera server 1021b. Likewise, the camera server 1021b is configured to establish P2P communication with the camera server 1011b, and also to transmit the second motion capture data collected by the cameras 1021a to the local virtual scene client 1022, the virtual scene server 103 and the camera server 1011b.

It should be noted that the P2P communication between the camera server 1021b and the camera server 1011b may be established as follows:

The camera server 1021b sends a link request to the virtual scene server 103 carrying the IP information of the camera server 1021b; the camera server 1011b also sends a link request to the virtual scene server 103 carrying the IP information of the camera server 1011b. The virtual scene server 103 synchronizes the received IP information of the camera server 1021b and of the camera server 1011b to the online camera servers in the network (i.e., the camera server 1021b and the camera server 1011b).

After receiving the IP information of the camera server 1021b and of the camera server 1011b, the camera server 1021b initiates a connection request to the camera server 1011b based on the IP information of the camera server 1011b, so as to establish P2P communication. Similarly, after receiving the IP information of the camera server 1021b and of the camera server 1011b, the camera server 1011b initiates a connection request to the camera server 1021b based on the IP information of the camera server 1021b, so as to establish P2P communication.

Once P2P communication is established between the camera servers, the motion capture data collected by the cameras (1011a, 1021a) can be shared between the camera servers, so that the camera server 1021b can learn the motion capture data of the camera server 1011b and synchronize it to the local virtual scene client 1022 when the interaction requires it; likewise, the camera server 1011b can learn the motion capture data of the camera server 1021b and synchronize it to the local virtual scene client 1012 when needed. In this way, every virtual scene client in the different physical areas can obtain the motion capture data of all users in the virtual scene interaction.

The virtual reality interaction system of this embodiment uses a combination of multiple motion capture cameras and a camera server to collect the users' motion capture data and, with the virtual scene server acting as a relay, establishes P2P communication between the camera servers to realize the sharing of motion capture data, which ensures that the transmission of the motion capture data is not disturbed by the external network. Meanwhile, after the virtual scene clients receive the users' operation commands, they upload the operation commands to the virtual scene server. The virtual scene server, acting as the interaction control center, responds to the users' operation commands according to the received operation commands of all users and the location information of all users (the motion capture data), and delivers the response result to each virtual scene client. Each virtual scene client renders the corresponding virtual scene according to the received response result, the location information of each user and the viewing-angle information of the user corresponding to that client, and displays it to the user, thereby achieving virtual reality interaction of multiple users in remote scenarios within the same virtual scene.
It can be understood that during virtual reality interaction the user moves in real time, so the motion capture data collected by the acquisition system is also continuous, i.e., it comprises samples at multiple moments. After collecting them, the acquisition system usually needs to synchronize the samples of all moments to preserve the integrity of the motion capture data.

As an exception, when the amount of collected motion capture data is large, sharing all of it places a heavy load on the network bandwidth, and in a poor network environment this causes response delays that defeat the real-time interaction of virtual reality. It is therefore worth considering not sharing all of the data, i.e., sharing only part of the motion capture data, for example selecting a subset of the samples collected at multiple moments at a preset time interval. In that case, because the motion capture data received by the virtual scene server and the camera servers is incomplete, teleportation or stuttering of the interactive picture would occur during the interaction response. For this reason, in the operation of the interaction system of the first or second embodiment of the present invention, when the data volume is too large to share in full, the sharing of the motion capture data can be optimized; after optimization the network load is reduced while teleportation or stuttering of the picture during the interaction is still avoided. The optimization is described below.

Specifically, when the acquisition systems share motion capture data, for example when the camera servers (1011b, 1021b) share data with each other and when they share data with the virtual scene server 103, not all of the motion capture data collected by the cameras is shared; only a part is. For example, suppose the motion capture cameras collect five samples at times T1, T2, T3, T4 and T5. A subset of the five samples can be selected at a preset time interval (say, the samples at T2 and T5), and only the selected samples are shared, which eases the network load. The preset time intervals may or may not be equal.

Sharing only part of the motion capture data eases the network burden, but it brings the problem of stuttering or teleportation of the interactive picture. To solve this, linear interpolation can be applied, on the virtual scene server and camera server side, to the received motion capture data, simulating the samples for the moments that were not uploaded and rendering the picture from the simulated data, thereby avoiding stuttering or teleportation of the interactive picture during the interaction.
When the virtual scene server 103 receives the uploaded, selected part of the motion capture data, it needs to perform linear interpolation, specifically as follows:

For example, on receiving the sample for time T2, the virtual scene server 103 determines from it that the user's position in the virtual scene is point B. At the same time, the server looks up the user's currently recorded position in the virtual scene, point A. Linear interpolation is then performed over points A and B, the obtained interpolation time interval, and the preset time interval (the time difference between T2 and T5), so that by the time the user has moved from point A to point B the virtual scene server 103 is just receiving the sample for time T5.

Specifically, the virtual scene server 103 may perform the linear interpolation over points A and B, the obtained interpolation time interval and the preset time interval (the time difference between T2 and T5) as follows:

Taking point A as the start position, point B as the end position, the time difference between T2 and T5 as the preset time interval, and using the obtained interpolation time interval, the interpolated data between A and B, i.e., the simulated position of the user, is computed with the formulas:

x_n = x_{n-1} + (X × T_{n-1,n}) / T_0

y_n = y_{n-1} + (Y × T_{n-1,n}) / T_0

z_n = z_{n-1} + (Z × T_{n-1,n}) / T_0

Here (x_n, y_n, z_n) are the coordinates of the n-th interpolated position in the three-dimensional coordinate system, n = 1, 2, 3, ...; for n = 1, (x_0, y_0, z_0) is the start position; (X, Y, Z) is the vector from the start position A to the end position B in the three-dimensional coordinate system, which can be obtained from the coordinates of A and B; T_{n-1,n} is the time needed to go from the (n-1)-th interpolated position to the n-th (the interpolation time interval), which can be set or obtained from the runtime platform; and T_0 is the preset time interval.

After the user's positions between point A and point B have been simulated with the above formulas, the corresponding response can be made according to the simulated position information.

In the same way, the camera servers (1011b, 1021b) can also simulate the user position information in the same manner as the virtual scene server, and synchronize the simulated positions to their local virtual scene clients (1012, 1022), so that the clients can render the corresponding virtual scene from the simulated positions, ensuring the smoothness of the interactive picture.
FIGS. 1 to 2 above describe the virtual reality interaction system for remote scenarios in detail. Below, with reference to the accompanying drawings, the virtual reality interaction method for remote scenarios that uses the above interaction system, and the computer readable storage medium, are described in detail. To avoid repetition, terms already described above may not be explained again below.

FIG. 4 is a schematic flowchart of the first embodiment of the virtual reality interaction method applied to remote scenarios according to an embodiment of the present invention. The method can run on the interaction system shown in FIGS. 1 and 2; in this embodiment it is described from the virtual scene server side. The virtual reality interaction method includes the following steps:

Step 401: The virtual scene server receives the motion capture data sent by each motion capture data acquisition system, and the operation commands from each virtual scene client.

Since this method runs on the interaction system shown in FIGS. 1 and 2, the virtual scene server can receive motion capture data sent by at least two acquisition systems. The at least two acquisition systems are located in different motion capture areas, i.e., in different physical areas or different places, and each acquisition system corresponds to at least one local virtual scene client. The motion capture data is the data of the local users collected by each acquisition system. Each acquisition system, after collecting its local motion capture data, needs to synchronize that data to the local virtual scene client, and also transmits the collected data to the other acquisition systems and the virtual scene server.

The motion capture data may specifically include: a rigid body name, rigid body data and a rigid body identification number. A terminal device that receives the motion capture data can identify the rigid body according to the rigid body name and the rigid body identification number, determine the user to which the rigid body belongs, and also determine that user's position information from the rigid body data.

After receiving the motion capture data collected by the two acquisition systems, the virtual scene server can thus obtain the location information of all users in the virtual scene.

Each virtual scene client receives the motion capture data from its local acquisition system, and also receives, relayed by the local system, the motion capture data from the other acquisition systems. That is to say, the virtual scene client can learn the location information of all users in the different physical areas: even though the users are in different physical areas, every client in the virtual scene can learn the location information of all users in that virtual scene.

It can be seen that the purpose of having each acquisition system synchronize the collected local motion capture data to the local client, the acquisition systems of other areas and the virtual scene server is to share the motion capture data, so that the different acquisition systems, and the acquisition systems and the virtual scene server, can share the motion capture data of all users in the virtual scene, achieving a data sharing effect similar to being on the same local area network. Through this sharing, the location of every user in the virtual scene can be determined, the normal logic of remote virtual reality interaction is guaranteed, and the immersion of the interaction is preserved. Because each acquisition system synchronizes its collected data in parallel and simultaneously to the local virtual scene client, the other acquisition systems and the virtual scene server, synchronization time is reduced and data sharing efficiency improves, which reduces interaction latency and enhances the interactive experience.

When the acquisition systems communicate with each other, the P2P communication between them can be established through the virtual scene server; that is, before the motion capture data and operation commands are received, P2P communication between the acquisition systems can be set up via the virtual scene server. Moreover, the motion capture data acquisition system of the embodiments of the present invention may be an inertial, laser, optical or other type of motion capture data acquisition system.

Step 402: The virtual scene server responds to the operation commands according to the received motion capture data, and synchronizes the response result to each virtual scene client.

After the acquisition systems have collected and synchronized the motion capture data, the interaction enters its next phase.

Each virtual scene client can also receive operation commands input by its local corresponding user (i.e., the user corresponding to that client) and forward them to the virtual scene server. An operation command is the user's instruction to operate on a person or object in the virtual scene. Specifically, the user can input operation commands to the virtual scene client by means of a handle (controller) or inertial gestures. After receiving an operation command, each virtual scene client converts it into a form the virtual scene server can recognize and transmits it to the virtual scene server. That is to say, the virtual scene server can learn the operation commands of all users in the different physical areas of the virtual scene.

The main function of the virtual scene server is to keep the interaction logic running normally. To achieve normal interaction of users in different physical areas within the same virtual scene, the virtual scene server needs the location information of all users in the virtual scene and the operation commands of all users. Since both conditions have been satisfied in the foregoing description, the virtual scene server can respond according to the received operation commands of all users and the location information of all users in the virtual scene, and synchronize the response result to every virtual scene client.

After the response result is synchronized to each virtual scene client, every client that receives the result must adjust its virtual scene accordingly. The specific adjustment is: the virtual scene client adjusts the virtual scene according to the response result, the location information of all users in the virtual scene and the viewing-angle information of the local user (the user corresponding to that client), and displays the adjusted virtual scene to the user. For example, the adjusted virtual scene may be shown to the user through the helmet the user wears. This completes the interaction of users located in different physical areas within the same virtual scene.

It can be understood that there are many types of motion capture data acquisition systems, for example laser, inertial or optical; in the following embodiment an optical motion capture data acquisition system is taken as the example.
如图5所示,是本发明实施例提供的应用于异地场景的虚拟现实交互方法的第二实施例的流程示意图。所述虚拟现实交互方法可以运行于图1、图2所示的交互系统上。本发明实施例中,从虚拟场景服务器侧对虚拟现实交互方法进行描述。本发明实施例与图4所示的实施例的区别在于:所述动捕数据采集系统为光学动捕采集系统,包括:多个动捕相机和相机服务器。因此在本步骤中,对通过虚拟场景服务器建立P2P通信的方式,以及接收动捕数据的方式进行了具体描述,下面将详细说明。
步骤501,虚拟场景服务器接收每一所述相机服务器发来的链接请求。
步骤502,虚拟场景服务器从所述链接请求中提取所述相机服务器的IP信息。
步骤503,虚拟场景服务器将提取的所有相机服务器的IP信息同步至网络中的每一所述相机服务器;以便每一所述相机服务器能够根据接收到的IP信息,与其它相机服务器建立P2P通信。
步骤504,虚拟场景服务器接收每一动捕数据采集系统发来的动捕数据,以及来自每一虚拟场景客户端的操作命令。
步骤505,虚拟场景服务器根据接收到的所述动捕数据对所述操作命令进行响应,并 将响应结果同步至每一所述虚拟场景客户端。
As the above steps show, the role of the motion capture cameras is to collect the local users' motion capture data and transmit it to the corresponding camera server. The roles of the camera server are: establishing P2P communication with the camera servers of the other motion capture data acquisition systems, and sharing the local motion capture data with the local virtual scene client, the virtual scene server, and the camera servers of the other acquisition systems.
It should be noted that P2P communication between camera servers can be established as follows:
Each camera server sends a link request to the virtual scene server, the link request carrying that camera server's IP information. The virtual scene server extracts the IP information from each received link request and synchronizes the extracted IP information of all camera servers to every online camera server in the network.
After receiving the IP information of all camera servers from the virtual scene server, each camera server sends link requests to the other camera servers according to the received IP information, thereby establishing P2P communication.
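By way of illustration only, the exchange of steps 501 to 503 can be condensed into a single-process sketch (the class and method names are hypothetical; real camera servers would of course use network sockets):

```python
class VirtualSceneServer:
    """Relays camera-server IPs so the camera servers can then talk peer to peer."""
    def __init__(self):
        self.known_ips = []

    def handle_link_request(self, ip: str) -> None:
        # Steps 501/502: receive a link request and extract the IP information.
        if ip not in self.known_ips:
            self.known_ips.append(ip)

    def broadcast_ips(self, camera_servers: list) -> None:
        # Step 503: synchronize every known IP to every online camera server.
        for cam in camera_servers:
            cam.receive_peer_ips(self.known_ips)

class CameraServer:
    def __init__(self, ip: str):
        self.ip = ip
        self.peers = []

    def receive_peer_ips(self, ips: list) -> None:
        # Each camera server links to every other one, establishing P2P communication.
        self.peers = [p for p in ips if p != self.ip]

scene_server = VirtualSceneServer()
cams = [CameraServer("10.0.0.1"), CameraServer("10.0.0.2")]
for cam in cams:
    scene_server.handle_link_request(cam.ip)
scene_server.broadcast_ips(cams)
print(cams[0].peers)  # ['10.0.0.2']
```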
Once P2P communication between camera servers has been established, the motion capture data collected by the motion capture cameras can be shared among the camera servers, so that each camera server can learn the motion capture data held by the other camera servers and synchronize it to the local virtual scene client whenever the interaction requires it. In this way, every virtual scene client, whatever its physical area, is guaranteed access to the motion capture data of all users taking part in the virtual scene interaction.
In the virtual reality interaction method of the embodiments of the present invention, multiple motion capture cameras combined with a camera server collect the users' motion capture data, and the virtual scene server acts as a relay to establish P2P communication between the camera servers, achieving the sharing of motion capture data; this ensures that the transmission of motion capture data is not disturbed by the external network. Meanwhile, after receiving a user's operation command, each virtual scene client uploads it to the virtual scene server. Acting as the hub of interaction control, the virtual scene server responds to the users' operation commands according to all received operation commands and all users' positions (motion capture data), and delivers the response results to every virtual scene client. Each virtual scene client then renders its virtual scene according to the received response results, the positions of the users, and the viewing angle of its corresponding user, and displays it to the user, thereby achieving virtual reality interaction of multiple users in the same virtual scene across different places.
Fig. 6 is a schematic block diagram of a server provided by an embodiment of the present invention. As shown in Fig. 6, the server 6 of this embodiment includes: one or more processors 60, a memory 61, and a computer program 62 stored in the memory 61 and executable on the processor 60. When the processor 60 executes the computer program 62, the steps of the above method embodiments are implemented, for example steps 401 to 402 shown in Fig. 4, or steps 501 to 505 shown in Fig. 5.
Exemplarily, the computer program 62 may be divided into one or more modules/units, which are stored in the memory 61 and executed by the processor 60 to carry out the present invention. The one or more modules/units may be a series of computer program instruction segments capable of accomplishing particular functions, the instruction segments describing the execution process of the computer program 62 in the server 6.
The server includes, but is not limited to, the processor 60 and the memory 61. Those skilled in the art will understand that Fig. 6 is merely an example of the server 6 and does not limit the server 6; the server may include more or fewer components than shown, combine certain components, or use different components. For example, the server may further include input devices, output devices, network access devices, a bus, and so on.
The processor 60 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 61 may be an internal storage unit of the server 6, for example a hard disk or memory of the server 6. The memory 61 may also be an external storage device of the server 6, for example a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the server 6. Further, the memory 61 may include both an internal storage unit of the server 6 and an external storage device. The memory 61 is used to store the computer program and the other programs and data required by the server. The memory 61 may also be used to temporarily store data that has been or is about to be output.
The description of each of the above embodiments has its own emphasis. For parts not detailed or recorded in one embodiment, reference may be made to the relevant descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or in software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions differently for each particular application, but such implementations should not be considered to go beyond the scope of the present invention.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the solutions of the embodiments.
The technical solutions of the embodiments of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. That computer software product is stored in a storage medium and includes instructions for causing a computer device or processor to execute all or part of the steps of the methods described in the various embodiments of the present invention.
The above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, and some of their technical features can be replaced by equivalents; such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (14)

  1. A virtual reality interaction system, characterized in that the system comprises: at least two interaction subsystems and a virtual scene server; the virtual scene server runs on a wide area network; each interaction subsystem comprises: a motion capture data acquisition system and at least one virtual scene client;
    the motion capture data acquisition system is configured to collect motion capture data of a local target object and send the motion capture data to the local virtual scene client, the virtual scene server, and the motion capture data acquisition systems in the other interaction subsystems;
    the virtual scene client is configured to receive operation commands of the corresponding local user and transmit the operation commands to the virtual scene server; and to receive the motion capture data sent by the local motion capture data acquisition system, as well as the motion capture data from the other motion capture data acquisition systems forwarded by the local motion capture data acquisition system;
    the virtual scene server is configured to respond according to the operation commands received from all virtual scene clients and the motion capture data received from all motion capture data acquisition systems, and to synchronize the response results to each virtual scene client;
    the virtual scene client is configured to adjust the corresponding virtual scene according to the response results, the motion capture data collected by the local motion capture data acquisition system, and the motion capture data from the other motion capture data acquisition systems forwarded by the local motion capture data acquisition system, and to display the adjusted virtual scene to the user.
  2. The virtual reality interaction system according to claim 1, characterized in that the motion capture data acquisition system is further configured to establish P2P communication with the motion capture data acquisition systems in the other interaction subsystems.
  3. The virtual reality interaction system according to claim 2, wherein the motion capture data acquisition system is an optical motion capture acquisition system comprising: multiple motion capture cameras and a camera server;
    the motion capture cameras are configured to collect the motion capture data of the local target object and transmit it to the camera server;
    the camera server is specifically configured to establish P2P communication with the camera servers in the other motion capture data acquisition systems, to synchronize the motion capture data to the local virtual scene client, and to also upload the motion capture data to the virtual scene server and the camera servers in the other motion capture data acquisition systems.
  4. The virtual reality interaction system according to claim 3, characterized in that, when establishing P2P communication with the camera servers in the other interaction subsystems, the camera server is specifically configured to:
    send a link request to the virtual scene server, the link request carrying the IP information of the camera server, so that the virtual scene server synchronizes the received IP information of all camera servers to each camera server in the network;
    the camera server is further configured to receive the IP information of all camera servers from the virtual scene server and to establish P2P communication with the other camera servers according to the IP information.
  5. The virtual reality interaction system according to claim 1, characterized in that the motion capture data includes: a rigid body name, rigid body data, and a rigid body identification number.
  6. The virtual reality interaction system according to claim 1, characterized in that the motion capture data includes motion capture data of multiple moments, and the motion capture data acquisition system is specifically configured to:
    send, at preset time intervals, the motion capture data of some of the multiple moments to the virtual scene server and the motion capture data acquisition systems in the other interaction subsystems.
  7. The virtual reality interaction system according to claim 6, characterized in that the virtual scene server is specifically configured to: determine, from the motion capture data received at the current moment, the user's position in the virtual scene and take that position as the end point information; take the user's position recorded in the virtual scene server as the start point information; and perform linear interpolation according to the start position, the end position, the obtained interpolation time interval, and the preset time interval, so as to simulate the user's other positions between the start position and the end position and respond accordingly;
    the camera server is specifically configured to: determine, from the motion capture data received at the current moment, the user's position in the virtual scene and take that position as the end point information; take the user's current position recorded in the camera server as the start point information; and perform linear interpolation according to the start position, the end position, the obtained interpolation time interval, and the preset time interval, so as to simulate the user's other positions between the start position and the end position and synchronize them to the local virtual scene client.
  8. A virtual reality interaction method, characterized in that the method comprises:
    a virtual scene server receiving the motion capture data sent by each motion capture data acquisition system, and the operation commands from each virtual scene client; wherein there are at least two motion capture data acquisition systems and each motion capture data acquisition system corresponds to at least one local virtual scene client;
    the virtual scene server responding to the operation commands according to the received motion capture data, and synchronizing the response results to each virtual scene client, so that the virtual scene client can adjust the corresponding virtual scene according to the response results, the motion capture data collected by the local motion capture data acquisition system, and the motion capture data from the other motion capture data acquisition systems forwarded by the local motion capture data acquisition system, and display the adjusted virtual scene to the user.
  9. The virtual reality interaction method according to claim 8, characterized in that, before the virtual scene server receives the motion capture data sent by each motion capture data acquisition system, the method further comprises:
    the virtual scene server establishing P2P communication between the motion capture data acquisition systems.
  10. The virtual reality interaction method according to claim 9, wherein the motion capture data acquisition system is an optical motion capture acquisition system comprising: multiple motion capture cameras and a camera server; the virtual scene server receiving the motion capture data from each motion capture data acquisition system specifically comprises:
    the virtual scene server receiving the motion capture data from the camera server, the motion capture data being the motion capture data of the local target object collected by the motion capture cameras.
  11. The virtual reality interaction method according to claim 10, characterized in that the virtual scene server establishing P2P communication between the motion capture data acquisition systems specifically comprises:
    the virtual scene server receiving a link request sent by each camera server;
    the virtual scene server extracting the IP information of the camera server from the link request;
    the virtual scene server synchronizing the extracted IP information of all camera servers to each camera server in the network, so that each camera server can establish P2P communication with the other camera servers according to the received IP information.
  12. The virtual reality interaction method according to claim 8, characterized in that the motion capture data includes: a rigid body name, rigid body data, and a rigid body identification number.
  13. A computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the steps of the method according to any one of claims 8 to 12 are implemented.
  14. A server comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that, when the processor executes the computer program, the steps of the method according to any one of claims 8 to 12 are implemented.
PCT/CN2017/099011 2017-08-25 2017-08-25 Virtual reality interaction system and method, and computer storage medium WO2019037074A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2017/099011 WO2019037074A1 (zh) 2017-08-25 2017-08-25 Virtual reality interaction system and method, and computer storage medium
CN201780000973.7A CN109313484B (zh) 2017-08-25 2017-08-25 Virtual reality interaction system and method, and computer storage medium
CN202210083807.0A CN114527872B (zh) 2017-08-25 2017-08-25 Virtual reality interaction system and method, and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/099011 WO2019037074A1 (zh) 2017-08-25 2017-08-25 Virtual reality interaction system and method, and computer storage medium

Publications (1)

Publication Number Publication Date
WO2019037074A1 true WO2019037074A1 (zh) 2019-02-28

Family ID: 65205393

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/099011 WO2019037074A1 (zh) 2017-08-25 2017-08-25 Virtual reality interaction system and method, and computer storage medium

Country Status (2)

Country Link
CN (2) CN109313484B (zh)
WO (1) WO2019037074A1 (zh)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110108159A * 2019-06-03 2019-08-09 武汉灏存科技有限公司 Simulation system and method for large-space multi-user interaction
CN110989837A * 2019-11-29 2020-04-10 上海海事大学 Virtual reality system for cruise ship experience
CN111338481A * 2020-02-28 2020-06-26 武汉灏存科技有限公司 Data interaction system and method based on full-body motion capture
CN111381792A * 2020-03-12 2020-07-07 上海曼恒数字技术股份有限公司 Virtual reality data transmission method and system supporting multi-user collaboration
CN111796670A * 2020-05-19 2020-10-20 北京北建大科技有限公司 Large-space multi-user virtual reality interaction system and method
CN111988375A * 2020-08-04 2020-11-24 深圳市瑞立视多媒体科技有限公司 Terminal positioning method, apparatus, device and storage medium
CN112130660A * 2020-08-14 2020-12-25 青岛小鸟看看科技有限公司 Interaction method and system based on an all-in-one virtual reality device
CN112150246A * 2020-09-25 2020-12-29 刘伟 3D data acquisition system and application thereof
CN112256125A * 2020-10-19 2021-01-22 中国电子科技集团公司第二十八研究所 Motion capture system and method based on laser large-space positioning and optical-inertial complementarity
CN112423020A * 2020-05-07 2021-02-26 上海哔哩哔哩科技有限公司 Motion capture data distribution and acquisition method and system
CN114051148A * 2021-11-10 2022-02-15 拓胜(北京)科技发展有限公司 Virtual anchor generation method, apparatus and electronic device

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110471772B * 2019-08-19 2022-03-15 上海云绅智能科技有限公司 Distributed system, rendering method therefor, and client
CN110610547B * 2019-09-18 2024-02-13 瑞立视多媒体科技(北京)有限公司 Virtual-reality-based cockpit training method, system and storage medium
CN110609622A * 2019-09-18 2019-12-24 深圳市瑞立视多媒体科技有限公司 Method, system and medium for multi-user interaction combining 3D and virtual reality technology
CN111047710B * 2019-12-03 2023-12-26 深圳市未来感知科技有限公司 Virtual reality system, interactive device display method, and computer-readable storage medium
CN115114537B * 2022-08-29 2022-11-22 成都航空职业技术学院 Method for implementing an interactive virtual teaching aid based on file content recognition

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105323129A * 2015-12-04 2016-02-10 上海弥山多媒体科技有限公司 Home virtual reality entertainment system
CN105892686A * 2016-05-05 2016-08-24 刘昊 3D virtual reality broadcast interaction method and system
CN106534125A * 2016-11-11 2017-03-22 厦门汇鑫元软件有限公司 Method for implementing a LAN-based VR multi-user interaction system

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8730156B2 (en) * 2010-03-05 2014-05-20 Sony Computer Entertainment America Llc Maintaining multiple views on a shared stable virtual space
EP2152377A4 (en) * 2007-04-17 2013-07-31 Bell Helicopter Textron Inc COLLABORATIVE VIRTUAL REALITY SYSTEM USING MULTIPLE MOTION CAPTURE SYSTEMS AND MULTIPLE INTERACTIVE CLIENTS
KR20090043192A * 2007-10-29 2009-05-06 (주)인텔리안시스템즈 Remote control system and method
KR20130095904A * 2012-02-21 2013-08-29 (주)드리밍텍 Virtual environment management system and server therefor
CN103929479B * 2014-04-10 2017-12-12 惠州Tcl移动通信有限公司 Method and system for user interaction by simulating real scenes on a mobile terminal
US10007334B2 (en) * 2014-11-13 2018-06-26 Utherverse Digital Inc. System, method and apparatus of simulating physics in a virtual environment
CN104469442A * 2014-11-21 2015-03-25 天津思博科科技发展有限公司 Apparatus for group singing through intelligent terminals
US9769536B2 (en) * 2014-12-26 2017-09-19 System73, Inc. Method and system for adaptive virtual broadcasting of digital content
CN104866101B * 2015-05-27 2018-04-27 世优(北京)科技有限公司 Real-time interactive control method and apparatus for virtual objects
CN105450736B * 2015-11-12 2020-03-17 小米科技有限责任公司 Method and apparatus for connecting with virtual reality
CN106125903B * 2016-04-24 2021-11-16 林云帆 Multi-user interaction system and method
CN105915849A * 2016-05-09 2016-08-31 惠州Tcl移动通信有限公司 Virtual reality sports event playing method and system
CN106383578B * 2016-09-13 2020-02-04 网易(杭州)网络有限公司 Virtual reality system, and virtual reality interaction apparatus and method
CN106598229B * 2016-11-11 2020-02-18 歌尔科技有限公司 Virtual reality scene generation method and device, and virtual reality system
CN106843460B * 2016-12-13 2019-08-02 西北大学 Multi-camera multi-target position capture and positioning system and method
CN106843532A * 2017-02-08 2017-06-13 北京小鸟看看科技有限公司 Method and apparatus for implementing a virtual reality scene
CN106774949A * 2017-03-09 2017-05-31 北京神州四达科技有限公司 Collaborative simulation interaction method, apparatus and system
CN106843507B * 2017-03-24 2024-01-05 苏州创捷传媒展览股份有限公司 Virtual reality multi-user interaction method and system
CN107024995A * 2017-06-05 2017-08-08 河北玛雅影视有限公司 Multi-user virtual reality interaction system and control method thereof

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105323129A * 2015-12-04 2016-02-10 上海弥山多媒体科技有限公司 Home virtual reality entertainment system
CN105892686A * 2016-05-05 2016-08-24 刘昊 3D virtual reality broadcast interaction method and system
CN106534125A * 2016-11-11 2017-03-22 厦门汇鑫元软件有限公司 Method for implementing a LAN-based VR multi-user interaction system

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110108159A * 2019-06-03 2019-08-09 武汉灏存科技有限公司 Simulation system and method for large-space multi-user interaction
CN110108159B * 2019-06-03 2024-05-17 武汉灏存科技有限公司 Simulation system and method for large-space multi-user interaction
CN110989837A * 2019-11-29 2020-04-10 上海海事大学 Virtual reality system for cruise ship experience
CN110989837B * 2019-11-29 2023-03-24 上海海事大学 Virtual reality system for cruise ship experience
CN111338481A * 2020-02-28 2020-06-26 武汉灏存科技有限公司 Data interaction system and method based on full-body motion capture
CN111338481B * 2020-02-28 2023-06-23 武汉灏存科技有限公司 Data interaction system and method based on full-body motion capture
CN111381792A * 2020-03-12 2020-07-07 上海曼恒数字技术股份有限公司 Virtual reality data transmission method and system supporting multi-user collaboration
CN111381792B * 2020-03-12 2023-06-02 上海曼恒数字技术股份有限公司 Virtual reality data transmission method and system supporting multi-user collaboration
CN112423020A * 2020-05-07 2021-02-26 上海哔哩哔哩科技有限公司 Motion capture data distribution and acquisition method and system
CN111796670A * 2020-05-19 2020-10-20 北京北建大科技有限公司 Large-space multi-user virtual reality interaction system and method
CN111988375B * 2020-08-04 2023-10-27 瑞立视多媒体科技(北京)有限公司 Terminal positioning method, apparatus, device and storage medium
CN111988375A * 2020-08-04 2020-11-24 深圳市瑞立视多媒体科技有限公司 Terminal positioning method, apparatus, device and storage medium
CN112130660A * 2020-08-14 2020-12-25 青岛小鸟看看科技有限公司 Interaction method and system based on an all-in-one virtual reality device
US11720169B2 (en) 2020-08-14 2023-08-08 Qingdao Pico Technology Co., Ltd. Interaction method and system based on virtual reality equipment
CN112130660B * 2020-08-14 2024-03-15 青岛小鸟看看科技有限公司 Interaction method and system based on an all-in-one virtual reality device
CN112150246A * 2020-09-25 2020-12-29 刘伟 3D data acquisition system and application thereof
CN112256125B * 2020-10-19 2022-09-13 中国电子科技集团公司第二十八研究所 Motion capture system and method based on laser large-space positioning and optical-inertial complementarity
CN112256125A * 2020-10-19 2021-01-22 中国电子科技集团公司第二十八研究所 Motion capture system and method based on laser large-space positioning and optical-inertial complementarity
CN114051148A * 2021-11-10 2022-02-15 拓胜(北京)科技发展有限公司 Virtual anchor generation method, apparatus and electronic device

Also Published As

Publication number Publication date
CN114527872A (zh) 2022-05-24
CN109313484B (zh) 2022-02-01
CN109313484A (zh) 2019-02-05
CN114527872B (zh) 2024-03-08

Similar Documents

Publication Publication Date Title
WO2019037074A1 (zh) Virtual reality interaction system and method, and computer storage medium
CN109874021B Live-streaming interaction method, apparatus and system
JP6957215B2 Information processing apparatus, information processing method and program
US20220156986A1 (en) Scene interaction method and apparatus, electronic device, and computer storage medium
US10306212B2 (en) Methods and systems for capturing a plurality of three-dimensional sub-frames for use in forming a volumetric frame of a real-world scene
WO2019111817A1 Generation apparatus, generation method and program
WO2012134572A1 (en) Collaborative image control
KR20120086795A Augmented reality system and method for remotely sharing an augmented reality service
JP2013061937A Interaction combining a stereo camera and a stereo display
US20220067974A1 (en) Cloud-Based Camera Calibration
US10848597B1 (en) System and method for managing virtual reality session technical field
CN113282257B Method, terminal device, equipment and readable storage medium for synchronized display
KR20120086796A Augmented reality system and method for remotely sharing an augmented reality service using heterogeneous markers
CN112783700A Computer-readable medium for a network-based remote assistance system
JP2019022151A Information processing apparatus, image processing system, control method, and program
CN108765084B Synchronization processing method and apparatus for a virtual three-dimensional space
JP2017010536A5 Server control method and system
JP2019103126A Camera system, camera control apparatus, camera control method and program
Bortolon et al. Multi-view data capture for dynamic object reconstruction using handheld augmented reality mobiles
CN111562841B Remote online method, apparatus, device and storage medium for a virtual reality system
JP6149967B1 Video distribution server, video output apparatus, video distribution system, and video distribution method
WO2019037073A1 Data synchronization method, apparatus and server
KR101649754B1 Method for transmitting control signals in a distributed system for multi-view cameras, and distributed system for multi-view cameras
CN108989327B Virtual reality server system
US20190356758A1 (en) Methods for visualizing and interacting with a three dimensional object in a collaborative augmented reality environment and apparatuses thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17922819; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14/09/2020))
122 Ep: pct application non-entry in european phase (Ref document number: 17922819; Country of ref document: EP; Kind code of ref document: A1)