CN108989327B - Virtual reality server system - Google Patents


Info

Publication number
CN108989327B
CN108989327B (granted publication of application CN201810885309.1A)
Authority
CN
China
Prior art keywords
server
information
virtual reality
cloud
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810885309.1A
Other languages
Chinese (zh)
Other versions
CN108989327A (en)
Inventor
孟宪民 (Meng Xianmin)
李小波 (Li Xiaobo)
赵德贤 (Zhao Dexian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hengxin Shambala Culture Co ltd
Original Assignee
Hengxin Shambala Culture Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hengxin Shambala Culture Co ltd filed Critical Hengxin Shambala Culture Co ltd
Priority to CN201810885309.1A priority Critical patent/CN108989327B/en
Publication of CN108989327A publication Critical patent/CN108989327A/en
Application granted granted Critical
Publication of CN108989327B publication Critical patent/CN108989327B/en
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/131: Protocols for games, networked simulations or virtual reality

Abstract

The application relates to the technical field of virtual reality, and in particular to a virtual reality server system. The system adopts a distributed transmit-and-receive model, so in terms of network transmission the traffic load on each server is relatively small. When all servers run simultaneously, the load on each server is effectively reduced, and algorithmic latency drops during the various computations. The multi-server design also supports video streams for a variety of VR devices, because data requests from different VR clients can be processed on different servers. With a small network load, low latency, and support for heterogeneous data requests, different characters can be segmented and presented effectively without affecting the VR display, realizing a multi-user online virtual reality experience.

Description

Virtual reality server system
Technical Field
The application relates to the technical field of virtual reality, in particular to a virtual reality server system.
Background
Virtual Reality (VR) technology is a computer simulation technology with which a virtual world can be created and experienced. A computer generates a simulated environment; specifically, a system simulation of interactive three-dimensional dynamic views and entity behavior based on multi-source information fusion.
Currently, virtual reality technology uses computers to generate realistic three-dimensional visual, auditory, tactile and even gustatory sensations, so that a user interacts with objects in the virtual world through appropriate devices, such as a mouse, keyboard, screen, helmet, glasses, stereo headphones or sensing gloves, and is immersed in the simulated environment. However, prior-art virtual reality architectures support neither multiple platforms nor multiple simultaneous online users, so the virtual reality technology in the prior art cannot deliver a multi-user online virtual reality experience.
Therefore, in virtual reality technology, how to realize a multi-user online virtual reality experience is a technical problem that those skilled in the art urgently need to solve.
Disclosure of Invention
The application provides a virtual reality server system for realizing online virtual reality experience of multiple persons in a virtual reality technology.
In order to solve the technical problem, the application provides the following technical scheme:
a virtual reality server system, comprising: set up in the low latitude cloud server in VR studio, the low latitude cloud server includes: the system comprises a data acquisition input server, a camera attitude server, a role segmentation server and a motion capture server; the data acquisition input server collects data information of the VR studio; the camera attitude server obtains camera attitude information of the VR studio according to the data information and sends the camera attitude information to the VR client or the cloud server; the role segmentation server performs image matting and segmentation on the video information in the data information to obtain independent role information, and sends the independent role information to a VR client or a cloud server; and the motion capture server acquires the motion information of the role of the VR studio according to the data information and sends the motion information to the VR client or the cloud server.
The virtual reality server system as described above, wherein preferably the camera pose server identifies and locates the camera position of the VR studio from the video information in the data information to obtain the camera pose information.
The virtual reality server system as described above, wherein preferably the low-altitude cloud server further includes an expression capture server, which obtains character expression information from the data information and sends it to the VR client or the cloud server.
The virtual reality server system as described above, wherein preferably the low-altitude cloud server further includes a motion analysis server, which obtains skeleton information of the character from the depth video information in the data information and sends it to the VR client or the cloud server.
The virtual reality server system as described above, wherein preferably the motion analysis server matches the obtained skeleton information of the character against the motion information in a motion database and sends the number of the matched motion to the VR client or the cloud server.
The virtual reality server system as described above, wherein preferably the low-altitude cloud server further includes an acousto-optic synchronization server, which obtains acousto-optic information of the VR studio from the data information and sends it to the VR client or the cloud server.
The virtual reality server system as described above, preferably further comprising a broadcast control cloud data distribution server, which sends the data stored by the cloud server to the VR client through the low-altitude cloud server.
The virtual reality server system as described above, wherein preferably the cloud server includes a special format packet forwarding server and an information storage server; the special format packet forwarding server converts the format of the non-video data stored by the information storage server and sends it to the broadcast control cloud data distribution server.
The virtual reality server system as described above, wherein preferably the cloud server further includes a video stream conversion server, which encodes the video data stored by the information storage server, converts its format and sends it to the broadcast control cloud data distribution server.
The virtual reality server system as described above, wherein preferably the cloud server further includes a website operation server for operating the cloud website.
Compared with the background art, the virtual reality server system provided by the embodiments of the application adopts a distributed transmit-and-receive model, so in terms of network transmission the traffic load on each server is relatively small. When all servers run simultaneously, the load on each server is effectively reduced, and algorithmic latency drops during the various computations. The multi-server design also supports video streams for a variety of VR devices, because data requests from different VR clients can be processed on different servers. With a small network load, low latency, and support for heterogeneous data requests, different characters can be segmented and presented effectively without affecting the VR display, realizing a multi-user online virtual reality experience.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them.
Fig. 1 is a schematic diagram of a virtual reality server system according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative only and should not be construed as limiting the invention.
The VR studio digitally synthesizes, in real time, a computer-generated virtual three-dimensional scene with the moving images of characters shot on site by a camera, so that the characters and the virtual background change synchronously. This achieves seamless fusion of character and virtual background and yields a well-composited picture.
One or more of a camera, a motion capture device, an expression capture device, a light capture device, a laser positioning device and the like need to be arranged in the VR studio. Cameras fall into two types: common cameras, which collect ordinary video information, i.e. video image information; and depth cameras, which obtain depth information, such as the skeletal information of a character (actor).
In addition, the virtual reality server system provided in the embodiment of the present application requires several low-altitude cloud servers to be placed in the VR studio. As shown in Fig. 1, the low-altitude cloud servers include one or more of a data acquisition input server 1, a camera pose server 2, a character segmentation server 3, a motion capture server 4, an expression capture server 5, a motion analysis server 6 and an acousto-optic synchronization server 7.
The data acquisition input server 1 collects the data information of the VR studio, for example the video information shot by a common camera of the VR studio or the depth information shot by a depth camera. It can also collect the camera pose information that the laser positioning equipment acquires for the camera, the motion information of the characters captured by the motion capture equipment, the character expression information captured by the expression capture equipment, the acousto-optic information captured by the light capture equipment, and so on.
After collecting the data information of the VR studio, the data acquisition input server 1 sends it to the corresponding server. For example, if the collected information is already camera pose information, it is sent directly to the camera pose server 2. If what was acquired is not camera pose information but video information collected by the camera, the video information is sent to the camera pose server 2, which identifies and locates the camera position of the VR studio from the video information or the camera picture information to obtain the camera pose information. The camera pose server 2 can then transmit the real camera pose captured at every moment to the VR client, where the virtual camera pose in the client's engine corresponds to the real camera pose one to one, so that the camera viewing angles in the real and virtual environments stay consistent. Alternatively, the camera pose server 2 may transmit the camera pose information to the cloud server for storage. The camera pose information transmitted here is a camera pose data stream.
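The one-to-one mapping between the captured camera pose stream and the engine's virtual camera can be sketched as follows. This is a minimal illustration; the class and field names are hypothetical and not taken from the patent.

```python
from dataclasses import dataclass


@dataclass
class CameraPose:
    """One sample of the real camera's pose stream (position plus orientation)."""
    x: float
    y: float
    z: float
    yaw: float
    pitch: float
    roll: float


class VirtualCamera:
    """Mirrors the real studio camera so real and virtual viewing angles agree."""

    def __init__(self):
        self.pose = None

    def apply(self, pose: CameraPose):
        # One-to-one mapping: the engine camera simply adopts the captured pose.
        self.pose = pose


def sync_stream(poses, camera: VirtualCamera) -> CameraPose:
    """Replay a captured pose stream onto the virtual camera, frame by frame."""
    for p in poses:
        camera.apply(p)
    return camera.pose
```

In a real engine the `apply` step would also convert coordinate conventions; here it only records the adopted pose.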
In addition, after the character segmentation server 3 receives the data information sent by the data acquisition input server 1, it obtains independent character information from the video information in the data information. For example, video in a VR studio is shot in a large green-screen environment, so each character can be matted out individually after segmentation. The independent character information is the result of the character segmentation server 3 matting and segmenting each character individually; it is then sent to the VR client or the cloud server.
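The green-screen matting idea can be sketched with a simple channel-dominance rule over RGB frames. The thresholds below are illustrative assumptions, not values from the patent; a production keyer would work in a chroma space and soften edges.

```python
import numpy as np


def green_screen_mask(frame: np.ndarray) -> np.ndarray:
    """Return a boolean foreground mask for an RGB frame shot on a green screen.

    Pixels whose green channel clearly dominates red and blue are treated as
    background (the green curtain); everything else is kept as character."""
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    background = (g > 100) & (g > r + 40) & (g > b + 40)
    return ~background


def extract_character(frame: np.ndarray) -> np.ndarray:
    """Zero out the background, keeping only the segmented character pixels."""
    mask = green_screen_mask(frame)
    out = frame.copy()
    out[~mask] = 0
    return out
```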
In addition, the motion capture server 4 receives the data information sent by the data acquisition input server 1, which may include the motion information of a character captured by the motion capture equipment. The motion capture server 4 may send the character's motion information directly to the VR client, which uses a corresponding plug-in to play the motion of a given 3D character in real time, or it may send the motion information to the cloud server for storage.
On this basis, in order to present the 3D characters to the VR client more vividly and comprehensively, an expression capture server 5 can also be provided, which obtains character expression information from the data information. For example, a number of special infrared cameras may be arranged in the VR studio, and character expression information captured by dedicated facial expression capture equipment is transmitted to the data acquisition input server 1 as data information. The data acquisition input server 1 then sends the received character expression information to the VR client through the expression capture server 5, and the VR client uses a corresponding plug-in to control the expression of a given 3D character, or the expression information is sent to the cloud server for storage. In another embodiment, the expression capture server 5 may obtain the character expression information by capturing expressions in the video information collected by the data acquisition input server 1, and then send it to the VR client or to the cloud server for storage.
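How a client plug-in might apply captured expression data to a 3D character's blendshape channels can be sketched as follows. The channel names and the clamping rule are assumptions for illustration; the patent does not specify the expression data format.

```python
def apply_expression(base_weights: dict, captured: dict) -> dict:
    """Blend captured facial-expression weights onto a character's blendshape
    channels, clamping each captured weight into the valid [0, 1] range.

    Channels absent from the captured frame keep their base values."""
    merged = dict(base_weights)
    for channel, weight in captured.items():
        merged[channel] = max(0.0, min(1.0, weight))
    return merged
```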
A depth camera can also be arranged in the VR studio to capture the skeleton information of a character. The captured skeleton information is sent to the data acquisition input server 1 as data information, and the data acquisition input server 1 forwards it to the VR client or the cloud server through the motion analysis server 6. For example, the skeleton information of a character (i.e. the character's motion trajectory) captured by the depth camera is analyzed with deep learning and artificial intelligence algorithms and matched against the motion information in a motion database; the number of the closest motion is selected and sent to the VR client, which then plays back to the user the best-matching motion designed by the art team according to the received motion number, or the selected motion number is sent to the cloud server for storage.
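The database matching step can be sketched as a nearest-neighbor search over flattened skeleton vectors. The Euclidean metric and the data layout are assumptions for illustration; the patent only specifies that the closest motion's number is selected and transmitted.

```python
import math


def match_motion(skeleton, motion_db: dict):
    """Return the number (key) of the motion whose reference skeleton vector is
    closest, by Euclidean distance, to the captured skeleton vector."""
    best_id, best_dist = None, math.inf
    for motion_id, reference in motion_db.items():
        dist = math.dist(skeleton, reference)
        if dist < best_dist:
            best_id, best_dist = motion_id, dist
    return best_id
```

Sending only the matched motion number, rather than raw skeleton data, keeps the per-frame payload to the VR client small, which fits the low-latency goal of the distributed design.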
Because light and sound differ greatly between the real environment and the virtual environment, special capture equipment can also be set up in the VR studio to capture information such as the positions of lights and sound sources and the direction of light divergence, so that light and sound in the real environment are mapped into the virtual environment and the two correspond one to one, letting the VR client express light and sound more realistically. The captured light and sound information is sent to the data acquisition input server 1 as data information, and the data acquisition input server 1 sends the acousto-optic information to the VR client or the cloud server through the acousto-optic synchronization server 7. The VR client uses corresponding plug-ins to present the received acousto-optic information. At this point, light and shade remain consistent when a person in the real environment interacts with a character in the virtual environment, so that the 3D model characters in the virtual environment and real people fuse more realistically in the VR client's 3D environment.
Because the virtual reality server system provided in the embodiments of the application adopts a distributed transmit-and-receive model, in terms of network transmission the traffic load on each server is relatively small. When all servers run simultaneously, the load on each server is effectively reduced, and algorithmic latency drops during the various computations. The multi-server design also supports video streams for a variety of VR devices, because data requests from different VR clients can be processed on different servers. With a small network load, low latency, and support for heterogeneous data requests, different characters can be segmented and presented effectively without affecting the VR display, realizing a multi-user online virtual reality experience.
The data stored in the cloud server is distributed and played back. Specifically, a broadcast control cloud data distribution server 8 can be provided, which sends the data stored in the cloud server to the VR client through the low-altitude cloud servers, for example through the camera pose server 2, the character segmentation server 3, the motion capture server 4, the expression capture server 5, the motion analysis server 6 and the acousto-optic synchronization server 7.
For example, when a large number of users request video data, the broadcast control cloud data distribution server 8 can be used effectively: each low-altitude cloud server is responsible for sending video data to the VR clients and VR all-in-one users within its controllable set of connections, so that millions or even tens of millions of users can simultaneously watch, online and in real time, the video streams sent from the VR studio through the multiple low-altitude cloud servers.
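One way such a distribution layer might spread clients across the pool of low-altitude cloud servers is a stable hash-based assignment, so each client always reconnects to the same server while the load spreads over the pool. This is a sketch under assumed naming; the patent does not specify the assignment policy.

```python
import hashlib


def assign_server(client_id: str, servers: list) -> str:
    """Stable hash-based assignment of a VR client to one distribution server.

    Hashing the client id gives a deterministic choice, so the same client
    always lands on the same server, while distinct clients spread out."""
    digest = hashlib.sha256(client_id.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(servers)
    return servers[index]
```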
On the basis of the foregoing embodiment, the cloud server may specifically include a website operation server 9, a video stream conversion server 10, an information storage server 11 and a special format packet forwarding server 12. The website operation server 9 operates the cloud website. The video stream conversion server 10 encodes the video data stored in the information storage server 11, converts its format and sends it to the broadcast control cloud data distribution server 8. The special format packet forwarding server 12 converts the format of the non-video data stored in the information storage server 11, for example into Mp4 or rmvb files, and sends the converted data to the broadcast control cloud data distribution server 8.
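The split between the two conversion paths can be sketched as a simple dispatch by data type before records reach the broadcast control cloud data distribution server 8. The string labels and the `kind` field are a hypothetical schema for illustration only.

```python
def route_stored_data(item: dict) -> str:
    """Route a record from the information storage server to the appropriate
    conversion server (names are illustrative labels, not from the patent)."""
    if item.get("kind") == "video":
        # Video data goes through encoding and format conversion.
        return "video_stream_conversion_server"
    # Everything else is wrapped by the special format packet forwarding server.
    return "special_format_packet_forwarding_server"
```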
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only a single independent technical solution; the specification is written this way merely for clarity. Those skilled in the art should take the specification as a whole, and the technical solutions of the embodiments may be combined appropriately to form other embodiments that those skilled in the art can understand.

Claims (9)

1. A virtual reality server system, comprising a low-altitude cloud server deployed in a VR studio, the low-altitude cloud server comprising: a data acquisition input server, a camera pose server, a character segmentation server and a motion capture server;
the data acquisition input server collects data information of the VR studio and sends the data information to the corresponding server;
the camera pose server obtains camera pose information of the VR studio according to the data information and sends the camera pose information to the VR client or the cloud server;
the character segmentation server mats and segments the video information in the data information to obtain independent character information, and sends the independent character information to the VR client or the cloud server;
the motion capture server obtains motion information of the characters of the VR studio according to the data information and sends the motion information to the VR client or the cloud server;
the system further comprising: a broadcast control cloud data distribution server, which sends the data stored by the cloud server to the VR client through the low-altitude cloud server.
2. The virtual reality server system of claim 1, wherein the camera pose server identifies and locates the camera position of the VR studio from the video information in the data information to obtain the camera pose information.
3. The virtual reality server system of claim 1, wherein the low-altitude cloud server further comprises: an expression capture server, which acquires character expression information according to the data information and sends the character expression information to the VR client or the cloud server.
4. The virtual reality server system of claim 1, wherein the low-altitude cloud server further comprises: a motion analysis server, which acquires skeleton information of the character according to the depth video information in the data information and sends the skeleton information to the VR client or the cloud server.
5. The virtual reality server system of claim 4, wherein the motion analysis server obtains the skeleton information of the character, matches the skeleton information with the motion information in a motion database, and sends the number of the motion matched in the motion database to the VR client or the cloud server.
6. The virtual reality server system of claim 1, wherein the low-altitude cloud server further comprises: an acousto-optic synchronization server, which acquires acousto-optic information of the VR studio according to the data information and sends the acousto-optic information to the VR client or the cloud server.
7. The virtual reality server system of any one of claims 1-6, wherein the cloud server comprises: a special format packet forwarding server and an information storage server, wherein the special format packet forwarding server converts the format of the non-video data stored by the information storage server and sends the non-video data to the broadcast control cloud data distribution server.
8. The virtual reality server system of claim 7, wherein the cloud server further comprises: a video stream conversion server, which encodes the video data stored by the information storage server, converts its format and sends the video data to the broadcast control cloud data distribution server.
9. The virtual reality server system of claim 8, wherein the cloud server further comprises: a website operation server for operating the cloud website.
CN201810885309.1A 2018-08-06 2018-08-06 Virtual reality server system Active CN108989327B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810885309.1A CN108989327B (en) 2018-08-06 2018-08-06 Virtual reality server system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810885309.1A CN108989327B (en) 2018-08-06 2018-08-06 Virtual reality server system

Publications (2)

Publication Number Publication Date
CN108989327A CN108989327A (en) 2018-12-11
CN108989327B (en) 2021-04-02

Family

ID=64555711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810885309.1A Active CN108989327B (en) 2018-08-06 2018-08-06 Virtual reality server system

Country Status (1)

Country Link
CN (1) CN108989327B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110290290A (en) * 2019-06-21 2019-09-27 深圳迪乐普数码科技有限公司 Implementation method, device, computer equipment and the storage medium of the studio cloud VR

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5841980A (en) * 1996-05-15 1998-11-24 Rtime, Inc. Distributed system for communication networks in multi-user applications
CN101192308A (en) * 2007-03-28 2008-06-04 腾讯科技(深圳)有限公司 Roles animations accomplishing method and system
KR20100009947A (en) * 2008-07-21 2010-01-29 삼성전자주식회사 Apparatus and method for mutual connection of virtual reality services
KR101312268B1 (en) * 2010-12-24 2013-09-25 주식회사 케이티 Method, cloud computing server, and cloud computing system for providing game service in cloud computing environment
US9721427B2 (en) * 2014-09-23 2017-08-01 Bally Gaming, Inc. System and method for positionally accurate gaming content
CN104618336B (en) * 2014-12-30 2018-05-18 广州酷狗计算机科技有限公司 A kind of account management method, equipment and system
CN105208458B (en) * 2015-09-24 2018-10-02 广州酷狗计算机科技有限公司 Virtual screen methods of exhibiting and device
CN205581785U (en) * 2016-04-15 2016-09-14 向京晶 Indoor virtual reality interactive system of many people
CN106504120A (en) * 2016-11-08 2017-03-15 国网上海市电力公司 Virtual reality Production Managementsystem For Electricpower Network
CN107438077A (en) * 2017-08-15 2017-12-05 合肥爱吾宠科技有限公司 The internet game method of mobile communication terminal

Also Published As

Publication number Publication date
CN108989327A (en) 2018-12-11

Similar Documents

Publication Publication Date Title
RU2621644C2 (en) World of mass simultaneous remote digital presence
US10650590B1 (en) Method and system for fully immersive virtual reality
KR101713772B1 (en) Apparatus and method for pre-visualization image
TWI752502B (en) Method for realizing lens splitting effect, electronic equipment and computer readable storage medium thereof
CN105939481A (en) Interactive three-dimensional virtual reality video program recorded broadcast and live broadcast method
CN110602517B (en) Live broadcast method, device and system based on virtual environment
CN102340690A (en) Interactive television program system and realization method
Normand et al. Full body acting rehearsal in a networked virtual environment—A case study
WO2018222500A1 (en) Methods and systems for customizing virtual reality data
CN108961368A (en) The method and system of real-time live broadcast variety show in three-dimensional animation environment
CN115118880A (en) XR virtual shooting system based on immersive video terminal is built
US20220139050A1 (en) Augmented Reality Platform Systems, Methods, and Apparatus
CN108989327B (en) Virtual reality server system
KR20210084248A (en) Method and apparatus for providing a platform for transmitting vr contents
KR102200239B1 (en) Real-time computer graphics video broadcasting service system
KR20160136160A (en) Virtual Reality Performance System and Performance Method
CN116744027A (en) Meta universe live broadcast system
CN103198519A (en) Virtual character photographic system and virtual character photographic method
JP2020162084A (en) Content distribution system, content distribution method, and content distribution program
CN112423035A (en) Method for automatically extracting visual attention points of user when watching panoramic video in VR head display
CN206757536U (en) More people's virtual reality interactive systems
CN111063034B (en) Time domain interaction method
WO2023032085A1 (en) Video transmission system, terminal device, and video transmission method
Fadzli et al. Compression in Dynamic Scene Tracking and Moving Human Detection for Life-Size Telepresence
IL289178A (en) Advanced multimedia system for analysis and accurate emulation of live events

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100007 101, 1st floor, building 3, No.2, zangjingguan Hutong, Dongcheng District, Beijing

Applicant after: HENGXIN SHAMBALA CULTURE Co.,Ltd.

Address before: 100097 North District, 11 / F, Newton office area, 25 lantianchang South Road, Haidian District, Beijing

Applicant before: HENGXIN SHAMBALA CULTURE Co.,Ltd.

GR01 Patent grant