CN112019921A - Body motion data processing method applied to virtual studio - Google Patents
- Publication number
- CN112019921A (application CN202010903697.9A)
- Authority
- CN
- China
- Prior art keywords
- rendering server
- limb
- virtual
- control workstation
- rendering
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/44012—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
Abstract
A body motion data processing method for a virtual studio includes: connecting each large LED screen to a rendering server and each rendering server to a master control workstation; setting the N Leyard LED screens to a shared vertical-sync mode through a hardware synchronization card; setting the rendering servers' display refresh mode to a software-controlled refresh rate; placing the master control workstation and the rendering servers of each rendering channel in full frame-lock mode, with the master control workstation controlling the step to the next frame; capturing a target's body motion with a motion-capture device, which sends the captured motion to the master control workstation; when the master control workstation determines that the target's body motion has changed, sending the variation of the target's corresponding virtual figure to each rendering server; and, after confirming that every rendering server is ready, directing all rendering servers to output synchronously.
Description
Technical Field
The invention belongs to the field of virtual reality and relates in particular to a body motion data processing method for a virtual studio.
Background
With the development of digital technology, more and more film and television programs are recorded in virtual scenes. In virtual studios, which are widely used in broadcast television, three-dimensional virtual reality technology creates immersive virtual scenes and animated virtual characters, and real actors can interact with those characters on the same stage. This greatly increases a program's entertainment value while also cutting production cost and improving production efficiency.
Most current virtual studios use a green screen: virtual scenes and/or characters are composited into the green-screen area in post-production software, and the whole pipeline can run on a single machine, so the cost is low. During recording, however, the real actors can only infer the virtual scenes and/or characters from cue marks or off-camera monitors. This makes recording difficult to direct, weakens the actors' sense of presence, and requires lengthy post-production compositing, so the effective recording cost is high.
Disclosure of Invention
To address these problems with green-screen recording, an embodiment of the invention provides a body motion data processing method for a virtual studio that handles the studio's body motion data more effectively.
To this end, an embodiment of the present invention provides a method for processing body motion data in a virtual studio, including:
connecting N Leyard LED screens to N rendering servers, and connecting the N rendering servers to a master control workstation, to form a CAVE simulation environment space; connecting a motion-capture device to the master control workstation;
setting the N Leyard LED screens to a shared vertical-sync mode through a hardware synchronization card; setting the rendering servers' display refresh mode to a software-controlled refresh rate; placing the master control workstation and the rendering servers of each rendering channel in full frame-lock mode, with the master control workstation controlling the step to the next frame;
capturing a target's body motion with the motion-capture device, which sends the captured motion to the master control workstation; the motion-capture device comprises eight OptiTrack infrared cameras, mounted at different angles so that together they capture the target's body motion;
when the master control workstation determines that the target's body motion has changed, sending the variation of the target's corresponding virtual figure from the master control workstation to each rendering server;
and, after confirming that every rendering server is ready, directing all rendering servers, from the master control workstation, to output synchronously.
In some embodiments, the master control workstation determines whether the target's body motion has changed as follows:
after receiving body motion data from the motion-capture device, the master control workstation compares it with the previously received data to determine whether the target's body motion has changed;
if so, the state flag of the target's corresponding virtual figure is set to "changed".
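The comparison described above can be sketched in a few lines. This is an illustrative reconstruction only: the patent does not specify a data format, joint naming, tolerance, or comparison rule, so the `Pose` structure, the joint name, and the `tol` threshold below are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Pose:
    # joint name -> (x, y, z) position taken from the motion-capture stream
    joints: dict = field(default_factory=dict)

def pose_changed(prev: Pose, curr: Pose, tol: float = 1e-4) -> bool:
    """Return True if any joint moved by more than `tol` on any axis."""
    if prev.joints.keys() != curr.joints.keys():
        return True
    for name, (x, y, z) in curr.joints.items():
        px, py, pz = prev.joints[name]
        if abs(x - px) > tol or abs(y - py) > tol or abs(z - pz) > tol:
            return True
    return False

# The master keeps the last received frame and flags the avatar as
# "changed" only when a new frame differs, so unchanged frames cost
# no broadcast to the rendering servers.
last = Pose({"wrist_r": (0.10, 1.20, 0.30)})
new = Pose({"wrist_r": (0.10, 1.25, 0.30)})
state_changed = pose_changed(last, new)
```

A per-joint tolerance keeps sensor noise from triggering needless rebroadcasts; the right threshold would depend on the capture hardware.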
In some embodiments, sending the variation of the virtual figure corresponding to the target to each rendering server comprises:
the master control workstation generating an indication message from the variation of the target's virtual figure and broadcasting it to each rendering server.
In some embodiments, the method further comprises:
each rendering server, on receiving the broadcast indication message, applying the corresponding change to the virtual figure's body motion, returning a response message to the master control workstation confirming the change is complete, and entering a standby state.
In some embodiments, the method further comprises:
the master control workstation, after confirming that it has received a response message from every rendering server, broadcasting a switch instruction to all rendering servers directing them to display the virtual figure's body motion synchronously.
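The indication/response/switch exchange above amounts to a two-phase commit between the master and the rendering servers: every server prepares off-screen and acknowledges, and only then does the master trigger the simultaneous switch. The sketch below models this with in-process objects; the real system communicates over the network, and the class and method names here are invented for illustration.

```python
class RenderServer:
    """Stand-in for one rendering-channel server."""
    def __init__(self, name: str):
        self.name = name
        self.ready = False      # True after the change is prepared
        self.pending = None     # change prepared but not yet shown
        self.displayed = None   # what is currently on the LED panel

    def on_indication(self, delta):
        # Phase 1: apply the avatar change off-screen, ack, stand by.
        self.pending = delta
        self.ready = True
        return "ack"

    def on_switch(self):
        # Phase 2: swap the prepared frame onto the panel.
        self.displayed = self.pending
        self.ready = False

class Master:
    """Stand-in for the master control workstation."""
    def __init__(self, servers):
        self.servers = servers

    def broadcast_change(self, delta) -> bool:
        # Broadcast the variation and collect acknowledgements.
        acks = [s.on_indication(delta) for s in self.servers]
        # Only when every server has acked, broadcast the switch,
        # so all panels update on the same frame.
        if all(a == "ack" for a in acks):
            for s in self.servers:
                s.on_switch()
            return True
        return False

servers = [RenderServer(f"rs{i}") for i in range(3)]
master = Master(servers)
ok = master.broadcast_change({"arm_l": "raise"})
```

The point of the two phases is that no panel ever shows the new pose before its neighbours are ready, which is what keeps a character spanning several screens intact.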
In some embodiments, the method further comprises:
each rendering server fusing the target's virtual figure with a preset virtual three-dimensional background to form the image shown on its Leyard LED screen.
In some embodiments, each rendering server is connected to the master control workstation through a video synchronization card, so that the video synchronization cards provide hardware-synchronized display: each rendering server stands by until the master control workstation triggers the synchronized display of the virtual figure's body motion.
The technical solution has the following benefits: it provides a body motion data processing method for a virtual studio that captures a target's body motion with a motion-capture device, renders it on several hardware-synchronized rendering servers, and plays back the virtual-reality result across multiple displays.
Drawings
Fig. 1 is a schematic diagram of the operation of the method according to an embodiment of the invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
An embodiment of the invention provides a body motion data processing method for a virtual studio that uses a CAVE immersive environment built from multiple large LED screens. A host inside this environment directly sees the virtual scenes and animated virtual characters, can interact with them easily, and in particular can see the characters' body motions. This makes interaction far more engaging and program recording far more efficient, and the recorded footage needs little post-processing, which greatly reduces recording cost.
In this arrangement, each LED screen is driven by its own computer, so the virtual images on all screens must be seamlessly joined and synchronized. Body motion data acquired from third-party motion-capture hardware drives the animated character's movements, and those movements must be displayed smoothly and synchronously across the composite image formed by the tiled LED screens, so that the motion of a character spanning several screens appears complete and correct.
The solution builds on an AR virtual simulation environment, so a display environment is established first: its display hardware consists of N Leyard LED screens, N channel display workstations, and a master control workstation. Body motion is captured by a dedicated rig, specifically eight OptiTrack infrared cameras that capture images of the target and extract its body motion. As shown in Fig. 1, the N Leyard LED screens are tiled into a CAVE simulation display space, each screen is driven by a rendering server, and the N rendering servers are configured to display synchronously. The motion-capture device is connected to the master control workstation.
When the AR simulation software starts, the performer playing the animated character enters the capture area and begins performing. The motion-capture device captures the performer's movements and sends them to the master control workstation, and the multi-channel system then displays the body imagery on the N rendering servers with hardware-level synchronization.
Specifically, the method comprises the following steps:
(1) set the N LED screens to a shared vertical-sync mode through the hardware synchronization card;
(2) in each rendering-channel slave, set the display refresh mode so that the vertical refresh rate is the software refresh rate;
(3) place the master control workstation and the rendering servers of each rendering channel in full frame-lock mode, with the master control workstation controlling the step to the next frame;
(4) when the master control workstation receives stream data from the motion-capture device, compare it with the previously received data; if it has changed, set the state of the corresponding virtual body marker to "modified";
(5) during the virtual-marker state update, if a marker is found in the "modified" state, compress a message describing the marker's variation and broadcast it over the network to the rendering-channel slaves;
(6) each rendering-channel slave receives the virtual-marker message, passes it to its rendering processor to update the state of the body skeleton bound to that marker, then notifies the master that rendering preparation is complete and enters a waiting state;
(7) once the master confirms that all slaves are prepared, it broadcasts a message instructing them to output immediately;
(8) each rendering slave invokes its hardware synchronization card for the final image output, so the character's body motion is rendered synchronously at the display-hardware level.
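The compressed marker-variation message that the master broadcasts to the rendering-channel slaves might look like the sketch below. The patent does not define a wire format, so the JSON-plus-zlib encoding, the marker name, and the rotation field names are all assumptions made for illustration.

```python
import json
import zlib

def encode_marker_delta(marker_id: str, delta: dict) -> bytes:
    """Serialize and compress one marker's variation for broadcast."""
    payload = json.dumps({"marker": marker_id, "delta": delta}).encode()
    # zlib pays off once a full skeleton's worth of deltas is batched;
    # a single tiny delta may not actually shrink.
    return zlib.compress(payload)

def decode_marker_delta(message: bytes) -> dict:
    """Decompress and parse a broadcast marker-variation message (slave side)."""
    return json.loads(zlib.decompress(message).decode())

# Master side: encode one marker's rotation change and broadcast it.
msg = encode_marker_delta("elbow_l", {"rx": 0.12, "ry": -0.4, "rz": 0.0})
# Slave side: recover the variation and forward it to the renderer.
decoded = decode_marker_delta(msg)
```

Sending only the variation, rather than the full skeleton, keeps the per-frame broadcast small, which matters when every slave must receive and apply it within one frame interval.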
A further problem addressed by this embodiment is that a local host and a remote host cannot see the same frame of each other's interaction, which hinders interaction. The solution adopted is to render the remote video in three dimensions and display it on the local LED screens, so that the local host sees an immersive remote picture; the remote host, through the local camera feed, sees the local host displayed in sync with the remote scene. In other words, the remote picture is rendered and displayed in three dimensions locally.
When in work:
setting the multi-panel LED screen to a shared vertical-sync mode via the hardware synchronization card;
setting the display refresh mode of each rendering-channel server so that the vertical refresh rate is the software refresh rate, where the software refresh rate is the rate at which the software updates the picture;
placing the master control workstation and the rendering-channel servers in step-synchronized mode, with the master control workstation controlling the step to the next frame; in step-synchronized mode the displays of all rendering servers advance together, and the next picture is updated only after the current picture has been shown on every LED screen;
the master control workstation receives the remote live stream and pushes the stream data to a stream-push processing module;
the stream-push processing module broadcasts the live stream on the local network over UDP (User Datagram Protocol);
each rendering-channel server receives the live stream, notifies the master control workstation that rendering preparation is complete, and enters a waiting state; the on-site LED screens each display a different region of the same picture;
once the master control workstation confirms that all servers are prepared, it broadcasts a message instructing them to output immediately; each rendering slave invokes its hardware synchronization card for the final image output, achieving hardware-level synchronized rendering of the live stream.
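The UDP push of live-stream data to the rendering-channel servers can be sketched as follows. The patent names UDP broadcast but no further details; this sketch uses loopback unicast so it runs anywhere (on a real LAN the target would be a broadcast address such as 192.168.1.255 with `SO_BROADCAST` enabled), and the chunk content is an invented placeholder.

```python
import socket

def make_receiver() -> socket.socket:
    """A stand-in rendering server: a UDP socket on loopback."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", 0))   # let the OS pick a free port
    sock.settimeout(5)            # never block forever in this sketch
    return sock

def push_chunk(chunk: bytes, targets) -> None:
    """Stream-push module: send one chunk of live-stream data out."""
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for addr in targets:
            sender.sendto(chunk, addr)
    finally:
        sender.close()

# One "rendering server" listening on loopback receives a chunk.
rx = make_receiver()
push_chunk(b"frame-0001", [rx.getsockname()])
data, _ = rx.recvfrom(65535)
rx.close()
```

UDP fits here because a late datagram is useless anyway: the step-synchronized display would rather drop a stale chunk than stall every screen waiting for a retransmission.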
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (7)
1. A body motion data processing method applied to a virtual studio, characterized by comprising:
connecting N Leyard LED screens to N rendering servers, and connecting the N rendering servers to a master control workstation, to form a CAVE simulation environment space; connecting a motion-capture device to the master control workstation;
setting the N Leyard LED screens to a shared vertical-sync mode through a hardware synchronization card; setting the rendering servers' display refresh mode to a software-controlled refresh rate; placing the master control workstation and the rendering servers of each rendering channel in full frame-lock mode, with the master control workstation controlling the step to the next frame;
capturing a target's body motion with the motion-capture device, which sends the captured motion to the master control workstation, wherein the motion-capture device comprises eight OptiTrack infrared cameras mounted at different angles so that together they capture the target's body motion;
when the master control workstation determines that the target's body motion has changed, sending the variation of the target's corresponding virtual figure from the master control workstation to each rendering server;
and, after confirming that every rendering server is ready, directing all rendering servers, from the master control workstation, to output synchronously.
2. The method of claim 1, wherein the master control workstation determines whether the target's body motion has changed by:
comparing, after receiving body motion data from the motion-capture device, the received data with the previously received data to determine whether the target's body motion has changed;
and, if so, setting the state flag of the target's corresponding virtual figure to "changed".
3. The method of claim 1, wherein sending the variation of the virtual figure corresponding to the target to each rendering server comprises:
the master control workstation generating an indication message from the variation of the target's virtual figure and broadcasting it to each rendering server.
4. The method of claim 3, further comprising:
each rendering server, on receiving the broadcast indication message, applying the corresponding change to the virtual figure's body motion, returning a response message to the master control workstation confirming the change is complete, and entering a standby state.
5. The method of claim 4, further comprising:
the master control workstation, after confirming that it has received a response message from every rendering server, broadcasting a switch instruction to all rendering servers directing them to display the virtual figure's body motion synchronously.
6. The method of claim 5, further comprising:
each rendering server fusing the target's virtual figure with a preset virtual three-dimensional background to form the image shown on its Leyard LED screen.
7. The method of claim 1, wherein each rendering server is connected to the master control workstation through a video synchronization card, so that hardware-synchronized display is achieved through the video synchronization cards, each rendering server standing by until the master control workstation triggers the synchronized display of the virtual figure's body motion.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010903697.9A CN112019921A (en) | 2020-09-01 | 2020-09-01 | Body motion data processing method applied to virtual studio |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112019921A | 2020-12-01 |
Family
ID=73517110
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010903697.9A Pending CN112019921A (en) | 2020-09-01 | 2020-09-01 | Body motion data processing method applied to virtual studio |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112019921A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114501054A (en) * | 2022-02-11 | 2022-05-13 | 腾讯科技(深圳)有限公司 | Live broadcast interaction method, device, equipment and computer readable storage medium |
CN114900678A (en) * | 2022-07-15 | 2022-08-12 | 北京蔚领时代科技有限公司 | VR end-cloud combined virtual concert rendering method and system |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103049926A (en) * | 2012-12-24 | 2013-04-17 | 广东威创视讯科技股份有限公司 | Distributed three-dimensional rendering system |
CN105704419A (en) * | 2014-11-27 | 2016-06-22 | 程超 | Method for human-human interaction based on adjustable template profile photos |
CN106251396A (en) * | 2016-07-29 | 2016-12-21 | 迈吉客科技(北京)有限公司 | The real-time control method of threedimensional model and system |
CN110018874A (en) * | 2019-04-09 | 2019-07-16 | Oppo广东移动通信有限公司 | Vertical synchronization method, apparatus, terminal and storage medium |
CN110267028A (en) * | 2019-06-24 | 2019-09-20 | 中冶智诚(武汉)工程技术有限公司 | A kind of signal synchronous display system for five face LED-CAVE |
US10529113B1 (en) * | 2019-01-04 | 2020-01-07 | Facebook Technologies, Llc | Generating graphical representation of facial expressions of a user wearing a head mounted display accounting for previously captured images of the user's facial expressions |
CN210021183U (en) * | 2019-05-09 | 2020-02-07 | 浙江棱镜全息科技有限公司 | Immersive interactive panoramic holographic theater and performance system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | |
Application publication date: 2020-12-01