CN115209172A - XR-based remote interactive performance method - Google Patents

XR-based remote interactive performance method

Info

Publication number
CN115209172A
CN115209172A (application CN202210819362.8A)
Authority
CN
China
Prior art keywords
meeting place
branch
main
place
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210819362.8A
Other languages
Chinese (zh)
Other versions
CN115209172B (en)
Inventor
王炜
谢超平
姚仕元
张琪浩
罗天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Sobey Digital Technology Co Ltd
Original Assignee
Chengdu Sobey Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Sobey Digital Technology Co Ltd
Priority to CN202210819362.8A
Publication of CN115209172A
Application granted
Publication of CN115209172B
Active legal status
Anticipated expiration legal status

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/2187 - Live feed
    • H04N 21/43076 - Synchronising the rendering of the same content streams on multiple devices
    • H04N 21/4312 - Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors
    • H04N 21/4383 - Accessing a communication channel
    • H04N 21/44012 - Rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H04N 21/44016 - Splicing one content stream with another, e.g. for substituting a video clip
    • H04N 21/6437 - Real-time Transport Protocol [RTP]
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses an XR-based remote interactive performance method, belonging to the field of content production and comprising the following steps. A1: produce a virtual background. A2: plan the region to be spliced. A3: perform XR rendering at the main venue and the branch venue. A4: the main-venue camera shoots the virtual-real fused scene at the main venue, and the branch-venue camera shoots the virtual-real fused scene on the L-shaped screen. A5: mix the branch-venue camera picture with the branch-venue singer's vocals, then encode and transmit. A6: the main venue receives the streams transmitted from the branch venue, decodes them, and splices the pictures into a composite. A7: perform VR implantation on the spliced video, then render, output and play it. The invention achieves a cross-space, lifelike performance in which performers at the main and branch venues can interact physically, mic-link and interact with the audience; it supports shot switching during an XR performance, closes the distance between the artistic performance and remote audiences while increasing the audience's immersive experience, and presents striking digital creative content.

Description

XR-based remote interactive performance method
Technical Field
The invention relates to the field of content production, in particular to an XR-based remote interactive performance method.
Background
Existing real-time video link-ups include variety shows, news link-ups, video conferences, and live-stream mic-linking (lianmai). These open a visual window, much like a video conference; for performance events they do not consider the fusion of people and scenery, so the sense of reality is weak and the immersive experience is poor.
XR (Extended Reality) is an umbrella term for environments and interactions that combine virtual and real elements. XR is generally considered to include AR (augmented reality), VR (virtual reality) and MR (mixed reality). In the content-production domain it denotes a virtual production mode that uses platform technologies such as camera tracking systems, virtual studio software platforms, media servers and real-time rendering engines to place performers inside a virtual world in real time, without a green screen or a post-production process.
In XR production, a high-quality LED screen (usually a curved or dome-shaped screen) displays a 3D virtual environment preloaded by a graphics engine, and the LEDs simultaneously form the background of a film or live event. With an accurate camera tracking system added, the camera can move around real and virtual elements seamlessly integrated within the LED environment, creating a fused, immersive illusion for viewers watching through the camera. The immersive experience is strong, but XR currently cannot support remote collaborative production such as mic-linking; it is mostly used for recorded programs, and multi-camera shot switching is difficult.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing an XR-based remote interactive performance method that enables multi-venue linked cloud performances. Instead of the traditional broadcast-television window link-up, it achieves a cross-space, lifelike performance in which performers at the main and branch venues can interact physically, mic-link and interact with the audience; it supports shot switching during an XR performance, closes the distance between the artistic performance and remote audiences while increasing immersion, and presents striking digital creative content.
The purpose of the invention is realized by the following scheme:
an XR-based method for remote interactive performance is characterized in that after XR systems are respectively set up in a main meeting place and a branch meeting place, the following interactive performance processes of the main meeting place and the branch meeting place are executed:
step A1: making a virtual background for background pictures of performers in the main meeting place and the branch meeting place;
step A2: planning a region to be spliced, and splitting the virtual background into two parts, wherein one part is a picture part displayed by a main meeting place, and the other part is a picture displayed by an L screen of a branch meeting place;
step A3: b, rendering by the main meeting place and the branch meeting place XR, respectively rendering and mapping the virtual background split in the step A2 on a back screen of the main meeting place and an L screen of the branch meeting place, and keeping the pictures of the main meeting place synchronous through time codes and frame alignment;
step A4: the main meeting place camera carries out main meeting place virtual-real fusion to shoot real virtual scenery, and the branch meeting place camera is responsible for shooting real virtual scenery through L-screen virtual-real fusion;
step A5: mixing the picture of the camera in the branch meeting place with the voice of the singer in the branch meeting place, and then carrying out coding transmission;
step A6: the main meeting place receives the streams transmitted from the branch meeting places, decodes the streams, and carries out splicing and synthesis on pictures by the all-in-one machine to form complete pictures which are monitored by the all-in-one machine;
step A7: and performing VR implantation on the spliced video, rendering and outputting, adopting an RTC mode, pushing the video to the cloud by the integrated machine, playing by a player in the branch meeting place, displaying a picture on a large screen in the branch meeting place, and playing audio by a plurality of sound boxes.
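As an illustration only, the hand-off of steps A5 and A6 between the venues can be sketched as follows. None of these function names come from the patent, and JSON stands in for a real audio/video codec:

```python
import json

def branch_encode(picture, vocals):
    # A5: mix the branch-venue camera picture with the singer's vocals
    # and encode them for transmission (JSON stands in for a real codec).
    return json.dumps({"picture": picture, "vocals": vocals})

def main_receive_and_splice(main_picture, branch_stream):
    # A6: decode the branch stream and splice the two pictures into one
    # composite frame; "|" marks the planned splice seam in this toy model.
    branch = json.loads(branch_stream)
    return {"frame": main_picture + "|" + branch["picture"],
            "audio": branch["vocals"]}

stream = branch_encode("branch-cam-picture", "branch-vocals")
composite = main_receive_and_splice("main-cam-picture", stream)
```

The composite then feeds step A7, where the real system performs VR implantation and pushes the result to the cloud via RTC.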
Further, the method also includes executing the following main-venue/branch-venue mic-linking production flow:
step B1: the singers at the main and branch venues sing;
step B2: the branch venue's sound and picture are composited as in step A5, and the picture part is spliced with the main venue's picture;
step B3: the main venue receives the audio and video streams transmitted from the branch venue and decodes them; the all-in-one unit synchronously splices the audio, outputs the complete song, and performs audio-video synchronization;
step B4: the subsequent transmission and playback steps are the same as in the first flow.
Further, the method also includes the following main-venue/branch-venue audience interaction flow:
step C1: a camera shoots the branch venue's auditorium; video and audio are captured, encoded and transmitted to the main venue;
step C2: the main venue decodes the stream transmitted in C1, applies XR processing to the video picture, and renders and outputs it to an L-shaped screen composed of the front large screen and the floor screen, producing a naked-eye 3D audience effect;
step C3: the decoded audio is sent to the main venue's mixing console, and the singer hears the branch-venue audience through an in-ear monitoring system;
step C4: the camera switches to a position facing the front large screen and shoots the main-venue singer together with the branch-venue audience shown on that screen, creating the effect of the main-venue singer performing amid the branch-venue audience;
step C5: a microphone captures the main-venue singer's voice, and the picture shot by the camera is synchronized with the voice through the all-in-one directing unit;
step C6: the branch venue's player decodes and plays the stream via RTC; the large screen shows the effect of the main-venue singer performing amid the branch-venue audience, and the branch venue's speakers play the main venue's sound, achieving two-way interaction of virtual pictures and sound.
Further, step A4 includes the sub-step: the position and angle information of the main venue's shooting camera is taken as the reference, and the branch-venue cameras keep their position information consistent with the main venue's camera.
Further, step A5 includes the sub-step: the camera shooting the branch venue's virtual performance keeps consistent with the main venue's camera position; when shot switching is involved, the main venue first cuts to a shot showing only the main-venue performers, while the branch venue's shot is kept consistent with the next shot to be cut to at the main venue.
Further, in step A6, the complete picture consists of the main-venue performers, the branch-venue performers and the complete background.
Further, step A7 includes the sub-step: composite the main venue's video with the video transmitted from the branch venue, producing the effect of performers in different places performing in the same virtual scene.
Further, step B1 includes the sub-step: a complete song is segmented in advance, and the singers at the main and branch venues each sing the segments assigned to them; the singers at both venues wear earphones to monitor the accompaniment and each other's voices.
Further, in step B3, the complete song consists of the accompaniment plus the main-venue and branch-venue singers.
Further, in step C3, the sounds the singer hears from the branch-venue audience through the in-ear monitoring system include shouting, cheering and applause.
The beneficial effects of the invention include:
according to the embodiment of the invention, two sets of XR systems are built in the main meeting place and the branch meeting places, the depth fusion of remote performers, on-site performers and performance scenes is realized by adopting a splicing mode, and different from the traditional connection of windows, the cross-space realistic performance that performers in the main meeting place can interact with limbs, connect with wheat and interact with audiences is realized, and the lens switching during XR performance is supported.
The embodiment of the invention innovating the interaction method of the performer in the main meeting place and the audience in the branch meeting place, the audience in the branch meeting place can feel that the performer in the main meeting place enters the auditorium from the screen, the distance between the performer in the main meeting place and the audience in different places is shortened, and the immersive experience of the branch meeting place is enhanced.
The embodiment of the invention provides an XR-based method for remote interactive performance, which can realize multi-place linkage cloud performance, realize cross-space vivid performance that performers in a main meeting place can interact with each other physically, connect with each other and interact with audiences without connecting with a traditional broadcast television window, support lens switching during XR performance, increase immersive experience of the performance while drawing the distance between the artistic performance and the audiences in different places, and show a cool effect by digital creative content.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of the main-venue/branch-venue interactive performance according to an embodiment of the invention;
FIG. 2 is a schematic diagram of the main-venue/branch-venue audience interaction in an embodiment of the invention.
Detailed Description
All features disclosed in all embodiments of this specification, and all implicitly disclosed method or process steps, may be combined, extended or substituted in any way, except for mutually exclusive features and/or steps.
In a specific implementation, as one embodiment of the invention, the method includes the interactive performance flow of the main venue and the branch venue. In this scenario the interactive performance serves the performers' interaction, including the fusion of pictures and the linking of sound. The main venue is equipped with rear and front screens: the rear screens comprise two side screens, a back screen and a floor screen, and the front screen is a front large screen. The branch venue is equipped with a large LED screen, and its virtual-scene performance uses an L-shaped screen formed by a back screen and a floor screen. The pictures of the main venue and the branch venue are spliced together. The flow comprises the following steps:
Step A1: produce a virtual background serving as the background picture for the performers at the main and branch venues.
Step A2: plan the region to be spliced, splitting the virtual background into two parts: one part is the picture displayed at the main venue, the other is the picture displayed on the branch venue's L-shaped screen. Using an L-shaped screen, and achieving a degree of 3D effect through XR, helps the branch-venue picture blend seamlessly into the main-venue composite. With an ordinary single flat screen, the camera captures only a flat 2D picture, and embedding it into the main venue's background causes an obvious depth mismatch; this step solves that technical problem.
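The background split in step A2 can be illustrated with a toy row-major pixel grid; the seam column, grid size and function name are invented for this sketch:

```python
def plan_splice(frame, seam_col):
    # A2: everything left of the seam column is shown at the main venue,
    # everything from the seam onward on the branch venue's L-shaped screen.
    main_part = [row[:seam_col] for row in frame]
    l_screen_part = [row[seam_col:] for row in frame]
    return main_part, l_screen_part

frame = [[c for c in range(8)] for _ in range(2)]  # 2x8 toy "image"
main_part, l_part = plan_splice(frame, 5)
```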
Step A3: perform XR rendering at the main and branch venues, rendering and mapping the two parts of the virtual background split in step A2 onto the main venue's rear screen and the branch venue's L-shaped screen respectively, with the pictures of the two venues kept synchronous through time code plus frame alignment.
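A minimal sketch of time-code-based frame alignment, assuming SMPTE-style HH:MM:SS:FF time codes at 25 fps (the patent does not specify the frame rate or time-code format):

```python
def timecode_to_frame(hh, mm, ss, ff, fps=25):
    # Convert an HH:MM:SS:FF time code to an absolute frame index.
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def aligned(tc_main, tc_branch, fps=25):
    # The two venues' pictures count as frame-aligned when both are on
    # the same absolute frame index for the shared time code.
    return timecode_to_frame(*tc_main, fps) == timecode_to_frame(*tc_branch, fps)
```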
Step A4: to ensure the consistency of the spliced picture, this embodiment takes the position and angle information of the main venue's shooting camera as the reference, and the branch-venue cameras must keep their position information consistent with the main venue's camera.
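A rough illustration of the camera-consistency check in step A4; the pose layout (x, y, z, pan, tilt), the field order and the tolerance are all assumptions of this sketch, not from the patent:

```python
def poses_match(main_pose, branch_pose, tol=0.01):
    # Compare position (x, y, z) and angles (pan, tilt) component-wise
    # against the main venue's reference pose.
    return all(abs(a - b) <= tol for a, b in zip(main_pose, branch_pose))

main_pose = (1.0, 0.0, 2.5, 30.0, 5.0)    # x, y, z, pan, tilt (reference)
branch_pose = (1.0, 0.0, 2.5, 30.0, 5.0)  # kept identical to the reference
```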
Step A5: mix the branch-venue camera picture with the branch-venue singer's vocals, then encode and transmit.
In this step, the camera shooting the branch venue's virtual performance keeps consistent with the main venue's camera position to avoid parallax after splicing. When shot switching is involved, the cameras need some time to regain synchronization, and the demands of live broadcast must be considered; this embodiment therefore first cuts to a shot at the main venue that shows only the main-venue performers, while the branch venue's camera is kept consistent with the next shot the main venue will cut to.
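The shot-switching rule just described can be sketched as follows; the shot names and the list-based shot sequence are illustrative assumptions, not from the patent:

```python
def next_branch_shot(main_shot_sequence, current_index):
    # While the main venue holds a main-venue-only shot, the branch camera
    # pre-frames the shot the main venue will cut to next, so the cut
    # lands with both venues already in sync.
    nxt = current_index + 1
    return main_shot_sequence[nxt] if nxt < len(main_shot_sequence) else None

shots = ["wide", "main-venue-closeup", "spliced-duet"]
```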
Step A6: the main venue receives the stream transmitted from the branch venue and decodes it; the all-in-one directing unit splices and composites the pictures into a complete picture (main-venue performers, branch-venue performers and the complete background), which is monitored on the same unit.
Step A7: perform VR implantation on the spliced video, then render and output it; using RTC, the all-in-one unit pushes the video to the cloud, a player at the branch venue plays it, the picture is shown on the branch venue's large screen, and the audio is played through multiple speakers. The main venue's video is composited with the video transmitted from the branch venue, producing the effect of performers in different places performing in the same virtual scene.
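Putting steps A2 and A6 together, the split planned at the seam is undone at the main venue by splicing the two picture parts row by row; a toy round-trip sketch with invented values:

```python
def splice_rows(main_part, branch_part):
    # A6: rejoin the two picture parts row by row along the planned seam.
    return [m + b for m, b in zip(main_part, branch_part)]

main_part = [[0, 1, 2], [0, 1, 2]]  # left portion shown at the main venue
branch_part = [[3, 4], [3, 4]]      # right portion from the L-shaped screen
complete = splice_rows(main_part, branch_part)
```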
In a specific implementation, as another embodiment of the invention, the method includes the mic-linking production flow of the main venue and the branch venue, which comprises the following steps:
Step B1: the singers at the main and branch venues sing. In this step, a complete song is segmented in advance, and the singers at the main and branch venues each sing the segments assigned to them. Optionally, the singers at both venues wear earphones to monitor the accompaniment and each other's voices.
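The pre-show song segmentation in step B1 might look like the following sketch; the round-robin assignment and segment names are assumptions, since the patent only says segments are assigned in advance:

```python
def assign_segments(segments, venues):
    # B1: hand out song segments venue by venue so that the spliced
    # output reconstructs the full song in order.
    plan = {v: [] for v in venues}
    for i, seg in enumerate(segments):
        plan[venues[i % len(venues)]].append(seg)
    return plan

plan = assign_segments(["verse1", "chorus1", "verse2", "chorus2"],
                       ["main", "branch"])
```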
Step B2: the branch venue's sound and picture are composited as in step A5, and the picture part is spliced with the main venue's picture.
Step B3: the main venue receives the audio and video streams transmitted from the branch venue and decodes them; the all-in-one unit synchronously splices the audio, outputs the complete song (accompaniment + main-venue singer + branch-venue singer), and performs audio-video synchronization.
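The audio splice in step B3 can be illustrated as summing offset-aligned tracks; the sample-level lists and offsets are toy stand-ins for real audio buffers:

```python
def mix_tracks(accompaniment, vocal_tracks):
    # B3: line each venue's vocal track up with the accompaniment by its
    # start offset, then sum the samples. Each vocal track is a pair
    # (start_offset, samples); out-of-range samples are dropped here.
    mix = list(accompaniment)
    for offset, samples in vocal_tracks:
        for i, s in enumerate(samples):
            if 0 <= offset + i < len(mix):
                mix[offset + i] += s
    return mix

# Accompaniment plus main-venue vocals at offset 0 and branch vocals at 2.
song = mix_tracks([1, 1, 1, 1], [(0, [2, 2]), (2, [3, 3])])
```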
and step B4: the subsequent transmission and playback steps are the same as in the first part.
In a specific implementation, as another embodiment of the invention, the method includes the audience interaction flow of the main venue and the branch venue, which comprises the following steps:
Step C1: a camera shoots the branch venue's auditorium; video and audio are captured, encoded and transmitted to the main venue.
Step C2: the main venue decodes the stream transmitted in C1, applies XR processing to the video picture, and renders and outputs it to an L-shaped screen composed of the front large screen and the floor screen, producing a naked-eye 3D audience effect.
Step C3: the decoded audio is sent to the main venue's mixing console, and singer A can hear the branch-venue audience (shouting, cheering, applause and so on) through an in-ear monitoring system.
Step C4: the camera switches to a position facing the front large screen and shoots the main-venue singer together with the branch-venue audience shown on that screen, creating the effect of the main-venue singer performing amid the branch-venue audience.
Step C5: a microphone captures the main-venue singer's voice, and the picture shot by camera X is synchronized with the voice through the all-in-one directing unit.
Step C6: the branch venue's player decodes and plays the stream via RTC; the large screen shows the effect of the main-venue singer performing amid the branch-venue audience, and the branch venue's speakers play the main venue's sound, achieving two-way interaction of virtual pictures and sound.
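The routing in steps C2 and C3 can be condensed into a small sketch; the dictionary keys and feed labels are invented names, not terms from the patent:

```python
def route_audience_feed(feed):
    # C2: the decoded audience video goes to the L-shaped front/floor
    # screen wall; C3: the decoded audience audio goes to the singer's
    # in-ear monitor so cheers and applause are heard on stage.
    return {
        "front_l_screen": feed["video"],
        "in_ear_monitor": feed["audio"],
    }

routes = route_audience_feed({"video": "audience-camera-feed",
                              "audio": "audience-cheering"})
```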
Example 1
The XR-based method for interactive performance in different places is characterized in that after XR systems are respectively built in a main meeting place and a branch meeting place, the following interactive performance processes of the main meeting place and the branch meeting place are executed:
step A1: making a virtual background for background pictures of performers in the main meeting place and the branch meeting place;
step A2: planning a region to be spliced, and splitting the virtual background into two parts, wherein one part is a picture part displayed by a main meeting place, and the other part is a picture displayed by an L screen of a branch meeting place;
step A3: b, rendering by the main meeting place and the branch meeting place XR, respectively rendering and mapping the virtual background split in the step A2 on a back screen of the main meeting place and an L screen of the branch meeting place, and keeping the pictures of the main meeting place synchronous through time codes and frame alignment;
step A4: the main meeting place camera carries out main meeting place virtual-real fusion to shoot real virtual scenery, and the branch meeting place camera is responsible for shooting real virtual scenery through L-screen virtual-real fusion;
step A5: mixing the picture of the camera in the branch meeting place with the voice of the singer in the branch meeting place, and then carrying out coding transmission;
step A6: the main meeting place receives the streams transmitted from the branch meeting places, decodes the streams, and carries out splicing and synthesis on pictures by the all-in-one machine to form complete pictures which are monitored by the all-in-one machine;
step A7: and performing VR implantation on the spliced video, rendering and outputting, adopting an RTC mode, pushing the video to the cloud by the integrated machine, playing by a player in the branch meeting place, displaying a picture on a large screen in the branch meeting place, and playing audio by a plurality of sound boxes.
Example 2
On the basis of Example 1, the method further comprises executing the following mic-linking (remote duet) production process between the main venue and the branch venue:
step B1: the singers at the main venue and the branch venue perform the song;
step B2: as in step A5, the branch venue's sound and picture are combined and then spliced with the main venue's picture;
step B3: the main venue receives the video and audio streams transmitted from the branch venue and decodes them; the all-in-one unit synchronously splices the audio streams, outputs the complete song, and performs audio-video synchronization;
step B4: the subsequent transmission and playback steps are the same as in Example 1.
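The synchronous audio splicing in step B3 can be pictured as laying each venue's sung segment onto one timeline and rejecting gaps or overlaps. A minimal sketch, with invented segment times and sample payloads:

```python
def splice_song(segments):
    """Reassemble the complete vocal track from per-venue segments.

    Each segment is (start_ms, end_ms, venue, samples); the segments
    must tile the timeline exactly, with no gap or overlap, mirroring
    the all-in-one unit's synchronous audio splicing in step B3.
    """
    out = []
    expected_start = 0
    for start, end, venue, samples in sorted(segments):
        if start != expected_start:
            raise ValueError(f"gap or overlap at {start} ms")
        out.extend(samples)
        expected_start = end
    return out

full_track = splice_song([
    (2000, 4000, "branch", [30, 40]),  # branch venue sings verse 2
    (0, 2000, "main", [10, 20]),       # main venue sings verse 1
])
```

A real implementation would operate on PCM buffers and crossfade at the seams; the contiguity check is the part that matters for keeping the complete song intact.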
Example 3
On the basis of Example 1, the method further comprises the following audience-interaction process between the main venue and the branch venue:
step C1: a camera shoots the branch venue's auditorium, and the captured video and audio are encoded and transmitted to the main venue;
step C2: the main venue decodes the stream transmitted in step C1, performs XR processing on the video picture, and renders it out to an L-shaped screen composed of a front large screen and a floor screen, forming a glasses-free 3D audience effect;
step C3: the decoded audio is sent to the main venue's mixing console, and the singer hears the branch venue audience through an in-ear monitoring system;
step C4: the camera switches to a position facing the front large screen and shoots the main venue singer together with the branch venue audience shown on the front large screen, forming the effect of the main venue singer performing amid the branch venue audience;
step C5: a microphone captures the main venue singer's voice, and the director's all-in-one unit synchronizes the camera picture with the voice;
step C6: the branch venue player decodes and plays the stream via RTC, the large screen shows the main venue singer performing amid the branch venue audience, and the branch venue speakers play the main venue audio, achieving two-way interaction of virtual pictures and sound.
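The audio-video synchronization of steps C5-C6 boils down to delaying whichever stream arrives first so picture and sound present together. A sketch of that compensation; the latency figures are assumptions, not measurements from the patent:

```python
def av_sync_delays(video_latency_ms, audio_latency_ms):
    """Compute the delay to add to whichever stream is ahead so that
    picture and sound present together, as the director's all-in-one
    unit does in step C5. Latencies are end-to-end, in milliseconds."""
    lead = video_latency_ms - audio_latency_ms
    return {"audio_delay_ms": max(lead, 0), "video_delay_ms": max(-lead, 0)}
```

For example, if video takes 120 ms end to end and audio only 80 ms, the audio is held back 40 ms; in the opposite case the video is held instead.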
Example 4
On the basis of Example 1, step A4 comprises the sub-step of: taking the position and angle information of the main venue's camera as the reference, and keeping the branch venue camera's position information consistent with it.
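Keeping the branch camera consistent with the main camera's reference, as Example 4 requires, can be checked by comparing the two poses within a tolerance. The field names (x, y, z in metres; pan and tilt in degrees) and the tolerance are illustrative assumptions:

```python
import math

def poses_consistent(main_pose, branch_pose, tol=1e-3):
    """Check a branch camera's pose against the main camera's reference
    pose. Returns True only when every field matches within tol."""
    return all(
        math.isclose(main_pose[k], branch_pose[k], abs_tol=tol)
        for k in ("x", "y", "z", "pan", "tilt")
    )

# Illustrative reference pose for the main venue camera.
reference = {"x": 0.0, "y": 1.6, "z": 4.0, "pan": 0.0, "tilt": -2.0}
```

In a live rig the branch venue's camera controller would poll the main camera's tracking data and servo toward the reference rather than merely verifying it.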
Example 5
On the basis of Example 1, step A5 comprises the sub-step of: the camera shooting the branch venue's virtual performance keeps its position consistent with that of the main venue camera; if shot switching is involved, the main venue first cuts to a shot framing only the main venue performer, and meanwhile the branch venue's shot is made consistent with the next shot the main venue will cut to.
Example 6
On the basis of Example 1, in step A6 the complete picture comprises the main venue performers, the branch venue performers, and the complete background.
Example 7
On the basis of Example 1, step A7 comprises the sub-step of: compositing the main venue's video with the video transmitted from the branch venue, forming the effect of remote performers appearing in the same virtual scene.
Example 8
On the basis of Example 2, step B1 comprises the sub-steps of: a complete song is segmented in advance, and the singers at the main and branch venues each sing their assigned segments; the singers at both venues wear in-ear monitors to hear the accompaniment and each other's voices.
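The pre-segmentation in Example 8 can be sketched as cutting the song at chosen time boundaries and handing segments to the venues in turn. The boundary times and the round-robin assignment order are invented for illustration; the patent only requires that the segments be agreed in advance:

```python
def assign_segments(boundaries_ms, venues):
    """Pre-segment a complete song at the given time boundaries and
    assign the resulting segments to the venues' singers in turn.
    Returns (start_ms, end_ms, venue) triples covering the song."""
    spans = list(zip(boundaries_ms[:-1], boundaries_ms[1:]))
    return [
        (start, end, venues[i % len(venues)])
        for i, (start, end) in enumerate(spans)
    ]

# Three 30-second segments alternating between the two venues.
plan = assign_segments([0, 30000, 60000, 90000], ["main", "branch"])
```

Such a plan is exactly the input the splicing of step B3 needs: each venue records only its assigned spans, and the all-in-one unit reassembles them on one timeline.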
Example 9
On the basis of Example 2, in step B3 the complete song comprises the accompaniment together with the main venue and branch venue singers' vocals.
Example 10
On the basis of Example 3, in step C3 the sounds of the branch venue audience that the singer hears through the in-ear monitoring system include shouting, cheering, and applause.
The parts not described in detail in the present invention are the same as, or can be implemented using, the prior art.
The above-described embodiments are intended to be illustrative only; based on the teachings of the present invention, those skilled in the art can readily make various modifications and variations of the described embodiments without departing from the spirit and scope of the invention.
Those skilled in the art may also devise embodiments other than the above based on the foregoing disclosure, or by adapting knowledge or techniques of the related art, and features of the various embodiments may be interchanged or substituted; such modifications and variations made without departing from the spirit and scope of the present invention are intended to fall within the scope of the following claims.

Claims (10)

1. An XR-based remote interactive performance method, characterized in that, after an XR system is built at each of a main venue and a branch venue, the following interactive performance process between the main venue and the branch venue is executed:
step A1: producing a virtual background to serve as the background picture for the performers at the main venue and the branch venue;
step A2: planning the region to be spliced and splitting the virtual background into two parts, one part being the picture displayed at the main venue and the other being the picture displayed on the branch venue's L-screen;
step A3: XR rendering at the main venue and the branch venue: the virtual background split in step A2 is rendered and mapped onto the main venue's back screen and the branch venue's L-screen respectively, and the pictures of the two venues are kept synchronized through timecodes and frame alignment;
step A4: the main venue camera performs main-venue virtual-real fusion shooting of the real performer against the virtual scene, and the branch venue camera performs L-screen virtual-real fusion shooting of the real performer against the virtual scene;
step A5: the picture from the branch venue camera is mixed with the branch venue singer's voice, then encoded and transmitted;
step A6: the main venue receives the stream transmitted from the branch venue and decodes it; the all-in-one production unit splices and composites the pictures into a complete picture, which is monitored on the all-in-one unit;
step A7: VR elements are inserted into the spliced video, which is rendered and output; using RTC, the all-in-one unit pushes the stream to the cloud, a player at the branch venue plays it, the picture is displayed on the branch venue's large screen, and the audio is played through multiple speakers.
2. The XR-based remote interactive performance method as claimed in claim 1, further comprising executing the following mic-linking (remote duet) production process between the main venue and the branch venue:
step B1: the singers at the main venue and the branch venue perform the song;
step B2: as in step A5, the branch venue's sound and picture are combined and then spliced with the main venue's picture;
step B3: the main venue receives the video and audio streams transmitted from the branch venue and decodes them; the all-in-one unit synchronously splices the audio streams, outputs the complete song, and performs audio-video synchronization;
step B4: the subsequent transmission and playback steps are the same as in claim 1.
3. The XR-based remote interactive performance method as claimed in claim 1, further comprising executing the following audience-interaction process between the main venue and the branch venue:
step C1: a camera shoots the branch venue's auditorium, and the captured video and audio are encoded and transmitted to the main venue;
step C2: the main venue decodes the stream transmitted in step C1, performs XR processing on the video picture, and renders it out to an L-shaped screen composed of a front large screen and a floor screen, forming a glasses-free 3D audience effect;
step C3: the decoded audio is sent to the main venue's mixing console, and the singer hears the branch venue audience through an in-ear monitoring system;
step C4: the camera switches to a position facing the front large screen and shoots the main venue singer together with the branch venue audience shown on the front large screen, forming the effect of the main venue singer performing amid the branch venue audience;
step C5: a microphone captures the main venue singer's voice, and the director's all-in-one unit synchronizes the camera picture with the voice;
step C6: the branch venue player decodes and plays the stream via RTC, the large screen shows the main venue singer performing amid the branch venue audience, and the branch venue speakers play the main venue audio, achieving two-way interaction of virtual pictures and sound.
4. The XR-based remote interactive performance method as claimed in claim 1, wherein step A4 comprises the sub-step of: taking the position and angle information of the main venue's camera as the reference, and keeping the branch venue camera's position information consistent with it.
5. The XR-based remote interactive performance method as claimed in claim 1, wherein step A5 comprises the sub-step of: the camera shooting the branch venue's virtual performance keeps its position consistent with that of the main venue camera; if shot switching is involved, the main venue first cuts to a shot framing only the main venue performer, and meanwhile the branch venue's shot is made consistent with the next shot the main venue will cut to.
6. The XR-based remote interactive performance method as claimed in claim 1, wherein in step A6 the complete picture comprises the main venue performers, the branch venue performers, and the complete background.
7. The XR-based remote interactive performance method as claimed in claim 1, wherein step A7 comprises the sub-step of: compositing the main venue's video with the video transmitted from the branch venue, forming the effect of remote performers appearing in the same virtual scene.
8. The XR-based remote interactive performance method as claimed in claim 2, wherein step B1 comprises the sub-steps of: segmenting a complete song in advance, the singers at the main and branch venues each singing their assigned segments; and the singers at both venues wearing in-ear monitors to hear the accompaniment and each other's voices.
9. The XR-based remote interactive performance method as claimed in claim 2, wherein in step B3 the complete song comprises the accompaniment together with the main venue and branch venue singers' vocals.
10. The XR-based remote interactive performance method as claimed in claim 3, wherein in step C3 the sounds of the branch venue audience that the singer hears through the in-ear monitoring system include shouting, cheering, and applause.
CN202210819362.8A 2022-07-13 2022-07-13 XR-based remote interactive performance method Active CN115209172B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210819362.8A CN115209172B (en) 2022-07-13 2022-07-13 XR-based remote interactive performance method


Publications (2)

Publication Number Publication Date
CN115209172A true CN115209172A (en) 2022-10-18
CN115209172B CN115209172B (en) 2023-07-07

Family

ID=83579608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210819362.8A Active CN115209172B (en) 2022-07-13 2022-07-13 XR-based remote interactive performance method

Country Status (1)

Country Link
CN (1) CN115209172B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115439587A (en) * 2022-11-08 2022-12-06 成都索贝数码科技股份有限公司 2.5D rendering method based on object visual range
CN115657862A (en) * 2022-12-27 2023-01-31 海马云(天津)信息技术有限公司 Method and device for automatically switching virtual KTV scene pictures, storage medium and equipment
CN115802165A (en) * 2023-02-10 2023-03-14 成都索贝数码科技股份有限公司 Lens moving shooting method applied to live connection of different places and same scenes
CN116781958A (en) * 2023-08-18 2023-09-19 成都索贝数码科技股份有限公司 XR-based multi-machine-position presentation system and method
CN116931737A (en) * 2023-08-03 2023-10-24 重庆康建光电科技有限公司 System and method for realizing virtual reality interaction between person and scene
CN117149016A (en) * 2023-10-26 2023-12-01 锋尚文化集团股份有限公司 Virtual object control method, device and system

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102170361A (en) * 2011-03-16 2011-08-31 西安电子科技大学 Virtual-reality-based network conference method
CN104702936A (en) * 2015-03-31 2015-06-10 王子强 Virtual reality interaction method based on glasses-free 3D display
US20160330408A1 (en) * 2015-04-13 2016-11-10 Filippo Costanzo Method for progressive generation, storage and delivery of synthesized view transitions in multiple viewpoints interactive fruition environments
CN106210703A (en) * 2016-09-08 2016-12-07 北京美吉克科技发展有限公司 The utilization of VR environment bust shot camera lens and display packing and system
CN106789991A (en) * 2016-12-09 2017-05-31 福建星网视易信息系统有限公司 A kind of multi-person interactive method and system based on virtual scene
US20180061138A1 (en) * 2016-08-31 2018-03-01 Factual VR, Inc. Virtual reality system
CN111447460A (en) * 2020-05-15 2020-07-24 杭州当虹科技股份有限公司 Method for applying low-delay network to broadcasting station
CN112135158A (en) * 2020-09-17 2020-12-25 重庆虚拟实境科技有限公司 Live broadcasting method based on mixed reality and related equipment
CN112492231A (en) * 2020-11-02 2021-03-12 重庆创通联智物联网有限公司 Remote interaction method, device, electronic equipment and computer readable storage medium
US11100695B1 (en) * 2020-03-13 2021-08-24 Verizon Patent And Licensing Inc. Methods and systems for creating an immersive character interaction experience
CN114401414A (en) * 2021-12-27 2022-04-26 北京达佳互联信息技术有限公司 Immersive live broadcast information display method and system and information push method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHANSHAN YIN: "Design of Broadcasting and Hosting Information Guiding Platform based on Virtual Environment Data Modeling with Mixed Reality System", 2022 6TH INTERNATIONAL CONFERENCE ON TRENDS IN ELECTRONICS AND INFORMATICS (ICOEI) *
SHEN JINCHANG: "Construction and Application Analysis of the XR Intelligent Studio System", RADIO & TELEVISION INFORMATION (广播电视信息), no. 4 *


Also Published As

Publication number Publication date
CN115209172B (en) 2023-07-07

Similar Documents

Publication Publication Date Title
CN115209172B (en) XR-based remote interactive performance method
CN106792246B (en) Method and system for interaction of fusion type virtual scene
CN106789991B (en) Multi-person interactive network live broadcast method and system based on virtual scene
US20190090028A1 (en) Distributing Audio Signals for an Audio/Video Presentation
US8289367B2 (en) Conferencing and stage display of distributed conference participants
CN210021183U (en) Immersive interactive panoramic holographic theater and performance system
US10531158B2 (en) Multi-source video navigation
US6937295B2 (en) Realistic replication of a live performance at remote locations
Schreer et al. Ultrahigh-resolution panoramic imaging for format-agnostic video production
US20110304735A1 (en) Method for Producing a Live Interactive Visual Immersion Entertainment Show
KR20180052494A (en) Conference system for big lecture room
JP2006041886A (en) Information processor and method, recording medium, and program
US20090153550A1 (en) Virtual object rendering system and method
CN113259544B (en) Remote interactive holographic demonstration system and method
Schweiger et al. Tools for 6-Dof immersive audio-visual content capture and production
KR20090000550A (en) Methode of a cyber public performance on stage with 3d display
Sugawara et al. Super hi-vision at the London 2012 Olympics
US20220343951A1 (en) Method and apparatus for production of a real-time virtual concert or collaborative online event
KR102273439B1 (en) Multi-screen playing system and method of providing real-time relay service
CN113315885A (en) Holographic studio and system for remote interaction
Koide et al. Development of high-resolution virtual reality system by projecting to large cylindrical screen
CN114079799A (en) Music live broadcast system and method based on virtual reality
Batke et al. Spatial audio processing for interactive TV services
CN117939102B (en) Data processing system for realizing broadcast and television engineering application based on XR model
Thomas et al. State‐of‐the‐Art and Challenges in Media Production, Broadcast and Delivery

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant