CN110225224B - Avatar directing and broadcasting method, device and system


Info

Publication number
CN110225224B
Authority
CN
China
Prior art keywords: data, real, virtual, scene, virtual scene
Legal status: Active (an assumption, not a legal conclusion; no legal analysis has been performed)
Application number: CN201910605279.9A
Other languages: Chinese (zh)
Other versions: CN110225224A (en)
Inventor
周健巍 (Zhou Jianwei)
唐锋 (Tang Feng)
陈泽鑫 (Chen Zexin)
Current Assignee: Beijing Xingludong Technology Co., Ltd. (listed assignees may be inaccurate)
Original Assignee: Beijing Xingludong Technology Co., Ltd.
Application filed by Beijing Xingludong Technology Co., Ltd.
Priority: CN201910605279.9A
Published as CN110225224A (application) and granted as CN110225224B

Classifications

    • G06T13/205: 3D [Three Dimensional] animation driven by audio data
    • G06T13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • H04N5/2224: Studio circuitry, devices or equipment related to virtual studio applications
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects

Abstract

The invention relates to a method, a device and a system for directing and broadcasting an avatar performance. The method comprises the following steps: receiving data of a real scene; sending the data of the real scene to a virtual scene generation platform so that the virtual scene generation platform generates data of a virtual scene from the data of the real scene; sending a control signal to the virtual scene generation platform to control the generation of the data of the virtual scene; receiving the data of the virtual scene generated by the virtual scene generation platform; mixing and superimposing the data of the virtual scene and the data of the real scene to obtain performance data; and outputting the performance data. With the avatar directing method, device and system, the virtual scene can be controlled in real time, and the virtual scene and the real scene can be conveniently mixed and superimposed.

Description

Avatar directing and broadcasting method, device and system
Technical Field
The invention relates to the technical fields of virtual animation production and live multimedia broadcasting, and in particular to a method, a device and a system for directing an avatar broadcast.
Background
The director platforms commonly used in the industry today are large assemblies of broadcast equipment comprising multiple monitors, audio mixing consoles, lighting consoles and other devices. Each media signal is fed to the director platform through its own signal line, and an operator switches between the media signals; the platform's main function is directing the broadcast.
Existing director platforms are complicated to build, consume a great deal of time and effort to install and dismantle, and serve only a single function. They struggle to meet the complex and varied requirements of producing and performing audio-visual content for an avatar (also called a virtual idol or virtual character), especially during the production and live broadcast of an avatar program.
In addition, in existing avatar video production and performance workflows, all scenes, lights, shots, characters and props must be configured in advance; once the workflow has started, any adjustment requires stopping the whole pipeline, then reconfiguring and restarting it. Meanwhile, the virtual scene is completely bound to the real scene: props and effects cannot be added out of thin air, so controllability is poor and the degree of freedom is low.
Disclosure of Invention
The invention aims to provide a novel avatar directing method, device and system that can control the virtual scene in real time and make it convenient to mix and superimpose the virtual scene with the real scene.
This aim is achieved by the following technical solution. The avatar directing method provided by the invention comprises the following steps: receiving data of a real scene; sending the data of the real scene to a virtual scene generation platform so that the virtual scene generation platform generates data of a virtual scene from the data of the real scene; sending a control signal to the virtual scene generation platform to control the generation of the data of the virtual scene; receiving the data of the virtual scene generated by the virtual scene generation platform; mixing and superimposing the data of the virtual scene and the data of the real scene to obtain performance data; and outputting the performance data.
The object of the invention can be further achieved by the following technical measures.
In the foregoing directing method, sending a control signal to the virtual scene generation platform to control the generation of the data of the virtual scene comprises: receiving a first type of operation instruction from an operator and sending it to the virtual scene generation platform to control the generation of the data of the virtual scene. Mixing and superimposing the data of the virtual scene and the data of the real scene to obtain the performance data comprises: receiving a second type of operation instruction from an operator and, according to it, processing the data of the virtual scene before superimposition, the data of the real scene before superimposition, or the result of the superimposition.
In the foregoing directing method, receiving data of a real scene comprises: receiving one or more of real picture data, real sound data, state parameters and coordinate parameters of the real scene;
wherein the real picture data comprises one or more of pictures of the real scene captured in real time, pre-recorded pictures of the real scene and pre-processed pictures of the real scene; the real sound data comprises one or more of the voice of an avatar's actor, the sound of props, special effects or persons in the real scene, and pre-recorded sound; the state parameters of the real scene represent the states of real objects in the real scene; the coordinate parameters comprise one or more of the position, orientation and size of a real object in the real scene; and the real objects comprise one or more of real lights, real cameras, real backgrounds, real props, real costumes, real persons and real special effects.
In the foregoing directing method, sending the data of the real scene to the virtual scene generation platform so that it generates the data of the virtual scene comprises: sending the coordinate parameters and state parameters of the real objects to the virtual scene generation platform so that the platform maps one or more real objects into the virtual scene in real time according to those parameters, obtaining virtual objects.
In the foregoing directing method, receiving a first type of operation instruction from an operator and sending it to the virtual scene generation platform to control the generation of the data of the virtual scene comprises one or more of the following steps:
receiving a first operation instruction from an operator and sending it to the virtual scene generation platform to control the virtual objects in the virtual scene, the first operation instruction comprising switching and adjusting virtual props, virtual costumes and virtual backgrounds in real time, and adding or deleting virtual props, virtual costumes and virtual backgrounds in the virtual scene in real time;
receiving a second operation instruction from an operator and sending it to the virtual scene generation platform to add one or more virtual special effects to the virtual scene in real time; receiving a third operation instruction from an operator and sending it to the virtual scene generation platform to control a virtual special effect, the third operation instruction comprising adjusting in real time the size, position, orientation and style of the virtual special effect and the time at which it starts or is interrupted;
receiving a fourth operation instruction from an operator and sending it to the virtual scene generation platform to control the virtual lights in the virtual scene, the fourth operation instruction comprising adjusting the state parameters of a virtual light in real time, adjusting the coordinate parameters of a virtual light, and adding or deleting lights in the virtual scene in real time, the state parameters of a virtual light including one or more of color, projection size and projection style;
receiving a fifth operation instruction from an operator and sending it to the virtual scene generation platform to control the virtual cameras in the virtual scene, the fifth operation instruction comprising adjusting the state parameters of a virtual camera, adjusting the coordinate parameters of a virtual camera, adding or deleting cameras in the virtual scene, and switching dynamically among several virtual cameras to select the live camera position, the state parameters of a virtual camera including one or more of focal length, aperture and motion parameters;
receiving a sixth operation instruction from an operator and sending it to the virtual scene generation platform to manage the avatars, the sixth operation instruction comprising one or more of assigning and switching the correspondence between one or more actors and one or more avatars, repairing an avatar, and showing or hiding all or part of an avatar.
In the foregoing directing method, receiving the data of the virtual scene generated by the virtual scene generation platform comprises: receiving rendered picture data of the virtual scene, rendered by the virtual scene generation platform from the viewpoint of one or more virtual cameras;
and receiving the second type of operation instruction from an operator and, according to it, processing the data of the virtual scene before superimposition, the data of the real scene before superimposition, or the result of the superimposition comprises one or more of the following steps:
receiving a seventh operation instruction from an operator and, according to it, selecting the rendered picture data to use from several streams of rendered picture data, or the real picture data to use from several streams of real picture data;
receiving an eighth operation instruction from an operator and, according to it, adding a transition animation between streams of rendered picture data, between streams of real picture data, or between rendered picture data and real picture data.
In the foregoing directing method, mixing and superimposing the data of the virtual scene and the data of the real scene to obtain the performance data comprises: mixing and superimposing the real picture data and the rendered picture data according to the coordinate parameters of the real objects and the coordinate parameters of the virtual objects, obtaining performance picture data.
The foregoing directing method further comprises receiving material sound data before sending the data of the real scene to the virtual scene generation platform, the material sound data comprising one or more of background sound, music, pre-cut sound, special-effect sound and collision sound. Sending the data of the real scene to the virtual scene generation platform comprises: receiving a data request sent by the virtual scene generation platform and, according to the request, sending the required real sound data and/or material sound data to the platform so that it performs visual rendering from that sound data or generates virtual scene sound data from it. Receiving the data of the virtual scene generated by the virtual scene generation platform comprises receiving the virtual scene sound data, and mixing and superimposing the data of the virtual scene and the data of the real scene to obtain the performance data comprises mixing and processing the real sound data and the virtual scene sound data to obtain performance sound data.
In the foregoing directing method, receiving the second type of operation instruction from an operator and processing the data accordingly comprises: receiving a ninth operation instruction from an operator and, according to it, adjusting the virtual scene sound data before mixing, the real sound data before mixing, or the performance sound data obtained by mixing, the adjustment comprising adding sound effects, adjusting sound playback parameters, or inserting or interrupting sound playback midway.
In the foregoing directing method, receiving the data of the virtual scene generated by the virtual scene generation platform comprises: receiving data of several virtual scenes generated from the data of one real scene; and receiving the second type of operation instruction from an operator and processing the data accordingly comprises: receiving a tenth operation instruction from an operator and, according to it, switching among the virtual scenes in real time.
In the foregoing directing method, receiving data of a real scene further comprises: receiving time information corresponding to the data of the real scene, the data of the real scene being dynamic data of multiple frames. Sending a control signal to the virtual scene generation platform comprises: sending the time information to the platform so that it generates multiple frames of virtual scene data carrying time marks corresponding to the multiple frames of real scene data. Receiving the data of the virtual scene comprises receiving the time-marked data of the virtual scene, and mixing and superimposing the data of the virtual scene and the data of the real scene to obtain the performance data comprises: mixing and superimposing corresponding frames of the time-marked virtual scene data and the multi-frame real scene data according to the time information, thereby correcting offsets and keeping the virtual scene frame-synchronized with the real scene.
The aim of the invention is also achieved by the following technical solution. The avatar directing device provided by the invention comprises: a real scene receiving module for receiving data of a real scene; a real scene sending module for sending the data of the real scene to a virtual scene generation platform so that the platform generates data of a virtual scene from the data of the real scene; a virtual scene control module for sending a control signal to the virtual scene generation platform to control the generation of the data of the virtual scene; a virtual scene receiving module for receiving the data of the virtual scene generated by the platform; a mixing processing module for mixing and superimposing the data of the virtual scene and the data of the real scene to obtain performance data; and an output module for outputting the performance data.
The object of the invention can be further achieved by the following technical measures.
In the foregoing directing device, the virtual scene control module comprises a first-type operation control sub-module configured to receive a first type of operation instruction from an operator and send it to the virtual scene generation platform to control the generation of the data of the virtual scene; and the mixing processing module comprises a second-type operation control sub-module configured to receive a second type of operation instruction from an operator and, according to it, process the data of the virtual scene before superimposition, the data of the real scene before superimposition, or the result of the superimposition.
In the foregoing directing device, the real scene receiving module is specifically configured to receive one or more of real picture data, real sound data, state parameters and coordinate parameters of the real scene;
wherein the real picture data comprises one or more of pictures of the real scene captured in real time, pre-recorded pictures of the real scene and pre-processed pictures of the real scene; the real sound data comprises one or more of the voice of an avatar's actor, the sound of props, special effects or persons in the real scene, and pre-recorded sound; the state parameters of the real scene represent the states of real objects in the real scene; the coordinate parameters comprise one or more of the position, orientation and size of a real object in the real scene; and the real objects comprise one or more of real lights, real cameras, real backgrounds, real props, real costumes, real persons and real special effects.
In the foregoing directing device, the real scene sending module is specifically configured to send the coordinate parameters and state parameters of the real objects to the virtual scene generation platform so that the platform maps one or more real objects into the virtual scene in real time according to those parameters, obtaining virtual objects.
In the foregoing directing device, the first-type operation control sub-module comprises one or more of a scene control unit, a special effect control unit, a light control unit, a camera control unit and a character management unit, wherein:
the scene control unit is configured to receive a first operation instruction from an operator and send it to the virtual scene generation platform to control the virtual objects in the virtual scene, the first operation instruction comprising switching and adjusting virtual props, virtual costumes and virtual backgrounds in real time, and adding or deleting virtual props, virtual costumes and virtual backgrounds in the virtual scene in real time;
the special effect control unit is configured to receive a second operation instruction from an operator and send it to the virtual scene generation platform to add one or more virtual special effects to the virtual scene in real time, and to receive a third operation instruction from an operator and send it to the platform to control a virtual special effect, the third operation instruction comprising adjusting in real time the size, position, orientation and style of the virtual special effect and the time at which it starts or is interrupted;
the light control unit is configured to receive a fourth operation instruction from an operator and send it to the virtual scene generation platform to control the virtual lights in the virtual scene, the fourth operation instruction comprising adjusting the state parameters of a virtual light in real time, adjusting the coordinate parameters of a virtual light, and adding or deleting lights in the virtual scene in real time, the state parameters of a virtual light including one or more of color, projection size and projection style;
the camera control unit is configured to receive a fifth operation instruction from an operator and send it to the virtual scene generation platform to control the virtual cameras in the virtual scene, the fifth operation instruction comprising adjusting the state parameters of a virtual camera, adjusting the coordinate parameters of a virtual camera, adding or deleting cameras in the virtual scene, and switching dynamically among several virtual cameras to select the live camera position, the state parameters of a virtual camera including one or more of focal length, aperture and motion parameters;
the character management unit is configured to receive a sixth operation instruction from an operator and send it to the virtual scene generation platform to manage the avatars, the sixth operation instruction comprising one or more of assigning and switching the correspondence between one or more actors and one or more avatars, repairing an avatar, and showing or hiding all or part of an avatar.
In the foregoing directing device, the virtual scene receiving module is specifically configured to receive rendered picture data of the virtual scene, rendered by the virtual scene generation platform from the viewpoint of one or more virtual cameras;
the second-type operation control sub-module comprises a rendered picture editing unit configured to perform one or more of the following steps:
receiving a seventh operation instruction from an operator and, according to it, selecting the rendered picture data to use from several streams of rendered picture data, or the real picture data to use from several streams of real picture data;
receiving an eighth operation instruction from an operator and, according to it, adding a transition animation between streams of rendered picture data, between streams of real picture data, or between rendered picture data and real picture data.
In the foregoing directing device, the mixing processing module further comprises a picture mixing sub-module configured to mix and superimpose the real picture data and the rendered picture data according to the coordinate parameters of the real objects and the coordinate parameters of the virtual objects, obtaining performance picture data.
The foregoing directing device further comprises a material sound receiving module for receiving material sound data, the material sound data comprising one or more of background sound, music, pre-cut sound, special-effect sound and collision sound. The real scene sending module comprises a real sound sending unit configured to receive a data request sent by the virtual scene generation platform and, according to the request, send the required real sound data and/or material sound data to the platform so that it performs visual rendering from that sound data or generates virtual scene sound data from it. The virtual scene receiving module comprises a virtual scene sound receiving unit for receiving the virtual scene sound data, and the mixing processing module further comprises a sound mixing sub-module for mixing and processing the real sound data and the virtual scene sound data to obtain performance sound data.
In the foregoing directing device, the second-type operation control sub-module comprises a sound control unit configured to receive a ninth operation instruction from an operator and, according to it, adjust the virtual scene sound data before mixing, the real sound data before mixing, or the performance sound data obtained by mixing, the adjustment comprising adding sound effects in real time, adjusting sound playback parameters, or inserting or interrupting sound playback midway.
In the foregoing directing device, the virtual scene receiving module is specifically configured to receive data of several virtual scenes generated from the data of one real scene, and the second-type operation control sub-module comprises a virtual scene switching unit configured to receive a tenth operation instruction from an operator and, according to it, switch among the virtual scenes in real time.
In the foregoing directing device, the real scene receiving module further comprises a first time information receiving unit for receiving time information corresponding to the data of the real scene, the data of the real scene being dynamic data of multiple frames. The virtual scene control module comprises a time information sending unit for sending the time information to the virtual scene generation platform so that it generates multiple frames of virtual scene data carrying time marks corresponding to the multi-frame real scene data. The virtual scene receiving module is specifically configured to receive the time-marked data of the virtual scene, and the mixing processing module is specifically configured to mix and superimpose corresponding frames of the time-marked virtual scene data and the multi-frame real scene data according to the time information, thereby correcting offsets and keeping the virtual scene frame-synchronized with the real scene.
The aim of the invention is also achieved by the following technical solution. The avatar directing system provided by the invention comprises a real scene platform, a virtual scene generation platform and any of the avatar directing devices described above. The real scene platform is communicatively connected to the directing device and is used to acquire data of a real scene and send it to the directing device, or to send preset real scene data to the directing device; the virtual scene generation platform is communicatively connected to the directing device and is used to generate data of a virtual scene from the data provided by the directing device and send it back to the directing device.
The object of the invention can be further achieved by the following technical measures.
In the foregoing directing system, the real scene platform comprises a motion capture device for collecting, in real time, the coordinate parameters and/or state parameters of real objects in the real scene.
Compared with the prior art, the invention has clear advantages and beneficial effects. With the above technical solution, the avatar directing method, device and system of the invention provide at least the following advantages:
(1) the virtual scene can be controlled in real time: objects such as lights and props in the virtual scene can be added, deleted and adjusted on the fly, and camera positions can be switched dynamically with dolly, pan and similar camera operations;
(2) by linking objects in the real scene with objects in the virtual scene, the parameters of real objects can be mapped into the virtual scene in real time, keeping the effects in the virtual scene synchronized with those in the real scene;
(3) sound is synchronized, ensuring that the sound of the real scene and of the virtual scene plays without delay;
(4) video effects can be processed and modified in real time: after the rendered picture of the virtual scene is received, it can be superimposed and mixed with the video signal of the real scene, and it can also be processed and optimized in real time.
The foregoing is only an overview of the technical solution of the invention. In order that the technical means of the invention may be understood more clearly and implemented in accordance with the description, preferred embodiments are described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a flowchart of an avatar directing method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an avatar directing method according to an embodiment of the present invention;
fig. 3 is a block diagram of an avatar directing device according to an embodiment of the present invention.
Detailed Description
To further explain the technical means the invention adopts to achieve its aims and their effects, specific implementations, structures, features and effects of the avatar directing method, device and system according to the invention are described in detail below with reference to the accompanying drawings and preferred embodiments.
When producing audio-visual content for an avatar, performing a program or broadcasting live, besides having actors behind the scenes perform the avatar, a real scene must be set up, and one or more virtual scenes superimposed on that real scene must be built on a dedicated computer platform.
Fig. 1 is a schematic flow chart of one embodiment of the avatar directing method of the present invention. Fig. 2 is a schematic diagram of an avatar directing method according to an embodiment of the present invention. Referring to fig. 1 and 2, the avatar directing method of the invention mainly comprises the following steps:
Step S11: receive data of a real scene. Optionally, the data of the real scene is dynamic data of multiple frames that changes in real time. A real scene contains one or more real objects. Optionally, the real objects include one or more of a real background (or real environment), real cameras, real lights, real persons, real props, real costumes, real special effects and the like, where the real persons include hosts, guests and spectators in the real scene, and the real special effects include physical effect devices and the effects they produce, such as stage smoke, fireworks, stage snow and stage flames. Optionally, the data of the real scene includes picture data and sound data of the real scene.
Step S12: send the data of the real scene to the virtual scene generation platform so that it generates data of a virtual scene from the data of the real scene. Optionally, the virtual scene includes a virtual background, virtual cameras, virtual lights, avatars, virtual props, virtual costumes, virtual special effects and the like. Notably, an avatar is not derived from the real persons in the real scene; it is generated in real time by animation techniques from the movements, expressions, voice and other information of the avatar's actor, who stays behind the scenes. Optionally, the data of the virtual scene includes rendered picture data of the virtual scene and generated sound data of the virtual scene. Optionally, the virtual scene generation platform is an array of rendering machines, also called a rendering platform, used to generate virtual scenes.
Step S13: send a control signal to the virtual scene generation platform to control the generation of the data of the virtual scene and adjust the rendering of the virtual scene.
Optionally, the control signal includes an instruction signal and a frame synchronization signal. The instruction signal is a command that intervenes in or controls the objects in the virtual scene, and is used to adjust their states and rendering parameters in real time. The frame synchronization signal is a millisecond-level time signal sent to the virtual scene at a fixed interval; after rendering each frame, the virtual scene returns the rendered picture together with the time signal, which is used for offset correction so that the picture of the virtual scene can be aligned exactly with the real scene.
Step S14: receive the data of the virtual scene generated by the virtual scene generation platform.
Step S15: mix and superimpose the data of the virtual scene and the data of the real scene to obtain the performance data. Optionally, the performance data is multimedia data comprising audio-visual signals. Optionally, the data of the virtual scene, the data of the real scene or the result of the superimposition is processed before or after superimposition, including filtering the data, adjusting parameters and the like.
Step S16: output the performance data. Optionally, the processed performance data is output to a media platform, or is output directly to a database and stored as material for later production. Media platforms include, but are not limited to, online media such as broadcast television stations and live-streaming platforms, and offline venues such as live performance stages, concerts, theatres and exhibitions; databases include, but are not limited to, local databases and cloud databases.
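As a reading aid, the sketch below strings steps S11 to S16 into a single processing loop. It is a minimal illustration in Python and not part of the patent; every class, method and field name (DirectorPlatform, receive_rendered and so on) is hypothetical.

```python
# Hypothetical skeleton of the director platform loop (steps S11-S16).
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp_ms: int   # time mark used later for frame synchronization
    pixels: bytes       # picture data
    audio: bytes        # sound data

class DirectorPlatform:
    def __init__(self, real_source, render_platform, output_sink):
        self.real_source = real_source          # real scene platform
        self.render_platform = render_platform  # virtual scene generation platform
        self.output_sink = output_sink          # media platform or database
        self.command_queue = []                 # operator instructions, if any

    def run_once(self):
        real = self.real_source.receive()                       # S11: real scene data
        self.render_platform.send_scene_data(real)              # S12: forward to renderer
        self.render_platform.send_control(self.command_queue)   # S13: control signals
        self.command_queue = []
        virtual = self.render_platform.receive_rendered()       # S14: rendered virtual frame
        performance = self.mix(real, virtual)                   # S15: mix and superimpose
        self.output_sink.write(performance)                     # S16: output performance data

    def mix(self, real: Frame, virtual: Frame) -> Frame:
        # Placeholder: real mixing is coordinate- and timestamp-aware (see below).
        return Frame(real.timestamp_ms, virtual.pixels, real.audio)
```

One call to run_once corresponds to one frame of the workflow; an implementation would run it continuously and fill command_queue from the operator's graphical interface.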
It should be noted that in some embodiments the avatar directing method of the invention is implemented by a piece of software or a set of software running on one or more carriers connected in a certain topology, in order to control and adjust the virtual scene while producing avatar video, broadcasting a program or performing live. The software or its carrier may be called the director platform. Carriers include, but are not limited to, desktop computers, tablet computers, mobile phones, embedded platforms and other terminals. An operator of the director platform operates the software through input devices including keyboard and mouse, gamepad, touch screen and the like. Operators are users of the director platform and include directors, actors, special-effect engineers, lighting engineers and other audio-visual production staff.
In some embodiments, step S11 comprises: receiving one or more of real picture data, real sound data, state parameters and coordinate parameters of the real scene; other resource data of the real scene beyond picture and sound data may also be received. Step S12 comprises: sending one or more of the received real picture data, real sound data, state parameters and coordinate parameters to the virtual scene generation platform, so that one or more virtual scenes superimposed on the real scene can be built from them.
Wherein the real picture data includes, but is not limited to, one or more of a picture of a real scene, a picture of a prerecorded real scene, a picture of a pre-processed real scene, which is acquired in real time through a real lens such as an imaging device. The pictures of the real scene comprise pictures of real background, real lamplight, real props, real special effects, real characters such as host guest spectators and the like in the real scene.
The real sound data includes, but is not limited to, one or more of the sound of an actor of an avatar, the sound of a prop, special effect or character in a real scene, a pre-recorded sound. It should be noted that the actors of the avatar are typically behind the scenes and not in the real scene.
The state parameters represent the state of each real object in the real scene. They include, but are not limited to, one or more of the state parameters of the real lights, real cameras, real background, real props, real costumes and real special effects. Note that different kinds of objects may have different types of state parameters. Optionally, the state parameters of a real light include color, projection size, projection style and the like; the state parameters of a real camera include focal length, aperture and model, as well as motion parameters such as linear velocity, linear acceleration, angular velocity and angular acceleration; the state parameters of the real background, real props and real costumes include color, material and the like; and the state parameters of a real special effect include its start or interrupt state, its start or interrupt time, its style and so on.
The coordinate parameters include, but are not limited to, one or more of the position, orientation and size of a real object in the real scene. Note that motion parameters such as linear velocity, linear acceleration, angular velocity and angular acceleration may also be treated as coordinate parameters.
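The state and coordinate parameters above can be pictured as a small data model. The patent prescribes no field names or units, so everything in this sketch is an assumption:

```python
# Illustrative data model for real objects and their parameters.
from dataclasses import dataclass, field
from typing import Optional

Vec3 = tuple[float, float, float]

@dataclass
class CoordinateParams:
    position: Vec3                            # in the shared stage coordinate system
    orientation: Vec3                         # e.g. Euler angles in degrees
    size: Vec3
    linear_velocity: Optional[Vec3] = None    # motion parameters may also be
    angular_velocity: Optional[Vec3] = None   # treated as coordinate parameters

@dataclass
class RealObject:
    kind: str                                  # "light", "camera", "prop", ...
    coords: CoordinateParams
    state: dict = field(default_factory=dict)  # type-specific state parameters

# A real light and a real camera, each with type-specific state parameters:
spotlight = RealObject(
    "light",
    CoordinateParams((1.0, 3.0, 0.5), (0.0, -45.0, 0.0), (0.3, 0.3, 0.3)),
    state={"color": "#FFD9A0", "projection_size": 2.0, "projection_style": "spot"},
)
camera = RealObject(
    "camera",
    CoordinateParams((0.0, 1.6, -4.0), (0.0, 0.0, 0.0), (0.2, 0.2, 0.3)),
    state={"focal_length_mm": 35, "aperture": 2.8},
)
```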
In an alternative embodiment, the coordinate parameters and/or state parameters received in step S11 are preset. Specifically, before step S11, the avatar directing method of this example further comprises: discussing the script in advance and, from the script, determining the preset coordinates and states of real objects such as the background, props, costumes and persons in the real scene, together with their changes during the performance, to obtain layout data of the real scene, which is stored in a local database in advance. Step S11 then simply reads the layout data, which can be modified as needed during use.
In another alternative embodiment, the coordinate parameters and/or state parameters received in step S11 are acquired in real time; for example, the coordinate parameters and some of the state parameters may be acquired by motion capture, and state parameters may be transferred in real time over a communication connection. Specifically, before step S11, the method further comprises: attaching reflective markers in advance to the real objects whose data is to be acquired. Step S11 then comprises: acquiring the coordinate parameters of the real objects with optical motion capture equipment, receiving the coordinate parameters captured in real time, and receiving the state parameters of the real objects over communication connections to those objects.
In yet another alternative embodiment, the coordinate parameters and/or state parameters received in step S11 may be partly preset and partly acquired in real time; for example, initial values of the coordinate parameters may be preset while their real-time values during the performance are captured live.
In some embodiments, the avatar directing method comprises: taking a designated position in the real scene as the origin in advance, choosing a length unit, and establishing a three-dimensional coordinate system; determining, in that coordinate system, the position, orientation, size and other coordinate parameters of every object in the real scene, using a pre-agreed layout diagram of the real scene or actual measurement; setting the origin and coordinate units of the virtual scene to be identical to those of the real scene; mapping all objects in the real scene into the virtual scene; and then rendering the virtual scene in three dimensions from the data obtained from the real scene to obtain a series of data.
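Because the virtual scene is given the same origin and length unit as the real scene, mapping coordinates into it is essentially an identity transform. A minimal sketch, with a hypothetical unit_scale for the case where the units differ:

```python
def map_position_to_virtual(real_position, unit_scale=1.0):
    """Map a real-scene position (x, y, z) into virtual-scene coordinates.

    With identical origins and units (unit_scale == 1.0) the coordinates
    carry over unchanged."""
    x, y, z = real_position
    return (x * unit_scale, y * unit_scale, z * unit_scale)

# Identical origin and unit: the mapped position equals the real position.
assert map_position_to_virtual((1.5, 0.0, -2.0)) == (1.5, 0.0, -2.0)
```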
In some embodiments, step S12 comprises: sending the coordinate parameters and state parameters of the real objects to the virtual scene generation platform so that it maps one or more real objects into the virtual scene in real time according to those parameters, obtaining the virtual objects corresponding to the real objects. Note that not all real objects in the real scene need be mapped into the virtual scene; only some of them may be mapped. Optionally, the coordinate parameters and state parameters of a virtual object in the virtual scene are set to be identical to those of the corresponding real object.
As an alternative embodiment, step S12 specifically comprises: sending the coordinate parameters and state parameters of the real props, real costumes and real background to the virtual scene generation platform so that it maps them into the virtual scene in real time, obtaining the corresponding virtual props, virtual costumes and virtual background; sending the coordinate parameters and state parameters of the real lights so that it maps the real lights into the virtual scene in real time, obtaining the corresponding virtual lights; and sending the coordinate parameters and state parameters of the real cameras so that it maps the real cameras into the virtual scene in real time, obtaining the corresponding virtual cameras.
In some embodiments, the control signal in step S13 may be a control signal sent automatically by the director platform, or an operation instruction issued by a human operation. Optionally, step S13 specifically comprises: receiving a first type of operation instruction from an operator (or user) and sending it to the virtual scene generation platform to control the generation of the data of the virtual scene. With the avatar directing method of the invention, during avatar video production, program broadcast or live performance, operators may manually send control signals to the virtual scene generation platform through the director platform, adjusting and intervening in the state and rendering parameters of the virtual scene in real time.
In some embodiments, the processing in step S15 may be performed automatically by the director platform in a preset manner, or according to manual operations. Optionally, step S15 specifically comprises: receiving a second type of operation instruction from an operator and, according to it, processing the data of the virtual scene before superimposition, the data of the real scene before superimposition, or the result of the superimposition, to obtain the performance data to be output. With the avatar directing method of the invention, operators may manually adjust and intervene in the performance data to be output, in real time, through the director platform during avatar video production, program broadcast or live performance.
It should be noted that the invention does not limit how an operator issues operation instructions; a variety of interactions with the director platform may be used. As one option, a controllable graphical interface is preset and provided to the operator, who issues operation instructions to the director platform by operating it. For example, the graphical interface is preset with several graphical controls, and the operator issues the first or second type of operation instruction by operating each control.
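The patent leaves the transport between the graphical interface and the virtual scene generation platform open. Purely as an illustration, an operation instruction could be serialized as a small JSON command; every field name below is an assumption:

```python
import json

def make_instruction(kind: int, target: str, action: str, **params) -> str:
    """Build a first-type operation instruction (kinds 1..6 as in the text)."""
    return json.dumps({
        "instruction_type": kind,  # e.g. 4 = control of virtual lights
        "target": target,          # identifier of the addressed virtual object
        "action": action,          # e.g. "set_state", "add", "delete"
        "params": params,
    })

# Fourth operation instruction: recolor a virtual light in real time.
msg = make_instruction(4, "light_01", "set_state",
                       color="#FF2040", projection_size=1.5)
```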
As an alternative embodiment, step S13 specifically comprises: receiving a first operation instruction from an operator and sending it to the virtual scene generation platform to control the virtual objects in the virtual scene. This includes switching and adjusting virtual props, virtual costumes and the virtual background in real time, and adding or deleting virtual props, virtual costumes and virtual backgrounds in the virtual scene in real time.
As an alternative embodiment, step S13 specifically comprises: receiving a second operation instruction from an operator and sending it to the virtual scene generation platform to add one or more virtual special effects to the virtual scene in real time; and receiving a third operation instruction from an operator and sending it to the platform to control a virtual special effect, which includes adjusting in real time parameters such as its size, position, orientation and style and the time at which it starts or is interrupted. Optionally, the virtual special effects include three-dimensional effects such as fireworks and flames, and/or flat effects such as pictures, emoticon stickers, hint text and mosaic overlays.
As an alternative embodiment, step S13 specifically comprises: receiving a fourth operation instruction from an operator and sending it to the virtual scene generation platform to control the virtual lights in the virtual scene. This includes adjusting in real time state parameters such as the color, projection size and projection style of a virtual light, adjusting its coordinate parameters, masking a light, or adding or deleting lights in the virtual scene in real time, so as to modify the rendering of the virtual scene on the fly and cast light from any angle.
As an alternative embodiment, step S13 specifically comprises: receiving a fifth operation instruction from an operator and sending it to the virtual scene generation platform to control the virtual cameras in the virtual scene. Controlling a virtual camera includes adjusting in real time state parameters such as its focal length, aperture and motion parameters, adjusting its coordinate parameters, masking a camera, switching dynamically among several virtual cameras to select the live camera, processing a virtual camera, and adding or deleting cameras in the virtual scene, so as to achieve dolly, pan, tilt, track and handheld-style camera effects with the virtual cameras and obtain angles unreachable in the real scene, thereby solving the long-standing film-production problem that a specified shot cannot be filmed because of camera-position dead angles or operator shake. Note that not only can a real camera be mapped to a corresponding virtual camera in the virtual scene, but cameras absent from the real scene can also be added to or removed from the virtual scene and adjusted arbitrarily.
As an alternative embodiment, step S13 specifically comprises: receiving a sixth operation instruction from an operator and sending it to the virtual scene generation platform so that it manages the avatars according to the instruction. Managing avatars includes one or more of assigning and switching the correspondence between the actors of one or more avatars and one or more avatars, repairing a faulty avatar, and showing or hiding all or part of an avatar. With the avatar directing method of the invention, by managing the correspondence between actors and avatars, one or more different virtual characters can be assigned to an actor as the program requires, so that a single actor can perform several avatars at once.
In particular, avatars may be managed by managing the motion capture data. In some embodiments, managing the avatar specifically includes one or more of the following operations (a short repair sketch follows the list):
mapping of motion capture data, allowing the data to be mapped onto different models;
repair of motion capture data: frames or body parts may be lost owing to the on-site environment, so the motion capture data is repaired;
masking of motion capture data, allowing the data of a given body part to be blocked.
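As an illustration of the repair operation, the sketch below fills dropped motion capture samples by linear interpolation between the nearest captured neighbours; production systems use far more robust filtering, and the data layout here is an assumption:

```python
def repair_channel(samples):
    """Fill gaps (None values) in one motion capture channel by linear
    interpolation between the nearest known samples."""
    out = list(samples)
    known = [i for i, v in enumerate(out) if v is not None]
    for a, b in zip(known, known[1:]):
        for i in range(a + 1, b):       # interpolate inside each gap
            t = (i - a) / (b - a)
            out[i] = out[a] * (1 - t) + out[b] * t
    return out

print(repair_channel([0.0, None, None, 3.0]))  # -> [0.0, 1.0, 2.0, 3.0]
```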
It should be noted that in some embodiments step S13 includes one or more of the steps of controlling the objects in the virtual scene, controlling the virtual special effects, controlling the virtual lights, controlling the virtual cameras and managing the avatars.
Further, in some embodiments, step S14 comprises: receiving rendered picture data of the virtual scene, rendered by the virtual scene generation platform from one or more designated virtual cameras.
After step S14, the processing of step S15 specifically includes one or more of the following steps:
receiving a seventh operation instruction from an operator and, according to it, selecting the rendered picture data to use from several streams of rendered picture data, or the real picture data to use from several streams of real picture data, so that the operator can decide autonomously, according to the needs of the performance, which picture to use as the output;
receiving an eighth operation instruction from an operator and, according to it, adding a transition animation between streams of rendered picture data, between streams of real picture data, or between rendered picture data and real picture data, so that a transition effect is shown when pictures or cameras are switched. Transition animations include cuts, mixes, gradual changes and the like. Further, several transition animations are preset, and adding, deleting, changing and inspecting the preset transitions is allowed; during use, a transition is added by designating one of the preset transition animations for a pair of pictures, so that the designated transition is applied directly when the pictures are switched.
It should be noted that the several streams of rendered picture data may be obtained from several virtual cameras or from a single virtual camera, and that the several streams of real picture data include one or more of: real picture data from several real cameras, several streams of real picture data from one real camera, several pre-recorded pictures of the real scene and several pre-processed pictures of the real scene.
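A gradual-change transition can be pictured as a crossfade between two picture feeds. A minimal NumPy sketch, assuming both feeds share one resolution (the function and variable names are illustrative):

```python
import numpy as np

def crossfade(frame_a: np.ndarray, frame_b: np.ndarray, t: float) -> np.ndarray:
    """Blend from frame_a (t = 0.0) to frame_b (t = 1.0)."""
    t = min(max(t, 0.0), 1.0)   # clamp the transition progress
    mixed = (1.0 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
    return mixed.astype(np.uint8)
```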
In some embodiments, the superimposition in step S15 comprises: mixing and superimposing the data of the virtual scene with the data of the real scene according to the coordinate parameters of the real scene and the coordinate parameters of the virtual scene to obtain the performance data. As an alternative embodiment, step S15 specifically comprises: mixing and superimposing the real picture data and the rendered picture data according to the coordinate parameters of the real objects in the real scene and of the virtual objects in the virtual scene, obtaining performance picture data; step S16 then comprises outputting the performance picture data.
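Because real and virtual objects share one coordinate system, the superimposition can be pictured as alpha-compositing the rendered frame over the real frame. A sketch under the assumption that the renderer delivers RGBA picture data:

```python
import numpy as np

def superimpose(real_rgb: np.ndarray, virt_rgba: np.ndarray) -> np.ndarray:
    """Alpha-composite a rendered RGBA frame over a real RGB frame of the
    same resolution, yielding one frame of performance picture data."""
    alpha = virt_rgba[..., 3:4].astype(np.float32) / 255.0
    mixed = virt_rgba[..., :3] * alpha + real_rgb * (1.0 - alpha)
    return mixed.astype(np.uint8)
```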
In some embodiments, step S11 specifically comprises receiving the real sound data of the real scene, and before step S12 the method further comprises receiving material sound data. The material sound data includes audio beyond the real sound data, such as one or more of background sound, music, pre-cut sound, special-effect sound and collision sound. During rendering, the virtual scene generation platform sends a data request to the director platform. The data request, which may also be called a pull signal, is issued automatically by the virtual scene generation platform and records the data it requires. Step S12 then specifically comprises: receiving the data request, replying to it, and sending the required real sound data and/or material sound data to the virtual scene generation platform according to the request, so that the platform performs visual rendering in the virtual scene from that sound data, or generates virtual scene sound data from it. Step S14 specifically comprises: receiving the virtual scene sound data processed by the virtual scene generation platform. Step S15 specifically comprises: once the real sound data and the virtual scene sound data have both been obtained, mixing and processing them to obtain the performance sound data; optionally, the processing includes adjusting the volume or pitch of the two audio paths and reducing noise. Step S16 specifically comprises: outputting the performance sound data. Optionally, the sound data of the real scene and of the virtual scene are connected simultaneously, ensuring that the performance sound data synthesized in real time carries the sound of both sides without offset or error.
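The final step, mixing real sound data with virtual scene sound data into performance sound data, can be pictured as gain-weighted addition of two PCM tracks. A minimal sketch assuming equal-length int16 tracks; a real mixer would also resample, denoise and adjust pitch as noted above:

```python
import numpy as np

def mix_sound(real: np.ndarray, virtual: np.ndarray,
              real_gain: float = 1.0, virtual_gain: float = 1.0) -> np.ndarray:
    """Mix two int16 PCM tracks into one performance sound track."""
    mixed = real.astype(np.int32) * real_gain + virtual.astype(np.int32) * virtual_gain
    return np.clip(mixed, -32768, 32767).astype(np.int16)  # avoid overflow
```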
Visual rendering according to the real sound data and/or the material sound data means that the virtual scene generation platform renders the virtual scene using the real sound data and/or the material sound data as raw material. In some examples, one or more of the following scenarios are included (a sketch follows this list):

generating or adjusting the mouth shape of the avatar according to the voice of the avatar's actor in the real sound data, so that the avatar's mouth shape matches the speech; for example, the avatar's mouth opens and closes with the presence or absence of sound;

adjusting the brightness, the color, and/or the background color of the lights in the virtual scene according to the real sound data; for example, the lights brighten and their color lightens when the sound is loud, and the lights dim and their color darkens when the sound is soft.
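A minimal sketch of such audio-driven rendering, assuming a short-time RMS amplitude as the driving signal; the mapping curves and the parameter names (`mouth_openness`, `light_brightness`) are illustrative, not taken from the patent:

```python
import numpy as np

def rms_amplitude(pcm_frame: np.ndarray) -> float:
    """Short-time RMS of one int16 PCM frame, normalized to [0, 1]."""
    x = pcm_frame.astype(np.float32) / 32768.0
    return float(np.sqrt(np.mean(x * x)))

def drive_visuals(pcm_frame: np.ndarray) -> dict:
    """Map the audio level to a mouth-openness and a light-brightness value.

    The returned values would be forwarded to the virtual scene generation
    platform as rendering parameters (the names here are hypothetical).
    """
    level = rms_amplitude(pcm_frame)
    return {
        "mouth_openness": min(1.0, level * 4.0),   # opens wider with louder speech
        "light_brightness": 0.3 + 0.7 * level,     # dim floor, brightens with sound
    }
```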
Optionally, the virtual scene sound data of the rendered virtual scene includes first virtual scene sound data and second virtual scene sound data. The second virtual scene sound data covers sounds whose playing times are varied and uncertain, such as special-effect sounds and collision sounds, whose playing time points are difficult to judge from outside. The second virtual scene sound data is therefore played directly by the virtual scene generation platform, which sends only the first virtual scene sound data to the director platform.
In some embodiments, the processing in step S15 specifically includes: receiving a ninth operation instruction of an operator, and adjusting the virtual scene sound data before mixing, the real sound data before mixing, or the mixed performance sound data according to the ninth operation instruction. Adjusting the sound data includes adding sound effects in real time, adjusting playback parameters, and inserting or interrupting playback midway. Playback parameters include playback speed, volume, pitch, and the like.
In some embodiments, the virtual scene generation platform is used to generate data for multiple virtual scenes. Optionally, the data of the multiple virtual scenes is generated from the data of one real scene. Step S14 then includes receiving the data of the multiple virtual scenes so that all of them are accessible simultaneously. The processing in step S15 includes: receiving a tenth operation instruction of an operator, and switching among the multiple virtual scenes in real time according to the tenth operation instruction. With the avatar directing method of the invention, the real scene can conveniently be mixed and superimposed with different virtual scenes, avoiding the time and labor of rearranging physical sets.
In practice, especially where the data of the real scene is dynamic multi-frame data, there may be a time offset between the data of the real scene and the data of the virtual scene; the two are not necessarily fully synchronous. For example, there may be an offset between the rendered picture data and the real picture data obtained by shooting. Time information can therefore be assigned to the data of both scenes, and during the hybrid superimposition the real-scene data and virtual-scene data corresponding to the same time information are mixed, achieving synchronization and alignment. Specifically, in some embodiments, step S11 further includes receiving time information corresponding to the data of the real scene, the data of the real scene being dynamic multi-frame data. Step S13 includes sending the time information to the virtual scene generation platform, so that when generating the virtual-scene data the platform generates multi-frame virtual-scene data carrying time stamps corresponding to the multi-frame real-scene data. Step S14 includes receiving the time-stamped data of the virtual scene. Step S15 includes mixing and superimposing corresponding frames of the time-stamped multi-frame virtual-scene data and the multi-frame real-scene data according to the time information. With this method, the two data paths are aligned by the time information and merged into one path after alignment, so that the offset of the virtual scene is corrected and frame synchronization is achieved.
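The alignment just described can be sketched as follows; the buffer size, the tolerance, and the data structures are assumptions for illustration, not the claimed implementation:

```python
from collections import deque

class FrameAligner:
    """Pair real-scene frames with virtual-scene frames by timestamp."""

    def __init__(self, tolerance_ms: float = 10.0):
        self.tolerance_ms = tolerance_ms
        self.virtual_buffer: deque = deque(maxlen=120)  # (timestamp_ms, frame)

    def push_virtual(self, timestamp_ms: float, frame) -> None:
        self.virtual_buffer.append((timestamp_ms, frame))

    def match_real(self, timestamp_ms: float, real_frame):
        """Return (real_frame, virtual_frame) when a close-enough match exists."""
        best = min(self.virtual_buffer,
                   key=lambda tv: abs(tv[0] - timestamp_ms),
                   default=None)
        if best is not None and abs(best[0] - timestamp_ms) <= self.tolerance_ms:
            return real_frame, best[1]
        return None  # no aligned virtual frame yet; the caller may wait or drop
```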
Alternatively, the assignment of time information to the data of the real scene and of the virtual scene may be implemented in various ways. For example, a time node/timestamp field marking the acquisition time may be written into each frame of real-scene data, and the same field written into the corresponding frame of virtual-scene data; alternatively, the time information may be a frame synchronization signal emitted by a frame synchronizer, with which the virtual scene and the real scene are aligned. Specifically, as an alternative embodiment, step S11 further includes receiving frame synchronization signals transmitted from the real scene at a fixed time interval, the data of the real scene being dynamic multi-frame data and each frame synchronization signal corresponding to one frame of real-scene data. The frame synchronization signal is typically on the order of milliseconds; since it marks time, it may also be called a time signal. Step S13 specifically includes: after receiving a frame synchronization signal, pushing it to the virtual scene generation platform in a specific format. After rendering each frame, the virtual scene generation platform returns the rendered multi-frame virtual-scene data together with the frame synchronization signal. Step S14 specifically includes: while receiving the multi-frame virtual-scene data, receiving the corresponding frame synchronization signal returned by the platform, so that the time node of each frame of virtual-scene data is marked by the signal. Step S15 specifically includes: after receiving the virtual-scene data and the frame synchronization signals, aligning the multi-frame virtual-scene data with the multi-frame real-scene data according to the signals, and then performing the hybrid superimposition.
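A minimal sketch of this round trip, under the assumption that the frame synchronization signal can be modeled as a monotonically increasing integer ID which the rendering platform echoes back with each rendered frame (the method names are illustrative):

```python
class FrameSyncMatcher:
    """Hold real frames keyed by sync ID until the renderer echoes the ID back."""

    def __init__(self):
        self.pending_real: dict[int, object] = {}

    def on_real_frame(self, sync_id: int, real_frame, push_to_renderer) -> None:
        # Buffer the real frame and forward its sync ID to the render platform.
        self.pending_real[sync_id] = real_frame
        push_to_renderer(sync_id)

    def on_virtual_frame(self, sync_id: int, virtual_frame):
        """Called when the platform returns a rendered frame with its echoed ID."""
        real_frame = self.pending_real.pop(sync_id, None)
        if real_frame is None:
            return None  # the real frame was already dropped or never seen
        return real_frame, virtual_frame  # aligned pair, ready to composite
```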
In some embodiments, multiple operators are allowed to participate in the avatar's video production process at the same time, using different input devices with multiple director platforms. Optionally, the director platforms are each responsible for different functional modules and do not interfere with one another, jointly shaping the final performance data output.
In some embodiments, step S16 includes the director platform pushing the performance data out at a given rate once it has been obtained. The push rate depends on the application scenario: a broadcast television platform may use 50 frames per second, while online variety shows or live streaming may use 30 or 60 frames per second.
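As a rough sketch of such rate-controlled push-out (the frame source and sink are placeholder callables; a real broadcast chain would pace output with hardware timing such as genlocked SDI rather than a sleep loop):

```python
import time

def push_performance_data(get_frame, send_frame, fps: float = 50.0) -> None:
    """Push composited frames out at a fixed rate, e.g. 50 fps for broadcast TV."""
    interval = 1.0 / fps
    next_deadline = time.monotonic()
    while True:
        frame = get_frame()          # latest composited performance frame
        if frame is None:
            break                    # production finished
        send_frame(frame)
        next_deadline += interval
        # Sleep only for the remaining slice so timing drift does not accumulate.
        time.sleep(max(0.0, next_deadline - time.monotonic()))
```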
Fig. 3 is a schematic structural view of an embodiment of the avatar directing apparatus 100 of the present invention. Referring to Figs. 2 and 3, the avatar directing apparatus 100 of the present invention mainly includes a real scene receiving module 110, a real scene sending module 120, a virtual scene control module 130, a virtual scene receiving module 140, a mixing processing module 150, and an output module 160.
The real scene receiving module 110 is configured to receive data of a real scene. A real scene contains one or more real objects. Optionally, the real objects include one or more of a real background (also called a real environment), real shots, real lights, real persons, real props, real costumes, real special effects, and the like.
The real scene sending module 120 is configured to send the data of the real scene to the virtual scene generation platform, so that the platform generates data of a virtual scene according to the data of the real scene. Optionally, the virtual scene includes a virtual background, virtual shots, virtual lights, avatars, virtual props, virtual costumes, virtual special effects, and the like.
The virtual scene control module 130 is configured to send a control signal to the virtual scene generation platform to control generation of data of the virtual scene.
The virtual scene receiving module 140 is configured to receive data of a virtual scene generated by the virtual scene generating platform.
The mixing processing module 150 is configured to mix and superimpose the data of the virtual scene and the data of the real scene to obtain the performance data. Optionally, the mixing processing module 150 is further configured to process, before or after the hybrid superimposition, the data of the virtual scene, the data of the real scene, or the result of the superimposition, including filtering the data, adjusting parameters, and the like.
The output module 160 is used for outputting the performance data.
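For orientation only, the six modules can be pictured as a per-frame pipeline. This sketch is not the claimed apparatus; the platform proxy and its method names are assumptions of the example:

```python
class AvatarDirectorApparatus:
    """Pipeline sketch of apparatus 100: modules 110-160 wired in order."""

    def __init__(self, platform, mixer, output_sink):
        self.platform = platform        # virtual scene generation platform proxy
        self.mixer = mixer              # compositing strategy (module 150)
        self.output_sink = output_sink  # e.g. a broadcast encoder (module 160)

    def run_one_frame(self, real_scene_data, control_signal):
        # Modules 110/120: receive real-scene data and forward it to the platform.
        self.platform.send_real_scene(real_scene_data)
        # Module 130: send the control signal steering virtual-scene generation.
        self.platform.send_control(control_signal)
        # Module 140: receive the generated virtual-scene data.
        virtual_scene_data = self.platform.receive_virtual_scene()
        # Module 150: mix and superimpose the two data paths into performance data.
        performance_data = self.mixer(real_scene_data, virtual_scene_data)
        # Module 160: output the performance data.
        self.output_sink(performance_data)
```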
In some embodiments, the real scene receiving module 110 is specifically configured to receive one or more of real picture data, real sound data, state parameters, and coordinate parameters of the real scene. The real picture data includes one or more of pictures of the real scene acquired in real time, pre-recorded pictures of the real scene, and pre-processed pictures of the real scene; the real sound data includes one or more of the voices of the avatars' actors, the sounds of props, special effects, or characters in the real scene, and pre-recorded sound; the state parameters of the real scene represent the states of real objects in the real scene; and the coordinate parameters include one or more of the position, orientation, and size of a real object in the real scene.
In some embodiments, the real scene sending module 120 is specifically configured to send the coordinate parameters and the state parameters of the real objects to the virtual scene generation platform, so that the platform maps one or more real objects into the virtual scene in real time according to those parameters, obtaining virtual objects corresponding to the real objects.
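Such a mapping can be sketched, assuming a simple scale-and-offset calibration between the tracked real stage and the virtual set (the transform and the field names are illustrative, not part of the claims):

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float
    yaw_deg: float  # orientation; a full system would carry a quaternion

def map_real_to_virtual(real: Pose,
                        scale: float = 1.0,
                        offset: tuple = (0.0, 0.0, 0.0)) -> Pose:
    """Map a tracked real-object pose into virtual-scene coordinates.

    `scale` and `offset` calibrate the real stage against the virtual set;
    both would come from a one-time calibration step in practice.
    """
    ox, oy, oz = offset
    return Pose(real.x * scale + ox,
                real.y * scale + oy,
                real.z * scale + oz,
                real.yaw_deg)
```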
In some embodiments, the virtual scene control module 130 includes a first-type operation control submodule (not shown), which is configured to receive a first-type operation instruction of an operator and send it to the virtual scene generation platform to control the generation of the virtual-scene data.
In some embodiments, the mixing processing module 150 includes a second-type operation control submodule (not shown), which is configured to receive a second-type operation instruction of an operator and, according to it, process the data of the virtual scene before the hybrid superimposition, the data of the real scene before the hybrid superimposition, or the result of the superimposition, so as to obtain the performance data to be output.
In some embodiments, the first-type operation control submodule of the virtual scene control module 130 includes one or more of a scene control unit, a special effect control unit, a light control unit, a lens control unit, and a character management unit, described in turn below (a routing sketch follows these descriptions).
The scene control unit is configured to receive a first operation instruction of an operator and send it to the virtual scene generation platform to control the virtual objects in the virtual scene. Controlling a virtual object includes switching and adjusting virtual props, virtual costumes, and virtual backgrounds in real time, and adding or deleting them in the virtual scene in real time.

The special effect control unit is configured to receive a second operation instruction of an operator and send it to the virtual scene generation platform to add one or more virtual special effects to the virtual scene in real time, and to receive a third operation instruction of the operator and send it to the platform to control the virtual special effects. Controlling a virtual special effect includes adjusting, in real time, its size, position, orientation, and style, and the timing at which it starts or is interrupted.

The light control unit is configured to receive a fourth operation instruction of an operator and send it to the virtual scene generation platform to control the virtual lights in the virtual scene. Controlling the virtual lights includes adjusting their state parameters in real time, adjusting their coordinate parameters, and adding or deleting lights in the virtual scene in real time; the state parameters of a virtual light include one or more of color, projection size, and projection style.

The lens control unit is configured to receive a fifth operation instruction of an operator and send it to the virtual scene generation platform to control the virtual lenses in the virtual scene. Controlling a virtual lens includes adjusting its state parameters, adjusting its coordinate parameters, and adding or deleting lenses in the virtual scene in real time; the state parameters of a virtual lens include one or more of focal length, aperture, and motion parameters, the motion parameters including one or more of linear velocity, linear acceleration, angular velocity, and angular acceleration.

The character management unit is configured to receive a sixth operation instruction of the operator and send it to the virtual scene generation platform to manage the avatars. Managing the avatars includes one or more of assigning and switching the correspondence between one or more actors and one or more avatars, repairing a malfunctioning avatar, and displaying or hiding all or part of an avatar.
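As announced above, here is a sketch of how the first-type operation control submodule might route the six numbered instructions; the integer encoding and channel names are assumptions, since the patent only specifies that each instruction is forwarded to the virtual scene generation platform:

```python
from typing import Any, Callable, Dict

class OperationDispatcher:
    """Route operator instructions (1..6) to the virtual scene platform."""

    def __init__(self, send: Callable[[str, Dict[str, Any]], None]):
        self.send = send  # transport to the virtual scene generation platform
        self.routes = {
            1: "scene_control",   # props / costumes / backgrounds
            2: "effect_add",      # add virtual special effects
            3: "effect_control",  # size, position, timing of effects
            4: "light_control",   # state and coordinates of virtual lights
            5: "lens_control",    # focal length, aperture, motion
            6: "avatar_manage",   # actor-avatar mapping, show/hide, repair
        }

    def dispatch(self, instruction_type: int, payload: Dict[str, Any]) -> None:
        channel = self.routes.get(instruction_type)
        if channel is None:
            raise ValueError(f"unknown operation instruction: {instruction_type}")
        self.send(channel, payload)
```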
In some embodiments, the virtual scene receiving module 140 is specifically configured to receive rendered picture data of the virtual scene rendered by the virtual scene generation platform based on one or more virtual shots.
In some embodiments, the second-type operation control submodule includes a rendered picture editing unit configured to perform one or more of the following steps (a crossfade sketch follows this list):

receiving a seventh operation instruction of an operator, and selecting the rendered picture data to be used from multiple pieces of rendered picture data, or the real picture data to be used from multiple pieces of real picture data, according to the seventh operation instruction;

receiving an eighth operation instruction of an operator, and adding a transition animation between multiple pieces of rendered picture data, between multiple pieces of real picture data, or between rendered picture data and real picture data according to the eighth operation instruction, so that a transition effect is applied when the picture or the shot is switched. Transition animations include cuts, mixes, and fades. Further, multiple transition animations may be preset and may be added, deleted, modified, or viewed; in use, a transition is added by designating one of the preset transition animations between two pictures, so that the designated animation is applied directly when the pictures are switched.
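A minimal sketch of one such transition, a linear crossfade (the "mixing" transition); the frame layout (uint8 RGB arrays of equal shape) is an assumption of the example:

```python
import numpy as np

def crossfade(frame_a: np.ndarray, frame_b: np.ndarray, t: float) -> np.ndarray:
    """Blend two same-sized uint8 frames; t runs 0.0 -> 1.0 over the transition."""
    t = float(np.clip(t, 0.0, 1.0))
    mixed = frame_a.astype(np.float32) * (1.0 - t) + frame_b.astype(np.float32) * t
    return mixed.astype(np.uint8)

# A hard cut is the t == 0/1 special case; a fade-through-black would blend
# each frame against a zero frame instead of against the other picture.
```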
In some embodiments, the mixing processing module 150 includes a picture mixing submodule configured to mix and superimpose the real picture data and the rendered picture data according to the coordinate parameters of the real objects and the coordinate parameters of the virtual objects to obtain performance picture data. The output module 160 includes a picture output unit for outputting the performance picture data.
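One common realization of this superimposition, offered here only as an illustrative sketch, is alpha compositing of the rendered layer over the real picture; the assumption that the renderer delivers an alpha channel is the example's, not the claim's:

```python
import numpy as np

def composite_over(real_rgb: np.ndarray,
                   rendered_rgba: np.ndarray) -> np.ndarray:
    """Alpha-composite a rendered RGBA frame over the real RGB frame.

    Both frames must share height and width; the rendered alpha selects
    where virtual content (avatar, props, effects) covers the live picture.
    """
    alpha = rendered_rgba[..., 3:4].astype(np.float32) / 255.0
    out = (rendered_rgba[..., :3].astype(np.float32) * alpha
           + real_rgb.astype(np.float32) * (1.0 - alpha))
    return out.astype(np.uint8)
```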
In some embodiments, the avatar directing apparatus 100 further includes a material sound receiving module (not shown) for receiving material sound data, the material sound data including one or more of background sound, music, pre-edited sound, special-effect sound, collision sound, and other sound material used in editing. The real scene sending module 120 includes a real sound sending unit configured to: receive a data request sent by the virtual scene generation platform, and send the required real sound data and/or material sound data to the platform according to the request, so that the platform performs visual rendering in the virtual scene according to that sound data, or generates virtual scene sound data from it. The virtual scene receiving module 140 includes a virtual scene sound receiving unit for receiving the virtual scene sound data. The mixing processing module 150 includes a sound mixing submodule for mixing and processing the real sound data and the virtual scene sound data to obtain performance sound data. The output module 160 includes a sound output unit for outputting the performance sound data.
Visual rendering according to the real sound data and/or the material sound data means, as above, that the virtual scene generation platform renders the virtual scene using that sound data as raw material. In some examples, the virtual scene generation platform includes one or more of the following:

a first sound visualization unit for generating or adjusting the mouth shape of the avatar according to the voice of the avatar's actor in the real sound data, so that the avatar's mouth shape matches the speech, for example with the mouth opening and closing with the presence or absence of sound;

a second sound visualization unit for adjusting the brightness, the color, and/or the background color of the lights in the virtual scene according to the real sound data, for example brightening and lightening the lights when the sound is loud and dimming and darkening them when the sound is soft.
In some embodiments, the aforementioned second-type operation control submodule includes a sound control unit configured to receive a ninth operation instruction of an operator and, according to it, adjust the virtual scene sound data before mixing, the real sound data before mixing, or the performance sound data obtained by mixing. Adjusting the sound data includes adding sound effects in real time, adjusting playback parameters, and inserting or interrupting playback midway.
In some embodiments, the virtual scene receiving module 140 is specifically configured to receive data of multiple virtual scenes, optionally generated from the data of one real scene. The second-type operation control submodule then includes a virtual scene switching unit configured to receive a tenth operation instruction of an operator and switch among the multiple virtual scenes in real time according to it.
In some embodiments, the real scene receiving module 110 further comprises a first time information receiving unit for receiving time information corresponding to the data of the real scene, the data of the real scene being dynamic multi-frame data. The virtual scene control module 130 includes a time information sending unit for sending the time information to the virtual scene generation platform, so that the platform generates multi-frame virtual-scene data carrying time stamps corresponding to the multi-frame real-scene data. The virtual scene receiving module 140 is specifically configured to receive the time-stamped virtual-scene data. The mixing processing module 150 is specifically configured to mix and superimpose corresponding frames of the time-stamped multi-frame virtual-scene data and the multi-frame real-scene data according to the time information. In this way, the directing apparatus 100 of the present invention corrects the offset of the virtual scene and achieves frame synchronization.
Alternatively, the aforementioned time information is a frame synchronization signal emitted by a frame synchronizer at a fixed time interval. As an alternative embodiment, the real scene receiving module 110 specifically includes a first frame synchronization signal receiving unit for receiving frame synchronization signals transmitted from the real scene at a fixed time interval, the data of the real scene being dynamic multi-frame data and each frame synchronization signal corresponding to one frame of real-scene data. The virtual scene control module 130 specifically includes a frame synchronization signal sending unit for pushing the signal to the virtual scene generation platform. The virtual scene receiving module 140 specifically includes a second frame synchronization signal receiving unit for receiving, together with the multi-frame virtual-scene data, the corresponding frame synchronization signal returned by the platform, so that the time node of each frame of virtual-scene data is marked by the signal. The mixing processing module 150 is specifically configured to align the multi-frame virtual-scene data with the multi-frame real-scene data according to the frame synchronization signals and then perform the hybrid superimposition, correcting the offset of the virtual scene and achieving frame synchronization.
In some embodiments, multiple operators are allowed to use multiple avatar directing apparatuses 100 simultaneously, with different input devices, to participate in the avatar's video production process. Optionally, the apparatuses are responsible for different functional modules, so that they do not interfere with one another and jointly shape the final performance data output.
The invention also discloses an avatar directing system comprising a real scene platform, a virtual scene generation platform, and the avatar directing apparatus 100 of any of the foregoing embodiments. The real scene platform is communicatively connected to the directing apparatus 100, and the virtual scene generation platform is likewise communicatively connected to the directing apparatus 100.
The real scene platform is used for collecting data of the real scene and transmitting it to the directing apparatus 100, or for transmitting preset real-scene data to the directing apparatus 100. Optionally, the real scene platform includes devices in the real scene such as real camera lenses, microphones, lighting devices, and real special-effect devices.
Optionally, the real scene platform further comprises a motion capture device for acquiring real-time coordinate parameters and/or state parameters of real objects in the real scene. Specifically, an optical motion capture device may be used, which acquires real-time values of the coordinate parameters and of some state parameters by capturing the optical markers placed in advance in the real scene.
Optionally, the real scene platform further comprises a local database, which is used for storing preset coordinate parameters and state parameters.
The virtual scene generation platform is used for generating virtual-scene data according to the data provided by the directing apparatus 100, and for sending the resulting virtual-scene data back to the directing apparatus 100 for further processing. Optionally, the virtual scene generation platform is an array of rendering machines for generating virtual scenes.
Note that the data transmission methods involved in the steps of the foregoing embodiments and in the apparatus include, but are not limited to, wireless connections such as Wi-Fi and Bluetooth, and wired connections such as network cables, serial interfaces, parallel buses, SDI cables, and HDMI cables.
It should also be noted that the foregoing embodiments do not distinguish among data, signals, and information. For example, the performance data output in step S16 may take the form of a digital signal or of an analog signal.
The present invention is not limited to the above-described embodiments; any simple modifications, equivalent changes, and variations made to the above embodiments according to the technical substance of the present invention by those skilled in the art, without departing from the scope of the invention, remain within that scope.

Claims (15)

1. A method for guiding an avatar, applied to a guiding platform, the method comprising the steps of:
receiving data of a real scene;
transmitting the data of the real scene to a virtual scene generation platform so that the virtual scene generation platform generates data of a virtual scene according to the data of the real scene, wherein the data of the virtual scene comprises state parameters and rendering parameters of the virtual scene, and the state parameters of the virtual scene comprise virtual prop parameters, virtual costume parameters, and/or virtual background parameters;
transmitting a control signal to the virtual scene generation platform to control the virtual scene generation platform to adjust the state parameters and rendering parameters of the virtual scene according to the control signal and to generate adjusted data of the virtual scene, wherein the adjustment comprises adding or deleting state parameters of the virtual scene;
receiving data of the adjusted virtual scene generated by the virtual scene generation platform;
mixing and superimposing the adjusted data of the virtual scene and the data of the real scene to obtain performance data;
and outputting the performance data.
2. The avatar guiding method of claim 1, wherein,
the sending a control signal to the virtual scene generating platform to control the virtual scene generating platform to adjust the state parameters and the rendering parameters of the virtual scene according to the control signal, and generating the adjusted data of the virtual scene comprises:
receiving a first type of operation instruction of an operator, and sending the first type of operation instruction to the virtual scene generation platform to control the virtual scene generation platform to adjust the state parameters and rendering parameters of the virtual scene according to the control signal so as to generate adjusted virtual scene data;
the mixing and superimposing of the adjusted data of the virtual scene and the data of the real scene to obtain the performance data comprises the following step:

receiving a second type of operation instruction of an operator, and processing, according to the second type of operation instruction, the adjusted data of the virtual scene before the hybrid superimposition, the data of the real scene before the hybrid superimposition, or the result of the hybrid superimposition.
3. The avatar guiding method of claim 2, wherein the receiving data of the real scene includes: receiving one or more of real picture data, real sound data, state parameters, and coordinate parameters of the real scene;

wherein the real picture data comprises one or more of a picture of the real scene acquired in real time, a pre-recorded picture of the real scene, and a pre-processed picture of the real scene; the real sound data includes one or more of the voice of an actor of an avatar, the sound of a prop, special effect, or character in the real scene, and pre-recorded sound; the state parameters of the real scene are used for representing the states of real objects in the real scene; the coordinate parameters include one or more of a position, an orientation, and a size of the real object in the real scene; and the real scene comprises one or more of real lights, a real lens, a real background, a real prop, a real costume, a real character, and a real special effect.
4. The avatar guiding method of claim 3, wherein the transmitting the data of the real scene to a virtual scene generation platform for the virtual scene generation platform to generate the data of the virtual scene according to the data of the real scene comprises:

sending the coordinate parameters and the state parameters of the real scene to the virtual scene generation platform, so that the virtual scene generation platform maps one or more real objects of the real scene into the virtual scene in real time according to the coordinate parameters and the state parameters of the real scene to obtain virtual objects.
5. The avatar guiding method of claim 4, wherein receiving a first type of operation instruction of an operator and sending the first type of operation instruction to the virtual scene generation platform to control the virtual scene generation platform to adjust the state parameters and rendering parameters of the virtual scene according to the control signal and to generate adjusted data of the virtual scene includes one or more of the following steps:
receiving a first operation instruction of an operator, and sending the first operation instruction to the virtual scene generation platform to control the virtual objects in the virtual scene, including switching and adjusting virtual props, virtual costumes, and/or virtual backgrounds in real time, and adding or deleting the virtual props, the virtual costumes, and/or the virtual backgrounds in the virtual scene in real time;
receiving a second operation instruction of an operator, and sending the second operation instruction to the virtual scene generation platform to add one or more virtual special effects to the virtual scene in real time;
receiving a third operation instruction of an operator, and sending the third operation instruction to the virtual scene generation platform to control the virtual special effect, wherein the third operation instruction comprises the steps of adjusting the size, the position, the orientation, the style and the time for starting or interrupting the virtual special effect in real time;
receiving a fourth operation instruction of an operator, and sending the fourth operation instruction to the virtual scene generation platform to control virtual lamplight in the virtual scene, wherein the fourth operation instruction comprises the steps of adjusting state parameters of the virtual lamplight in real time, adjusting coordinate parameters of the virtual lamplight and/or adding or deleting lamplight in the virtual scene in real time; wherein the status parameters of the virtual light include one or more of color, projection size, and/or projection style;
receiving a fifth operation instruction of an operator, and sending the fifth operation instruction to the virtual scene generation platform to control virtual lenses in the virtual scene, including adjusting state parameters of the virtual lenses in real time, adjusting coordinate parameters of the virtual lenses, adding or deleting lenses in the virtual scene, and/or dynamically switching among a plurality of virtual lenses to select a camera position; wherein the state parameters of the virtual lens include one or more of focal length, aperture, and/or motion parameters;
receiving a sixth operation instruction of an operator, and sending the sixth operation instruction to the virtual scene generation platform to manage the avatar, including one or more of assigning and switching the correspondence between one or more actors and one or more of the avatars, repairing the avatar, and/or displaying or hiding all or a portion of the avatar.
6. The avatar guiding method of claim 5, wherein:
the receiving the data of the adjusted virtual scene generated by the virtual scene generating platform comprises receiving rendering picture data of the adjusted virtual scene rendered by the virtual scene generating platform based on one or more virtual shots;
the receiving a second type of operation instruction of an operator and processing, according to the second type of operation instruction, the adjusted data of the virtual scene before the hybrid superimposition, the data of the real scene before the hybrid superimposition, or the result of the hybrid superimposition includes one or more of the following steps:
receiving a seventh operation instruction of an operator, and selecting the rendering picture data to be used from a plurality of rendering picture data or selecting the real picture data to be used from a plurality of real picture data according to the seventh operation instruction;
receiving an eighth operation instruction of an operator, and adding a transition animation between a plurality of rendering picture data, between a plurality of real picture data, or between the rendering picture data and the real picture data according to the eighth operation instruction.
7. The avatar guiding method of claim 6, wherein the mixing and superimposing the adjusted data of the virtual scene with the data of the real scene to obtain performance data comprises:

mixing and superimposing the real picture data and the rendering picture data according to the coordinate parameters of the real scene and the coordinate parameters of the virtual object to obtain performance picture data.
8. The avatar guiding method of claim 3 or 4, wherein:
the method further comprises, before the step of sending the data of the real scene to a virtual scene generation platform, receiving material sound data, wherein the material sound data comprises one or more of background sound, music, pre-edited sound, special-effect sound, and collision sound;
the sending the data of the real scene to a virtual scene generation platform comprises the following steps:
receiving a data request sent by the virtual scene generation platform, and sending the required real sound data and/or the material sound data to the virtual scene generation platform according to the data request, so that the virtual scene generation platform performs visual rendering according to the real sound data and/or the material sound data, or generates adjusted virtual scene sound data according to the real sound data and/or the material sound data;
the receiving the adjusted virtual scene data generated by the virtual scene generation platform includes:
receiving the adjusted virtual scene sound data;
the mixing and superimposing of the adjusted data of the virtual scene and the data of the real scene to obtain the performance data comprises the following step:
and mixing and processing the real sound data and the adjusted virtual scene sound data to obtain the performance sound data.
9. The avatar guiding method of claim 8, wherein the receiving the second type of operation instruction of the operator and processing, according to the second type of operation instruction, the adjusted data of the virtual scene before the hybrid superimposition, the data of the real scene before the hybrid superimposition, or the result of the hybrid superimposition includes:

receiving a ninth operation instruction of an operator, and adjusting the adjusted virtual scene sound data before mixing, the real sound data before mixing, or the performance sound data obtained by mixing according to the ninth operation instruction, wherein the adjustment comprises adding sound effects, adjusting parameters of sound playback, or inserting or interrupting the playback of sound midway.
10. The avatar guiding method of claim 3 or 4, wherein:
the receiving the adjusted virtual scene data generated by the virtual scene generation platform includes:
receiving data of a plurality of adjusted virtual scenes generated from data of one of the real scenes;
the receiving the second type of operation instruction of the operator and processing, according to the second type of operation instruction, the adjusted data of the virtual scene before the hybrid superimposition, the data of the real scene before the hybrid superimposition, or the result of the hybrid superimposition includes:
and receiving a tenth operation instruction of an operator, and switching among the adjusted virtual scenes in real time according to the tenth operation instruction.
11. The avatar guiding method of claim 3, wherein:
the receiving data of the real scene further includes:
receiving time information corresponding to data of the real scene, wherein the data of the real scene is dynamic data of a plurality of frames;
the sending a control signal to the virtual scene generating platform to control the virtual scene generating platform to adjust the state parameters and the rendering parameters of the virtual scene according to the control signal, and generating the adjusted data of the virtual scene comprises:
transmitting the time information to the virtual scene generation platform so that the virtual scene generation platform generates multi-frame virtual scene data with time marks corresponding to multi-frame real scene data according to the time information;
the receiving the adjusted virtual scene data generated by the virtual scene generation platform includes:
receiving data of the virtual scene with the time stamp;
the mixing and superimposing of the adjusted data of the virtual scene and the data of the real scene to obtain the performance data comprises the following step:
mixing and superimposing corresponding frames between the time-stamped data of the virtual scene and the multi-frame data of the real scene according to the time information, so as to correct the offset of the virtual scene and achieve frame synchronization.
12. An avatar guiding apparatus applied to a guiding platform, the guiding apparatus comprising:
the real scene receiving module is used for receiving data of a real scene;
the real scene sending module is used for sending the data of the real scene to a virtual scene generation platform so that the virtual scene generation platform generates data of a virtual scene according to the data of the real scene, wherein the data of the virtual scene comprises state parameters and rendering parameters of the virtual scene, and the state parameters of the virtual scene comprise virtual prop parameters, virtual costume parameters, and/or virtual background parameters;
the virtual scene control module is used for sending a control signal to the virtual scene generation platform so as to control the virtual scene generation platform to adjust the state parameters and rendering parameters of the virtual scene according to the control signal to generate adjusted virtual scene data, wherein the adjustment comprises adding or deleting the state parameters of the virtual scene;
a virtual scene receiving module for receiving data of the adjusted virtual scene generated by the virtual scene generating platform;
the mixed processing module is used for mixing and overlapping the adjusted data of the virtual scene and the data of the real scene to obtain performance data;
and the output module is used for outputting the performance data.
13. An avatar guiding apparatus applied to a guiding platform, the guiding apparatus comprising modules or units for performing the steps of the method of any one of claims 1 to 11.
14. An avatar guiding system comprising a real scene platform, a virtual scene generation platform, and the avatar guiding apparatus of any one of claims 12 to 13; wherein the real scene platform and the virtual scene generation platform are each in communication connection with the guiding apparatus; the real scene platform is used for collecting data of a real scene and sending the data to the guiding apparatus, or for sending preset data of the real scene to the guiding apparatus; and the virtual scene generation platform is used for generating data of an adjusted virtual scene according to the data provided by the guiding apparatus and sending the data to the guiding apparatus.
15. The avatar guiding system of claim 14, wherein the real scene platform includes a motion capture device for capturing real-time coordinate parameters and/or state parameters of real objects in the real scene.
CN201910605279.9A 2019-07-05 2019-07-05 Virtual image guiding and broadcasting method, device and system Active CN110225224B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910605279.9A CN110225224B (en) 2019-07-05 2019-07-05 Virtual image guiding and broadcasting method, device and system


Publications (2)

Publication Number Publication Date
CN110225224A CN110225224A (en) 2019-09-10
CN110225224B (en) 2023-05-16






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100022 13 / F, 1212, building 16, 89 Jianguo Road, Chaoyang District, Beijing

Applicant after: Beijing xingludong Technology Co.,Ltd.

Address before: 100022 1507, 12 / F, building 8, courtyard 88, Jianguo Road, Chaoyang District, Beijing

Applicant before: Beijing Le Element Culture Development Co.,Ltd.

GR01 Patent grant