CN112446938A - Multi-mode-based virtual anchor system and method - Google Patents

Multi-mode-based virtual anchor system and method

Info

Publication number
CN112446938A
CN112446938A (Application CN202011377505.1A)
Authority
CN
China
Prior art keywords
unit
data
emotion
based virtual
atmosphere
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011377505.1A
Other languages
Chinese (zh)
Other versions
CN112446938B (en)
Inventor
王晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Space Shichuang Chongqing Technology Co ltd
Original Assignee
Chongqing Space Visual Creation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Space Visual Creation Technology Co ltd filed Critical Chongqing Space Visual Creation Technology Co ltd
Priority to CN202011377505.1A priority Critical patent/CN112446938B/en
Publication of CN112446938A publication Critical patent/CN112446938A/en
Application granted granted Critical
Publication of CN112446938B publication Critical patent/CN112446938B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/80 2D [Two Dimensional] animation, e.g. using sprites
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G06V40/176 Dynamic expression
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/02 Methods for producing synthetic speech; Speech synthesisers
    • G10L13/033 Voice editing, e.g. manipulating the voice of the synthesiser
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention belongs to the technical field of artificial intelligence, and in particular relates to a multimodal virtual anchor system and method. The system comprises: a modeling unit for creating a virtual anchor character model; an acquisition unit for collecting performance data, the performance data comprising motion data, expression data and voice data; an analysis unit for performing emotion analysis on the performance data to obtain the current emotion; a storage unit for storing pre-recorded video footage; and a synthesis unit for converting the captured sound into the anchor's synthesized voice data, associating the current emotion with that voice data to obtain the played speech, driving the virtual anchor character model to perform corresponding actions according to the motion data, and compositing the character model and the voice into the recorded footage. The system minimizes the losses a platform would otherwise suffer from the personal circumstances of its live streamers.

Description

Multi-mode-based virtual anchor system and method
Technical Field
The invention belongs to the technical field of artificial intelligence, and particularly relates to a multi-mode-based virtual anchor system and a multi-mode-based virtual anchor method.
Background
Watching live streams has become a habitual form of entertainment for many people, who are happy to unwind with a stream after a busy day of work and family life.
At present, popular live-streaming platforms depend mainly on star anchors, drawing traffic through their top performers. These human anchors, however, bring many uncontrollable factors rooted in personal circumstances: an anchor may move to a rival platform after gaining popularity, or may simply lack the energy to sustain high-quality streams over long periods. Once such factors turn negative, the platform's stream quality becomes unstable and the platform suffers losses.
A multimodal virtual anchor system and method are therefore needed that can keep a live-streaming platform's stream quality as stable as possible.
Disclosure of Invention
The object of the invention is to provide a multimodal virtual anchor system and method that keep a live-streaming platform's stream quality as stable as possible.
The basic scheme provided by the invention is as follows:
A multimodal virtual anchor system, comprising:
a modeling unit for creating a corresponding virtual anchor character model according to received character data;
an acquisition unit for collecting performance data of an operator, the performance data comprising motion data, expression data and voice data;
an analysis unit for performing emotion analysis on the performance data to obtain the current emotion;
a storage unit for storing pre-recorded video footage;
a synthesis unit for converting the captured sound into the anchor's synthesized voice data and associating the current emotion with that voice data to obtain the played speech; for driving the virtual anchor character model to perform corresponding actions according to the motion data; and for superimposing the character model and the voice onto the recorded footage to obtain the composited virtual anchor video.
Operating principle and beneficial effects of the basic scheme:
After the operator's performance data is collected, the system does not bind it directly to the virtual anchor model. Instead, the analysis unit performs emotion analysis on the performance data to obtain the current emotion, and the synthesis unit then associates that emotion with the preset voice data to produce the played speech. This strengthens the stability and consistency of the virtual anchor: every person's timbre and intonation differ, so if each operator's own speech, even after voice changing, were bound directly to the model, viewers would easily notice the difference whenever the operator changed. Such voice-driven differentiation tends to split the audience of a single virtual anchor into rival 'camps', each backing a different operator. That would be no real improvement over a human anchor; the personal influence of the anchor would merely become the personal influence of the operator.
In this application, because the played speech is generated from a preset voice profile keyed to the analyzed emotion, its timbre and intonation are fixed in advance and remain consistent even when different operators produce content at different times. The stability and consistency of the virtual anchor are thus guaranteed.
One virtual anchor can consequently be performed by several people in rotation, solving the problem that a human anchor's limited energy makes sustained, high-quality streaming difficult. At the same time, the consistency of the virtual anchor is preserved: even if an operator job-hops or otherwise leaves, the popularity of the virtual anchor, and hence the platform, is unaffected.
In conclusion, the system keeps the platform's stream quality as stable as possible.
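The unit workflow just described can be sketched as a minimal pipeline. Every class, function, and value below is a hypothetical illustration of the data flow, not code from the patent; a real system would use actual emotion classifiers, voice synthesis, and video compositing.

```python
from dataclasses import dataclass

@dataclass
class PerformanceData:
    motion: list       # captured body-motion keyframes
    expression: list   # captured facial-expression keyframes
    voice: bytes       # raw audio from the operator

def analyze_emotion(perf: PerformanceData) -> str:
    """Analysis unit: derive the current emotion from performance data.
    A real system would classify motion, expression, and prosody; here
    it is stubbed with a fixed label."""
    return "happy"

def synthesize_voice(raw_voice: bytes, emotion: str) -> bytes:
    """Synthesis unit: re-render the operator's speech with the anchor's
    preset timbre, colored by the current emotion. Stubbed as a tagged copy."""
    return emotion.encode() + b":" + raw_voice

def composite_frame(model_pose: list, voice: bytes, background: str) -> dict:
    """Overlay the driven character model and the synthesized voice onto
    the pre-recorded background footage."""
    return {"background": background, "pose": model_pose, "audio": voice}

perf = PerformanceData(motion=["wave"], expression=["smile"], voice=b"hello")
emotion = analyze_emotion(perf)
played_voice = synthesize_voice(perf.voice, emotion)
frame = composite_frame(perf.motion, played_voice, background="studio.mp4")
```

Because the anchor's timbre is fixed inside `synthesize_voice`, swapping operators changes only the input audio, never the played voice.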
Further, an atmosphere-adjustment library is provided in the storage unit, with a plurality of atmosphere effect packages pre-stored in it; the synthesis unit is further configured to match a corresponding atmosphere effect package according to the current emotion and composite it into the anchor's video footage.
The system can thus automatically pick an atmosphere effect package that fits the current emotion and blend it into the stream, improving the live effect.
Further, the analysis unit is also configured to rate the current emotion; the synthesis unit matches the corresponding atmosphere effect package only when the rating of the current emotion exceeds a preset level.
Atmosphere effects used too densely often irritate viewers. With this screening of the emotion rating, an effect package is applied only when the emotion exceeds the preset level, so the packages are used automatically yet sparingly.
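A minimal sketch of this screening rule, assuming an illustrative 0-10 rating scale; the library contents, threshold, and function name are assumptions, not taken from the patent:

```python
from typing import Optional

# Pre-stored atmosphere effect packages, keyed by emotion (illustrative names).
ATMOSPHERE_LIBRARY = {
    "happy": "confetti_and_cheers",
    "sad": "rain_overlay",
    "excited": "fireworks",
}

PRESET_LEVEL = 7  # only strong emotions trigger an effect (assumed 0-10 scale)

def match_effect(emotion: str, level: int) -> Optional[str]:
    """Return an atmosphere effect package only when the rated emotion
    exceeds the preset level; otherwise apply no effect, so that packages
    are not used so densely that they annoy viewers."""
    if level > PRESET_LEVEL:
        return ATMOSPHERE_LIBRARY.get(emotion)
    return None
```

For example, `match_effect("excited", 9)` yields `"fireworks"`, while `match_effect("excited", 5)` yields `None`.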
Further, the system also comprises a supplement unit for entering a supplementary emotion; when a supplementary emotion is entered, the synthesis unit associates it with the voice data to obtain the played speech.
For the sake of the live effect, emotional swings sometimes need to be strong; when operators feel their current state cannot express the required emotion well, they can enter a supplementary emotion through the supplement unit, giving the virtual anchor a better vocal effect on stream.
Further, the supplement unit also accepts an emotion level when a supplementary emotion is entered.
This allows a finer emotional effect.
Further, when the supplement unit receives a supplementary emotion but no emotion level, the synthesis unit associates the supplementary emotion with the voice data at a preset emotion level to obtain the played speech.
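The precedence and fallback rules for supplementary emotion can be sketched as follows; the function name, signature, and default level are assumptions for illustration, not taken from the patent:

```python
DEFAULT_EMOTION_LEVEL = 5  # preset level used when the operator omits one

def resolve_emotion(analyzed: str, analyzed_level: int,
                    supplementary: str = None,
                    supplementary_level: int = None) -> tuple:
    """A supplementary emotion, when entered, overrides the analyzed one;
    a supplementary emotion without a level falls back to the preset level."""
    if supplementary is None:
        return analyzed, analyzed_level
    if supplementary_level is None:
        supplementary_level = DEFAULT_EMOTION_LEVEL
    return supplementary, supplementary_level
```

With these rules, `resolve_emotion("calm", 3)` keeps the analyzed emotion, `resolve_emotion("calm", 3, "joy")` yields `("joy", 5)`, and `resolve_emotion("calm", 3, "joy", 9)` yields `("joy", 9)`.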
Further, the supplement unit can also invoke an atmosphere effect package directly; after such an invocation, the synthesis unit composites the invoked package into the anchor's video footage.
Operators can thus set the right atmosphere promptly in response to their interaction with viewers.
Further, there are a plurality of video backgrounds, and the supplement unit can select among them.
The video background is effectively the environment the virtual anchor inhabits; since the storage unit holds several backgrounds, flexible selection through the supplement unit exploits the real-time compositing of the virtual anchor, letting the background change with the actual needs of the stream so the anchor can appear in different environments.
Further, the supplement unit can also add and delete atmosphere effect packages in the atmosphere-adjustment library.
The library of effect packages can thus be updated and upgraded through the supplement unit.
The invention further provides a second basic scheme: a multimodal virtual anchor method, which uses the multimodal virtual anchor system described above.
Drawings
Fig. 1 is a logic block diagram of a first embodiment of the invention.
Detailed Description
The following is further detailed by way of specific embodiments:
Embodiment 1
As shown in fig. 1, the multimodal virtual anchor system includes a modeling unit, an acquisition unit, an analysis unit, a storage unit, a supplement unit, and a synthesis unit.
The modeling unit creates a corresponding virtual anchor character model according to received character data.
The acquisition unit collects the operator's performance data, comprising motion data, expression data and voice data. Specifically, the operator's body movements can be captured in real time by a motion-capture device, facial expressions by a facial-expression capturer, and voice by a microphone.
The analysis unit performs emotion analysis on the performance data to obtain the current emotion, and also rates that emotion.
The storage unit stores the pre-recorded video footage; it also contains an atmosphere-adjustment library pre-stored with a plurality of atmosphere effect packages. These packages are audio-visual effects that set off the atmosphere; used judiciously, they markedly enhance the mood of the stream.
The supplement unit accepts a supplementary emotion, and optionally an emotion level along with it.
The synthesis unit converts the captured sound into the anchor's synthesized voice data and associates the current emotion with that voice data to obtain the played speech. When a supplementary emotion has been entered, the synthesis unit associates the supplementary emotion with the voice data instead; if the supplementary emotion comes without an emotion level, a preset level is used.
The synthesis unit also binds the body-motion and facial-expression data to the virtual anchor character model so that the model performs the corresponding actions, and superimposes the model and the voice onto the recorded footage to obtain the composited virtual anchor video.
The synthesis unit further matches a corresponding atmosphere effect package, and composites it into the anchor's footage, whenever the rating of the current emotion, or the level of a supplementary emotion entered through the supplement unit, exceeds the preset level.
The specific implementation process is as follows:
The modeling unit creates the virtual anchor character model from the received character data. After the acquisition unit collects the operator's performance data, the system does not bind it directly to the model; the analysis unit first performs emotion analysis to obtain the current emotion, which is then associated with the preset voice data to produce the played speech. This strengthens the stability and consistency of the virtual anchor.
Every person's timbre and intonation differ, so binding each operator's own voice, even after voice changing, directly to the model would let viewers sense the switch between operators. Such voice-driven differentiation tends to split the audience of one virtual anchor into rival 'camps', each backing a different operator, which is no better than a human anchor: the anchor's personal influence merely becomes the operator's. Because the played speech here is generated from a preset voice profile keyed to the analyzed emotion, its timbre and intonation stay consistent even when different operators produce content at different times, guaranteeing the anchor's stability and consistency.
One virtual anchor can therefore be performed by several people in rotation, overcoming a human anchor's limited stamina for sustained high-quality streaming; and even if an operator job-hops or otherwise leaves, the virtual anchor's popularity, and the platform, remain unaffected.
During a stream the system automatically matches an atmosphere effect package to the current emotion and composites it into the footage, improving the live effect. Because effects used too densely often irritate viewers, the system screens the emotion rating and applies a package only above the preset level, using the packages automatically yet sparingly.
In addition, emotional swings sometimes need to be strong for the live effect; when operators feel their current state cannot express the required emotion well, they can enter a supplementary emotion through the supplement unit, giving the virtual anchor a better vocal effect on stream.
Another object of the invention is a multimodal virtual anchor method that uses the multimodal virtual anchor system described above.
Embodiment 2
Unlike the first embodiment, this embodiment provides a plurality of video backgrounds, and the supplement unit can select among them. The video background is effectively the environment the virtual anchor inhabits; since the storage unit holds several backgrounds, flexible selection through the supplement unit exploits the real-time compositing of the virtual anchor, letting the background change with the actual needs of the stream so the anchor can appear in different environments.
The supplement unit can also add and delete atmosphere effect packages in the atmosphere-adjustment library, so the library can be updated and upgraded through the supplement unit.
In this embodiment the supplement unit can further invoke an atmosphere effect package directly; after such an invocation, the synthesis unit composites the invoked package into the anchor's footage. Operators can thus set the right atmosphere promptly in response to their interaction with viewers.
Embodiment 3
Maintaining stream quality demands sustained, intense concentration from the operator; during a broadcast, a momentary dip in the operator's condition can prevent effective interaction with the bullet-screen (danmaku) comments and destabilize stream quality.
Unlike the first embodiment, this embodiment further includes a danmaku analysis unit that measures the degree of match between the danmaku content and the performance data collected by the acquisition unit. When the analysis finds the match degree below a first preset value, it corrects the rating result of the analysis unit so that the output adapts to the danmaku content;
when the analysis finds the match degree below a second preset value, it sends a correction signal comprising an analysis of the current danmaku situation and a suggested mode of interaction; the second preset value is smaller than the first preset value;
and a reminder unit that, upon receiving the correction signal, issues a reminder and displays the signal's content.
The specific implementation process is as follows:
During a stream, the danmaku analysis unit continuously measures how well the performance data collected by the acquisition unit matches the danmaku content, as a gauge of current stream quality.
When the match degree falls below the first preset value, the operator is evidently off form and unable to interact effectively with the danmaku, which would hurt stream quality; the rating result of the analysis unit is therefore corrected to match the danmaku content. In this way, live interaction is effectively optimized even when the operator is in poor form.
When the match degree falls below the second preset value, the interactive atmosphere itself has gone wrong, and automatic correction alone can hardly give a satisfactory result; the danmaku analysis unit therefore sends a correction signal, and the reminder unit, on receiving it, issues a reminder and displays its content. Once operators notice the correction signal, they can adjust in time and keep stream quality stable.
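The two-threshold rule can be sketched as follows. The threshold values, the rating representation, and the contents of the correction signal are illustrative assumptions; a real system would compute the match degree by semantically comparing danmaku text against the performance data.

```python
FIRST_PRESET = 0.6   # below this: silently correct the emotion rating
SECOND_PRESET = 0.3  # below this (worse): also alert the operator
# The patent requires the second preset value to be smaller than the first.

def handle_danmaku(match_degree: float, rating: str) -> tuple:
    """Return (corrected_rating, correction_signal) for one analysis pass.

    match_degree: how well the operator's performance matches the
    danmaku (bullet-screen) comments, assumed normalized to [0, 1].
    """
    signal = None
    corrected = rating
    if match_degree < FIRST_PRESET:
        # Operator is off form: adapt the rating to the danmaku content.
        corrected = "adapted_to_danmaku"
    if match_degree < SECOND_PRESET:
        # Interaction has broken down: remind the operator with an
        # analysis of the danmaku situation and a suggested interaction.
        signal = {
            "analysis": "viewers' comments are going unanswered",
            "suggested_interaction": "read and respond to the top comments",
        }
    return corrected, signal
```

A good match leaves the rating untouched; a mediocre one triggers only the silent correction; a poor one additionally raises the operator-facing reminder.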
By using the system, the stability of the live broadcast effect can be effectively improved.
The foregoing is merely an embodiment of the invention. Common general knowledge, such as well-known specific structures and characteristics, is not described here in detail; a person skilled in the art, knowing the prior art as of the filing or priority date, can combine it with the teachings above to implement the invention, and typical known structures or methods pose no obstacle to doing so. Several changes and modifications may be made without departing from the structure of the invention; these also fall within its scope of protection and do not affect the effect or practicability of the patent. The scope of protection is determined by the contents of the claims; the description and its embodiments serve to interpret the claims.

Claims (10)

1. A multimodal virtual anchor system, characterized by comprising:
a modeling unit for creating a corresponding virtual anchor character model according to received character data;
an acquisition unit for collecting performance data of an operator, the performance data comprising motion data, expression data and voice data;
an analysis unit for performing emotion analysis on the performance data to obtain the current emotion;
a storage unit for storing pre-recorded video footage;
a synthesis unit for converting the captured sound into the anchor's synthesized voice data and associating the current emotion with that voice data to obtain the played speech; for driving the virtual anchor character model to perform corresponding actions according to the motion data; and for superimposing the character model and the voice onto the recorded footage to obtain the composited virtual anchor video.
2. The multimodal virtual anchor system of claim 1, wherein: an atmosphere-adjustment library is provided in the storage unit, with a plurality of atmosphere effect packages pre-stored in it; the synthesis unit is further configured to match a corresponding atmosphere effect package according to the current emotion and composite it into the anchor's video footage.
3. The multimodal virtual anchor system of claim 2, wherein: the analysis unit is further configured to rate the current emotion; and the synthesis unit matches the corresponding atmosphere effect package only when the rating of the current emotion exceeds a preset level.
4. The multimodal virtual anchor system of claim 3, wherein: the system further comprises a supplement unit for entering a supplementary emotion; when a supplementary emotion is entered, the synthesis unit associates the supplementary emotion with the voice data to obtain the played speech.
5. The multimodal virtual anchor system of claim 4, wherein: the supplement unit is further configured to accept an emotion level when a supplementary emotion is entered.
6. The multimodal virtual anchor system of claim 5, wherein: when the supplement unit receives a supplementary emotion but no emotion level, the synthesis unit associates the supplementary emotion with the voice data at a preset emotion level to obtain the played speech.
7. The multimodal virtual anchor system of claim 6, wherein: the supplement unit is further configured to invoke an atmosphere effect package; after such an invocation, the synthesis unit composites the invoked package into the anchor's video footage.
8. The multimodal virtual anchor system of claim 7, wherein: there are a plurality of video backgrounds, and the supplement unit is further configured to select among them.
9. The multimodal virtual anchor system of claim 8, wherein: the supplement unit is further configured to add and delete atmosphere effect packages in the atmosphere-adjustment library.
10. A multimodal virtual anchor method, characterized in that: it uses the multimodal virtual anchor system of any one of claims 1-9.
CN202011377505.1A 2020-11-30 2020-11-30 Multi-mode-based virtual anchor system and method Active CN112446938B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011377505.1A CN112446938B (en) 2020-11-30 2020-11-30 Multi-mode-based virtual anchor system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011377505.1A CN112446938B (en) 2020-11-30 2020-11-30 Multi-mode-based virtual anchor system and method

Publications (2)

Publication Number Publication Date
CN112446938A true CN112446938A (en) 2021-03-05
CN112446938B CN112446938B (en) 2023-08-18

Family

ID=74739080

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011377505.1A Active CN112446938B (en) 2020-11-30 2020-11-30 Multi-mode-based virtual anchor system and method

Country Status (1)

Country Link
CN (1) CN112446938B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115187708A (en) * 2022-09-14 2022-10-14 环球数科集团有限公司 Virtual anchor role model and voice data superposition video recording system

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004343781A (en) * 2000-01-21 2004-12-02 Ricoh Co Ltd Video content caption generating method, video content caption generating unit, digest video programming method, digest video programming unit, and computer-readable recording medium on which program for making computer perform method is stored
CN106910514A (en) * 2017-04-30 2017-06-30 上海爱优威软件开发有限公司 Method of speech processing and system
CN107170030A (en) * 2017-05-31 2017-09-15 珠海金山网络游戏科技有限公司 A kind of virtual newscaster's live broadcasting method and system
CN107197384A (en) * 2017-05-27 2017-09-22 北京光年无限科技有限公司 The multi-modal exchange method of virtual robot and system applied to net cast platform
CN107396144A (en) * 2017-06-30 2017-11-24 武汉斗鱼网络科技有限公司 A kind of barrage distribution method and device
CN107423809A (en) * 2017-07-07 2017-12-01 北京光年无限科技有限公司 The multi-modal exchange method of virtual robot and system applied to net cast platform
CN108899050A (en) * 2018-06-14 2018-11-27 南京云思创智信息科技有限公司 Speech signal analysis subsystem based on multi-modal Emotion identification system
CN109660818A (en) * 2018-12-30 2019-04-19 广东彼雍德云教育科技有限公司 A kind of virtual interactive live broadcast system
CN110062267A (en) * 2019-05-05 2019-07-26 广州虎牙信息科技有限公司 Live data processing method, device, electronic equipment and readable storage medium storing program for executing
CN110519611A (en) * 2019-08-23 2019-11-29 腾讯科技(深圳)有限公司 Living broadcast interactive method, apparatus, electronic equipment and storage medium
CN110688911A (en) * 2019-09-05 2020-01-14 深圳追一科技有限公司 Video processing method, device, system, terminal equipment and storage medium
CN111145777A (en) * 2019-12-31 2020-05-12 苏州思必驰信息科技有限公司 Virtual image display method and device, electronic equipment and storage medium
CN111369687A (en) * 2020-03-04 2020-07-03 腾讯科技(深圳)有限公司 Method and device for synthesizing action sequence of virtual object
CN111489424A (en) * 2020-04-10 2020-08-04 网易(杭州)网络有限公司 Virtual character expression generation method, control method, device and terminal equipment
CN111538456A (en) * 2020-07-10 2020-08-14 深圳追一科技有限公司 Human-computer interaction method, device, terminal and storage medium based on virtual image
CN111970535A (en) * 2020-09-25 2020-11-20 魔珐(上海)信息科技有限公司 Virtual live broadcast method, device, system and storage medium
CN111968207A (en) * 2020-09-25 2020-11-20 魔珐(上海)信息科技有限公司 Animation generation method, device, system and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115187708A (en) * 2022-09-14 2022-10-14 环球数科集团有限公司 Virtual anchor role model and voice data superposition video recording system
CN115187708B (en) * 2022-09-14 2022-11-15 环球数科集团有限公司 Virtual anchor role model and voice data superposition video recording system

Also Published As

Publication number Publication date
CN112446938B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
US9832516B2 (en) Systems and methods for multiple device interaction with selectably presentable media streams
US10432987B2 (en) Virtualized and automated real time video production system
CN108566558A (en) Video stream processing method, device, computer equipment and storage medium
CN110622240B (en) Voice guide generation device, voice guide generation method, and broadcasting system
WO2016088566A1 (en) Information processing apparatus, information processing method, and program
US20130236160A1 (en) Content preparation systems and methods for interactive video systems
JP6695482B1 (en) Control server, distribution system, control method and program
JPH10508784A (en) Interactive entertainment device
CN106792147A Image replacement method and device
CN106028078A (en) Personalized content creating method, personalized content creating device, personalized content play method and personalized content play device
CN112446938A (en) Multi-mode-based virtual anchor system and method
US20240048677A1 (en) Information processing system, information processing method, and computer program
CN106534618A Method, device and system for simulated on-site commentary
CN115515016A (en) Virtual live broadcast method, system and storage medium capable of realizing self-cross reply
US11310476B2 (en) Virtual reality image reproduction device for reproducing plurality of virtual reality images to improve image quality of specific region, and method for generating virtual reality image
CN107071322A (en) Video record and processing system and method
CN109102787A Automatic creation system for simple background music
CN109862385A Live streaming method and apparatus, computer-readable storage medium, and terminal device
CN105898435A (en) Data synchronizing method and device
Collie The business of TV production
CN112383793B (en) Picture synthesis method and device, electronic equipment and storage medium
US20220174258A1 (en) Information processing device, information processing method, and program
Smith Narrative styles in network coverage of the 1984 nominating conventions
CN105338413A Method and device for displaying release information on a video program
CN105376655A Method and device for displaying interactive information on a video program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 401120 room 701, room 1, 7 / F, building 11, No. 106, west section of Jinkai Avenue, Yubei District, Chongqing

Patentee after: Space Shichuang (Chongqing) Technology Co.,Ltd.

Address before: 401121 17-4, building 2, No. 70, middle section of Huangshan Avenue, Yubei District, Chongqing

Patentee before: Chongqing Space Visual Creation Technology Co.,Ltd.
