CN113810729A - Live broadcast atmosphere special effect matching method, device, equipment and medium - Google Patents
- Publication number
- CN113810729A (application CN202111088100.0A)
- Authority
- CN
- China
- Prior art keywords
- atmosphere
- rendering
- live broadcast
- live
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H04N21/2187: Live feed
- G06F40/30: Handling natural language data; semantic analysis
- G10L25/63: Speech or voice analysis specially adapted for estimating an emotional state
- H04N21/233: Processing of audio elementary streams (server side)
- H04N21/23412: Processing of video elementary streams for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
- H04N21/23418: Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
- H04N21/439: Processing of audio elementary streams (client side)
- H04N21/4394: Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics
- H04N21/44008: Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream (client side)
- H04N21/44012: Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
- H04N21/44213: Monitoring of end-user related data
- Y02D30/70: Reducing energy consumption in wireless communication networks
Abstract
The invention relates to the field of emotion recognition, and discloses a live broadcast atmosphere special effect matching method, device, equipment and medium. The method comprises the following steps: when the live broadcast reaches an interactive node, acquiring live broadcast data associated with the interactive node, wherein the live broadcast data comprises picture information and voice information of the anchor; processing the live broadcast data through an atmosphere feature extraction model to generate atmosphere features of the interactive node; acquiring an atmosphere rendering measure matched with the atmosphere features; setting rendering parameters of the atmosphere rendering measure according to the live broadcast data; and executing the atmosphere rendering measure according to the rendering parameters. The method intelligently analyzes the live broadcast data through artificial intelligence technology and matches the optimal atmosphere rendering measure, so that the atmosphere special effect of the live broadcast room can be triggered automatically, improving the entertainment value and interactivity of the live broadcast room.
Description
Technical Field
The invention relates to the field of emotion recognition, in particular to a live broadcast atmosphere special effect matching method, device, equipment and medium.
Background
In recent years, with the rapid development of network live broadcast technology, the live broadcast industry has flourished. A live broadcast can be defined as an anchor producing and publishing information in a live broadcast room in real time, in synchronization with events as they occur (product demonstrations, performances, sporting events, games, expert explanations, business conferences, and so on). Live broadcast is two-way, interactive, real-time and flexible, and internet enterprises attach ever-greater importance to it.
Live broadcast went through an exploratory stage starting in 2008, entered a stage of rapid development in 2016, and has now reached maturity. The industry as a whole is still young: mainstream products have largely solved the basic problem of going from nothing to something, and considerable room for refinement remains at the level of product detail. Taking the atmosphere special effect function as an example, existing atmosphere special effects can only be triggered by the anchor clicking them manually, and automatic matching of atmosphere special effects cannot be achieved.
Disclosure of Invention
Therefore, it is necessary to provide a live broadcast atmosphere special effect matching method, device, computer equipment and storage medium to solve the above technical problems, so as to automatically trigger the atmosphere special effect of the live broadcast room and improve the entertainment value and interactivity of the live broadcast room.
A live broadcast atmosphere special effect matching method comprises the following steps:
when the live broadcast reaches an interactive node, acquiring live broadcast data associated with the interactive node; the live broadcast data comprises picture information and voice information of the anchor;
processing the live broadcast data through an atmosphere feature extraction model to generate atmosphere features of the interactive nodes;
obtaining an atmosphere rendering measure matched with the atmosphere characteristics;
setting rendering parameters of the atmosphere rendering measures according to live broadcast data;
and executing the atmosphere rendering measures according to the rendering parameters.
A live broadcast atmosphere special effect matching device comprises:
the live broadcast data acquisition module is used for acquiring live broadcast data associated with an interactive node when the live broadcast reaches the interactive node; the live broadcast data comprises picture information and voice information of the anchor;
the atmosphere feature extraction module is used for processing the live broadcast data through an atmosphere feature extraction model to generate atmosphere features of the interactive nodes;
the measure matching module is used for acquiring atmosphere rendering measures matched with the atmosphere features;
the parameter setting module is used for setting rendering parameters of the atmosphere rendering measures according to live broadcast data;
and the atmosphere rendering module is used for executing the atmosphere rendering measures according to the rendering parameters.
A computer device comprising a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, wherein the processor implements the above live broadcast atmosphere special effect matching method when executing the computer readable instructions.
One or more readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform a live ambience special effect matching method as described above.
According to the live broadcast atmosphere special effect matching method, device, computer equipment and storage medium, when the live broadcast reaches an interactive node, live broadcast data associated with the interactive node are acquired; the live broadcast data comprise picture information and voice information of the anchor, so that the atmosphere features of the interactive node can be analyzed quickly from suitable live broadcast data. The live broadcast data are processed through an atmosphere feature extraction model to generate the atmosphere features of the interactive node, and a suitable atmosphere rendering measure is matched through those features. The atmosphere rendering measure matched with the atmosphere features is acquired so that the atmosphere of the live broadcast room can be lifted by an appropriate measure. Rendering parameters of the atmosphere rendering measure are set according to the live broadcast data, so that adjusting the rendering parameters adapts the measure to the current interactive node. Finally, the atmosphere rendering measure is executed according to the rendering parameters to generate the atmosphere special effect and enhance the atmosphere of the live broadcast room. In this way, the live broadcast data are intelligently analyzed through artificial intelligence technology, the optimal atmosphere rendering measure is matched, the atmosphere special effect of the live broadcast room can be triggered automatically, and the entertainment value and interactivity of the live broadcast room are improved.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive labor.
Fig. 1 is a schematic diagram of an application environment of a live broadcast atmosphere special effect matching method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a live atmosphere special effect matching method according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a live atmosphere special effect matching apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a computer device according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are some, but not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
The embodiments of the application can acquire and process related data based on artificial intelligence technology. Artificial intelligence (AI) refers to theories, methods, techniques and application systems that use a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In this embodiment, the live broadcast data can be acquired through a dedicated artificial intelligence chip.
The live broadcast atmosphere special effect matching method provided by this embodiment can be applied in the application environment shown in fig. 1, where a client communicates with a server. Clients include, but are not limited to, personal computers, notebook computers, smart phones, tablet computers and portable wearable devices. The server may be an independent server, a server cluster composed of several servers, or a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (CDN), and big data and artificial intelligence platforms.
In an embodiment, as shown in fig. 2, a live broadcast atmosphere special effect matching method is provided, which is described by taking its application to the server in fig. 1 as an example, and includes the following steps S10-S50.
S10, when the live broadcast reaches an interactive node, acquiring live broadcast data associated with the interactive node; the live broadcast data includes picture information and voice information of the anchor.
Understandably, 'live broadcast' here refers primarily to live streaming activity hosted by an anchor. The interactive node may be a time node preset according to the live script, or a time node determined by intelligently judging the live scene (i.e., the live broadcast data) against an interaction evaluation rule. In other examples, a neural network model may be trained on live video samples labeled with interactive nodes, yielding a network model that can identify interactive nodes.
The live broadcast data associated with an interactive node may refer to the live data from a period of time immediately preceding that node. The duration may differ between interactive nodes. For example, the live broadcast data associated with a game-type interactive node may cover a longer period, such as 5-10 minutes, while the live broadcast data associated with an e-commerce interactive node may cover a shorter period, such as 1-2 minutes.
The live broadcast data includes picture information and voice information of the anchor. Changes in the anchor's facial expression, body posture and so on can be identified from the picture information; the anchor's tone of voice, fluency of speech and so on can be identified from the voice information.
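To make step S10 concrete, the following is a minimal Python sketch of collecting the live broadcast data associated with an interactive node from rolling capture buffers; the `LiveData` container, the buffer layout of `(timestamp, payload)` tuples and the window lengths are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LiveData:
    """Live broadcast data associated with one interactive node (illustrative)."""
    frames: List[bytes] = field(default_factory=list)  # picture information of the anchor
    audio: List[bytes] = field(default_factory=list)   # voice information of the anchor

# Hypothetical acquisition windows in seconds, mirroring the 5-10 minute
# (game) and 1-2 minute (e-commerce) examples given in the text.
WINDOW_SECONDS = {"game": 600, "e-commerce": 120}

def collect_live_data(frame_buffer, audio_buffer, interaction_type: str, now: float) -> LiveData:
    """Slice rolling (timestamp, payload) buffers to the period preceding the node."""
    start = now - WINDOW_SECONDS.get(interaction_type, 180)  # default window is an assumption
    return LiveData(
        frames=[f for t, f in frame_buffer if t >= start],
        audio=[a for t, a in audio_buffer if t >= start],
    )
```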
S20, processing the live broadcast data through an atmosphere feature extraction model, and generating atmosphere features of the interactive nodes.
Understandably, the atmosphere feature extraction model may be composed of several sub-models, and processes the live broadcast data through each sub-model to generate the atmosphere features of the interactive node. In one example, the atmosphere feature extraction model includes an emotion recognition model and a speech semantic analysis model. The emotion recognition model processes the picture information in the live broadcast data to generate the anchor's emotion type, while the speech semantic analysis model processes the voice information in the live broadcast data to generate emotion keywords. Here, the atmosphere features include the emotion type and the emotion keywords.
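A minimal sketch of that two-branch extraction, assuming two pre-trained sub-models that expose a `predict` method (the interface is an assumption; the patent does not prescribe one):

```python
class AtmosphereFeatureExtractor:
    """Combines the two sub-models named in the text (interfaces assumed)."""

    def __init__(self, emotion_model, semantic_model):
        self.emotion_model = emotion_model    # processes picture information
        self.semantic_model = semantic_model  # processes voice information

    def extract(self, live_data) -> dict:
        # Emotion type from the anchor's picture information (S201).
        emotion_type = self.emotion_model.predict(live_data.frames)
        # Emotion keywords from the anchor's voice information (S202).
        keywords = self.semantic_model.predict(live_data.audio)
        return {"emotion_type": emotion_type, "emotion_keywords": keywords}
```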
And S30, acquiring atmosphere rendering measures matched with the atmosphere characteristics.
Understandably, an atmosphere rendering measure refers to a rendering measure set for a particular live broadcast room atmosphere, including but not limited to animations and sound effects. Each atmosphere rendering measure may be paired with several atmosphere features, so once the atmosphere feature is determined, the corresponding atmosphere rendering measure can be retrieved. For example, when the atmosphere feature is "laugh", the matched atmosphere rendering measure may be "laugh animation + laugh sound effect".
And S40, setting rendering parameters of the atmosphere rendering measures according to the live broadcast data.
Understandably, an atmosphere rendering measure is generally a fixed segment of animation and sound effect, and embedding it into the live broadcast unchanged may look uncoordinated. Therefore, environmental characteristics of the live broadcast data (such as the emotion change trend) can be extracted, and the rendering parameters of the atmosphere rendering measure set according to them. Here, the rendering parameters include but are not limited to rendering color, rendering speed, rendering intensity and/or rendering duration. For example, the color of the animation may be set according to the environmental characteristics extracted from the live broadcast data: if the characteristics are positive (such as an active mood), a warm tone may be selected; if they are negative (such as a low mood), a cool tone may be selected.
And S50, executing the atmosphere rendering measures according to the rendering parameters.
Understandably, after the rendering parameters are set, the atmosphere rendering measure can be executed according to them to improve the entertainment value and interactivity of the live broadcast. Because the atmosphere special effect of the live broadcast room is triggered automatically, the anchor does not need to set effects manually, can concentrate fully on the performance, and can better control the rhythm of the live broadcast.
In steps S10-S50, when the live broadcast reaches an interactive node, live broadcast data associated with the interactive node are acquired; the live broadcast data comprise picture information and voice information of the anchor, so that the atmosphere features of the interactive node can be analyzed quickly from suitable live broadcast data. The live broadcast data are processed through the atmosphere feature extraction model to generate the atmosphere features of the interactive node, from which a suitable atmosphere rendering measure is matched. The matched atmosphere rendering measure is acquired so as to lift the atmosphere of the live broadcast room. Its rendering parameters are set according to the live broadcast data, so that the measure better fits the current interactive node. Finally, the atmosphere rendering measure is executed according to the rendering parameters to generate the atmosphere special effect. This embodiment intelligently analyzes the live broadcast data through artificial intelligence technology, matches the optimal atmosphere rendering measure, automatically triggers the atmosphere special effect of the live broadcast room, and improves its entertainment value and interactivity.
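Putting the five steps together, the following Python sketch shows how they could be wired in one pass; the helper objects and method names (`extract`, `lookup`, `from_live_data`, `execute`) are illustrative assumptions rather than interfaces defined by the disclosure.

```python
def match_atmosphere_effect(live_data, extractor, measure_table, param_setter, renderer):
    """One pass of the disclosed flow; S10 data arrives as `live_data`."""
    features = extractor.extract(live_data)                    # S20: atmosphere features
    measure = measure_table.lookup(features)                   # S30: matched rendering measure
    params = param_setter.from_live_data(live_data, features)  # S40: rendering parameters
    renderer.execute(measure, params)                          # S50: execute the measure
```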
Optionally, before step S10, that is, before acquiring the live broadcast data associated with the interactive node when the live broadcast reaches the interactive node, the method further includes:
S11, acquiring historical live broadcast data of a specified duration from the live broadcast room;
S12, processing the historical live broadcast data through an interaction evaluation rule to generate an interaction evaluation result;
and S13, if the interaction evaluation result indicates that interaction is needed, determining that the live broadcast has reached an interactive node.
Optionally, the specified duration may be set according to actual needs, such as 10 minutes or 5 minutes. The historical live broadcast data may be the live data of the specified duration that has just elapsed, such as the live data of the past ten minutes.
The interaction evaluation rule can also be set according to actual needs. The interaction evaluation result determines whether the current live broadcast should interact: if interaction is needed, the live broadcast is considered to have reached an interactive node; if not, it has not. In some examples, the interaction evaluation rule judges whether the current live broadcast has reached an interaction climax; if it has, the evaluation result is that interaction is required, and otherwise that it is not.
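As one hedged illustration of such a rule, the Python sketch below treats a surge in viewer comment rate over the specified duration as an "interaction climax"; the comment-rate heuristic, the `windows` attribute and the threshold are assumptions, since the patent deliberately leaves the rule open.

```python
def is_interactive_node(historical_live_data, surge_factor: float = 2.0) -> bool:
    """Illustrative interaction evaluation rule (S12): report an interaction
    climax when the latest comment rate clearly exceeds the running average.
    `historical_live_data.windows` is a hypothetical list of per-minute stats."""
    rates = [w.comments_per_second for w in historical_live_data.windows]
    if len(rates) < 2:
        return False
    baseline = sum(rates[:-1]) / len(rates[:-1])
    return baseline > 0 and rates[-1] >= surge_factor * baseline
```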
This embodiment can intelligently identify interactive nodes and add atmosphere special effects at those nodes, thereby enhancing the atmosphere of the live broadcast room.
Optionally, step S10, namely acquiring the live broadcast data associated with the interactive node when the live broadcast reaches the interactive node, includes:
S101, acquiring the interaction type of the interactive node;
S102, acquiring an interactive data acquisition rule matched with the interaction type;
S103, acquiring the live broadcast data according to the interactive data acquisition rule.
Optionally, the interaction type may be set according to actual needs. For example, interaction types may be classified as game, performance, e-commerce and so on according to the nature of the live broadcast, and different interactive data acquisition rules may be set for different interaction types. In some examples, the interactive data acquisition rule determines the data type and time length of the acquired live broadcast data. For example, for a performance live broadcast, sound-related indexes can be set in the acquisition rule, making the acquired live data easier for the atmosphere feature extraction model to process and improving processing efficiency.
Different interactive data acquisition rules generally differ in the data types and time lengths of the live broadcast data they acquire. For example, an e-commerce live broadcast mainly displays a succession of commodities, each shown for a relatively limited time, so the time length set by its acquisition rule can be shorter, such as 1 to 2 minutes; the works in a performance live broadcast last longer, so the time length can be longer, such as 3 to 4 minutes.
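A minimal Python sketch of per-type acquisition rules; the rule entries are hypothetical, with data types and window lengths echoing the examples in the text.

```python
# Hypothetical interactive data acquisition rules keyed by interaction type.
ACQUISITION_RULES = {
    "e-commerce":  {"data_types": ("frames", "audio"), "seconds": 90},   # ~1-2 minutes
    "performance": {"data_types": ("audio",),          "seconds": 210},  # ~3-4 minutes, sound-focused
    "game":        {"data_types": ("frames", "audio"), "seconds": 600},
}

def acquisition_rule_for(interaction_type: str) -> dict:
    """Return the interactive data acquisition rule matched to the type (S102);
    the fallback rule is an assumption."""
    return ACQUISITION_RULES.get(
        interaction_type, {"data_types": ("frames", "audio"), "seconds": 120}
    )
```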
This embodiment selects different live broadcast data for different interactive nodes, which can further improve the recognition of atmosphere features.
Optionally, the atmosphere feature extraction model includes an emotion recognition model and a speech semantic analysis model; the atmosphere features comprise emotion types and/or emotion keywords;
step S20, namely, the processing the live broadcast data through the atmosphere feature extraction model to generate the atmosphere features of the interactive node, includes:
s201, processing the picture information through the emotion recognition model to obtain the emotion type of the anchor;
s202, processing the voice information through the voice semantic analysis model to obtain the emotion keywords of the anchor.
Understandably, the atmosphere feature extraction model may be composed of several sub-models, such as the emotion recognition model and the speech semantic analysis model. The emotion recognition model can process each frame of picture information in the live broadcast data to obtain the anchor's expression in that frame, then determine the emotion type from changes in the anchor's expression. The emotion type may be excited, happy, sad, averse and so on.
The speech semantic analysis model can process the voice information in the live broadcast data: it first generates a text transcript of the anchor's speech, then performs semantic recognition on the transcript to generate emotion keywords. The emotion keywords may be happy, depressed, upset and so on.
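A minimal sketch of that two-stage analysis in Python, assuming a generic speech recognizer with a `transcribe` method and a small emotion lexicon; both the interface and the whitespace tokenization are simplifying assumptions (real transcripts, especially Chinese ones, would need proper segmentation).

```python
EMOTION_LEXICON = {"happy", "depressed", "upset"}  # example keywords from the text

def emotion_keywords(audio, recognizer) -> list:
    """Two-stage speech semantic analysis: transcribe the anchor's voice,
    then pick out emotion keywords. `recognizer.transcribe` stands in for
    any ASR engine."""
    transcript = recognizer.transcribe(audio)
    return [word for word in transcript.split() if word in EMOTION_LEXICON]
```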
In this embodiment, the live broadcast atmosphere features are extracted along two different dimensions (sound and image), which can further improve their completeness and accuracy (judging from a single picture or a single sound alone can produce large deviations).
Optionally, the atmosphere rendering measures comprise animation effects and/or music effects;
step S30, namely, the acquiring the atmosphere rendering measures matched with the atmosphere features includes:
s301, if the atmosphere feature is a first excitement feature, obtaining a cheering animation and/or cheering sound effect matched with the first excitement feature;
s302, if the atmosphere feature is a second excitement feature, obtaining a clapping animation and/or clapping sound effect matched with the second excitement feature;
s303, if the atmosphere feature is a first happy feature, obtaining a laugh animation and/or a laugh sound effect matched with the first happy feature;
s304, if the atmosphere feature is a second happy feature, obtaining smile animation and/or a smile sound effect matched with the second happy feature;
s305, if the atmosphere feature is a first difficulty feature, acquiring a crying animation and/or a crying sound effect matched with the first difficulty feature;
s306, if the atmosphere characteristic is a second ugly characteristic, acquiring a choking animation and/or a choking sound effect which are matched with the second ugly characteristic;
s307, if the atmosphere feature is a first aversion feature, acquiring a vomiting animation and/or a vomiting sound effect matched with the first aversion feature;
s308, if the atmosphere feature is the second aversion feature, acquiring the hiss animation and/or the hiss sound effect matched with the second aversion feature.
Understandably, the atmosphere rendering measures include but are not limited to animation effects and music effects. An animation effect is an animated special effect added to the live picture and can be set according to actual needs, such as a cheering animation or a clapping animation. A music effect is special audio added to the live picture, such as cheering or clapping sounds.
In some examples, the atmosphere features may be divided into eight classes: first excitement, second excitement, first happy, second happy, first sadness, second sadness, first aversion and second aversion. Each class has its own matching animation effect and/or music effect: the first excitement class matches cheering animations and/or cheering sound effects; the second excitement class matches clapping animations and/or clapping sound effects; the first happy class matches laugh animations and/or laugh sound effects; the second happy class matches smile animations and/or smile sound effects; the first sadness class matches crying animations and/or crying sound effects; the second sadness class matches sobbing animations and/or sobbing sound effects; the first aversion class matches vomiting animations and/or vomiting sound effects; and the second aversion class matches hiss animations and/or hiss sound effects.
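The eight-way pairing reads naturally as a lookup table; the following Python sketch records it with placeholder asset identifiers (the asset names are assumptions, only the class-to-measure pairing comes from the text).

```python
# Atmosphere feature class -> (animation effect, music effect), per S301-S308.
MEASURE_TABLE = {
    "excitement_1": ("cheering_animation", "cheering_sound"),
    "excitement_2": ("clapping_animation", "clapping_sound"),
    "happy_1":      ("laugh_animation",    "laugh_sound"),
    "happy_2":      ("smile_animation",    "smile_sound"),
    "sadness_1":    ("crying_animation",   "crying_sound"),
    "sadness_2":    ("sobbing_animation",  "sobbing_sound"),
    "aversion_1":   ("vomiting_animation", "vomiting_sound"),
    "aversion_2":   ("hiss_animation",     "hiss_sound"),
}

def matched_measure(atmosphere_feature: str):
    """Retrieve the atmosphere rendering measure paired with a feature class."""
    return MEASURE_TABLE.get(atmosphere_feature)
```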
In this embodiment, different atmosphere rendering measures are matched to different atmosphere features, meeting the need to enhance the atmosphere under different moods.
Optionally, the rendering parameters include rendering color, rendering speed, rendering intensity and/or rendering duration;
step S40, namely, the setting of the rendering parameters of the atmosphere rendering measures according to the live data includes:
s401, generating an emotion change trend according to the live broadcast data and the atmosphere characteristics;
s402, setting rendering color, rendering speed, rendering intensity and/or rendering duration of the atmosphere rendering measures according to the emotion change trend.
Alternatively, the emotion change trend may be the trend of the anchor's mood, such as from sad to happy, or from happy to sad. In other examples, the emotion change trend may relate to the anchor's speech rate: a faster speech rate suggests the mood is becoming tense, while a slower rate suggests it is becoming calm.
Rendering color may refer to a color shift of the animation effect, such as becoming warmer or cooler. Rendering speed may refer to a change in the playback speed of the animation and/or music effect: at some moments the effect plays faster, at others slower. Rendering intensity may be the color depth of the animation effect, the size of the animated elements, or the volume of the music effect. Rendering duration may be how long the animation and/or music effect lasts.
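A minimal Python sketch of these four parameter dimensions and one trend-based way to set them; the concrete values and the binary positive/negative trend are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class RenderingParams:
    color: str        # rendering color, e.g. warm vs. cool tones
    speed: float      # playback speed multiplier of the animation/music effect
    intensity: float  # color depth, element size, or music volume (0..1)
    duration: float   # how long the effect lasts, in seconds

def params_from_trend(emotion_trend: str) -> RenderingParams:
    """Set rendering parameters from the emotion change trend (S402);
    the mapping below is illustrative, not prescribed by the patent."""
    if emotion_trend == "positive":  # e.g. mood becoming active
        return RenderingParams(color="warm", speed=1.2, intensity=0.9, duration=5.0)
    return RenderingParams(color="cool", speed=0.8, intensity=0.5, duration=3.0)
```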
In this embodiment, the atmosphere rendering measures are adjusted through rendering parameters of different dimensions, so that they create a better live broadcast atmosphere.
Optionally, step S50, namely executing the atmosphere rendering measure according to the rendering parameters, includes:
S501, monitoring the current live broadcast environment parameters;
S502, if the live broadcast environment parameters meet a preset rendering condition, executing the atmosphere rendering measure according to the rendering parameters, so as to improve the live broadcast effect through the atmosphere rendering measure.
Understandably, the live broadcast environment parameter may be a preset scene prop or a voice command. The preset rendering condition can be set according to actual needs, such as shaking the scene prop N times in a row or repeatedly shouting a preset catchphrase.
In some cases, to warm up the audience, the anchor needs to reuse an atmosphere rendering measure. By setting the environment parameters, animation effects and music effects can be played in overlapping fashion, and the atmosphere rendering measures can vary over time.
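A minimal Python sketch of the condition check in S501-S502; counting repeated prop shakes is one example condition from the text, while the event encoding and threshold are assumptions.

```python
def maybe_render(env_events, measure, params, renderer, shake_threshold: int = 3):
    """Execute the atmosphere rendering measure only when the monitored live
    environment parameters meet the preset rendering condition. `env_events`
    is a hypothetical stream of detected events such as 'prop_shake'."""
    shakes = sum(1 for event in env_events if event == "prop_shake")
    if shakes >= shake_threshold:
        renderer.execute(measure, params)  # effects may overlap during warm-ups
```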
In this embodiment, by monitoring the live broadcast environment parameters, the atmosphere special effects can be matched to the anchor's body language or spoken cues and played in overlap, better activating the atmosphere of the live broadcast room.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In an embodiment, a live broadcast atmosphere special effect matching device is provided, corresponding one-to-one to the live broadcast atmosphere special effect matching method in the above embodiment. As shown in fig. 3, the live broadcast atmosphere special effect matching device includes a live broadcast data acquisition module 10, an atmosphere feature extraction module 20, a measure matching module 30, a parameter setting module 40 and an atmosphere rendering module 50. The functional modules are explained in detail as follows:
a live broadcast data acquisition module 10, configured to acquire live broadcast data associated with an interactive node when the live broadcast reaches the interactive node; the live broadcast data comprises picture information and voice information of the anchor;
the atmosphere feature extraction module 20 is configured to process the live broadcast data through an atmosphere feature extraction model to generate atmosphere features of the interaction nodes;
a measure matching module 30, configured to obtain an atmosphere rendering measure matched with the atmosphere feature;
a parameter setting module 40, configured to set rendering parameters of the atmosphere rendering measures according to live broadcast data;
and the atmosphere rendering module 50 is used for executing the atmosphere rendering measures according to the rendering parameters.
Optionally, the live broadcast data acquisition module 10 includes:
a historical live broadcast data acquisition unit, used for acquiring historical live broadcast data of a specified duration from the live broadcast room;
an interaction evaluation unit, used for processing the historical live broadcast data through the interaction evaluation rule to generate an interaction evaluation result;
and an interactive node determination unit, used for determining that the live broadcast has reached an interactive node if the interaction evaluation result indicates that interaction is required.
Optionally, the live broadcast data acquisition module 10 includes:
an interaction type obtaining unit, configured to obtain an interaction type of the interaction node;
the data acquisition rule acquisition unit is used for acquiring an interactive data acquisition rule matched with the interactive type;
and the live broadcast data acquisition unit is used for acquiring the live broadcast data according to the interactive data acquisition rule.
Optionally, the atmosphere feature extraction model includes an emotion recognition model and a speech semantic analysis model; the atmosphere features comprise emotion types and/or emotion keywords;
the atmosphere feature extraction module 20 includes:
the picture characteristic extraction unit is used for processing the picture information through the emotion recognition model to obtain the emotion type of the anchor;
and the voice feature extraction unit is used for processing the voice information through the speech semantic analysis model to obtain the emotion keywords of the anchor.
Optionally, the atmosphere rendering measures comprise animation effects and/or music effects;
the measure matching module 30 includes:
the cheering unit is used for acquiring a cheering animation and/or cheering sound effect matched with the first excitement-class feature if the atmosphere feature is the first excitement-class feature;
the clapping unit is used for acquiring a clapping animation and/or clapping sound effect matched with the second excitement-class feature if the atmosphere feature is the second excitement-class feature;
the laugh unit is used for acquiring a laugh animation and/or laugh sound effect matched with the first happy-class feature if the atmosphere feature is the first happy-class feature;
the smile unit is used for acquiring a smile animation and/or smile sound effect matched with the second happy-class feature if the atmosphere feature is the second happy-class feature;
the crying unit is used for acquiring a crying animation and/or crying sound effect matched with the first sadness-class feature if the atmosphere feature is the first sadness-class feature;
the sobbing unit is used for acquiring a sobbing animation and/or sobbing sound effect matched with the second sadness-class feature if the atmosphere feature is the second sadness-class feature;
the vomiting unit is used for acquiring a vomiting animation and/or vomiting sound effect matched with the first aversion-class feature if the atmosphere feature is the first aversion-class feature;
and the hiss unit is used for acquiring a hiss animation and/or hiss sound effect matched with the second aversion-class feature if the atmosphere feature is the second aversion-class feature.
Optionally, the rendering parameters include rendering color, rendering speed, rendering intensity and/or rendering duration;
the parameter setting module 40 includes:
the change trend unit is used for generating an emotion change trend according to the live broadcast data and the atmosphere characteristics;
and the parameter setting unit is used for setting the rendering color, the rendering speed, the rendering intensity and/or the rendering duration of the atmosphere rendering measure according to the emotion change trend.
Optionally, the ambience rendering module 50 includes:
the monitoring environment parameter unit is used for monitoring the current live broadcast environment parameters;
and the measure executing unit is used for executing the atmosphere rendering measure according to the rendering parameters if the live broadcast environment parameters meet the preset rendering condition, so as to improve the live broadcast effect through the atmosphere rendering measure.
For specific limitations of the live atmosphere special effect matching device, reference may be made to the above limitations on the live atmosphere special effect matching method, which are not described herein again. All modules in the live broadcast atmosphere special effect matching device can be completely or partially realized through software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 4. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a readable storage medium and an internal memory. The readable storage medium stores an operating system, computer readable instructions, and a database. The internal memory provides an environment for the operating system and execution of computer-readable instructions in the readable storage medium. The database of the computer equipment is used for storing data related to the live broadcast atmosphere special effect matching method. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer readable instructions, when executed by a processor, implement a live ambience special effect matching method. The readable storage media provided by the present embodiment include nonvolatile readable storage media and volatile readable storage media.
In one embodiment, a computer device is provided, comprising a memory, a processor, and computer readable instructions stored on the memory and executable on the processor, the processor when executing the computer readable instructions implementing the steps of:
when the live broadcast reaches an interactive node, acquiring live broadcast data associated with the interactive node; the live broadcast data comprises picture information and voice information of the anchor;
processing the live broadcast data through an atmosphere feature extraction model to generate atmosphere features of the interactive nodes;
obtaining an atmosphere rendering measure matched with the atmosphere characteristics;
setting rendering parameters of the atmosphere rendering measures according to live broadcast data;
and executing the atmosphere rendering measures according to the rendering parameters.
In one embodiment, one or more computer-readable storage media storing computer-readable instructions are provided, the readable storage media provided by the embodiments including non-volatile readable storage media and volatile readable storage media. The readable storage medium has stored thereon computer readable instructions which, when executed by one or more processors, perform the steps of:
when the live broadcast reaches an interactive node, acquiring live broadcast data associated with the interactive node; the live broadcast data comprises picture information and voice information of the anchor;
processing the live broadcast data through an atmosphere feature extraction model to generate atmosphere features of the interactive nodes;
obtaining an atmosphere rendering measure matched with the atmosphere characteristics;
setting rendering parameters of the atmosphere rendering measures according to live broadcast data;
and executing the atmosphere rendering measures according to the rendering parameters.
It will be understood by those of ordinary skill in the art that all or part of the processes of the methods of the above embodiments may be implemented by hardware related to computer readable instructions, which may be stored in a non-volatile or volatile readable storage medium; when executed, the computer readable instructions may include the processes of the above method embodiments. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) and flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.
Claims (10)
1. A live broadcast atmosphere special effect matching method is characterized by comprising the following steps:
when the live broadcast reaches an interactive node, acquiring live broadcast data associated with the interactive node; the live broadcast data comprises picture information and voice information of the anchor;
processing the live broadcast data through an atmosphere feature extraction model to generate atmosphere features of the interactive nodes;
obtaining an atmosphere rendering measure matched with the atmosphere characteristics;
setting rendering parameters of the atmosphere rendering measures according to live broadcast data;
and executing the atmosphere rendering measures according to the rendering parameters.
2. The live broadcast atmosphere special effect matching method according to claim 1, wherein before acquiring the live broadcast data associated with the interactive node when the live broadcast reaches the interactive node, the method further comprises:
acquiring historical live broadcast data with specified duration in a live broadcast room;
processing the historical live broadcast data through an interaction evaluation rule to generate an interaction evaluation result;
and if the interaction evaluation result indicates that interaction is needed, determining that the live broadcast has reached an interactive node.
3. The live broadcast atmosphere special effect matching method of claim 1, wherein the acquiring live broadcast data associated with the interactive node when the live broadcast reaches the interactive node comprises:
acquiring the interaction type of the interaction node;
acquiring an interactive data acquisition rule matched with the interactive type;
and acquiring the live broadcast data according to the interactive data acquisition rule.
4. The live broadcast atmosphere special effect matching method of claim 1, wherein the atmosphere feature extraction model comprises an emotion recognition model and a speech semantic analysis model; the atmosphere features comprise emotion types and/or emotion keywords;
processing the live broadcast data through an atmosphere feature extraction model to generate atmosphere features of the interactive nodes, wherein the atmosphere features comprise:
processing the picture information through the emotion recognition model to obtain the emotion type of the anchor;
and processing the voice information through the voice semantic analysis model to obtain the emotion keywords of the anchor.
5. The live broadcast atmosphere special effect matching method of claim 1, wherein the atmosphere rendering measures comprise animation effects and/or music effects;
the acquiring of the atmosphere rendering measures matched with the atmosphere features comprises the following steps:
if the atmosphere feature is a first excitement-class feature, acquiring a cheering animation and/or cheering sound effect matched with the first excitement-class feature;
if the atmosphere feature is a second excitement feature, acquiring a clapping animation and/or clapping sound effect matched with the second excitement feature;
if the atmosphere feature is a first happy feature, obtaining a laugh animation and/or a laugh sound effect matched with the first happy feature;
if the atmosphere feature is a second happy feature, obtaining smiling animation and/or smiling sound effect matched with the second happy feature;
if the atmosphere feature is a first difficulty feature, acquiring a crying animation and/or a crying sound effect matched with the first difficulty feature;
if the ambience feature is a second ugly feature, acquiring a gag animation and/or a gag sound effect matching the second ugly feature;
if the atmosphere feature is a first aversion feature, acquiring a vomiting animation and/or a vomiting sound effect matched with the first aversion feature;
and if the atmosphere characteristic is a second aversion characteristic, acquiring a hiss animation and/or a hiss sound effect matched with the second aversion characteristic.
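The eight feature-to-effect pairings enumerated in claim 5 amount to a lookup table. A sketch with hypothetical asset file names:

```python
# Feature-to-measure table mirroring claim 5; asset names are invented
MEASURES = {
    "excitement_1": ("cheer.anim", "cheer.wav"),
    "excitement_2": ("clap.anim",  "clap.wav"),
    "happiness_1":  ("laugh.anim", "laugh.wav"),
    "happiness_2":  ("smile.anim", "smile.wav"),
    "sadness_1":    ("cry.anim",   "cry.wav"),
    "sadness_2":    ("gag.anim",   "gag.wav"),
    "aversion_1":   ("vomit.anim", "vomit.wav"),
    "aversion_2":   ("hiss.anim",  "hiss.wav"),
}

def match_measure(atmosphere_feature: str):
    """Return the (animation, sound effect) pair matched to a feature."""
    return MEASURES[atmosphere_feature]
```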
6. The live broadcast atmosphere special effect matching method according to claim 1, wherein the rendering parameters comprise a rendering color, a rendering speed, a rendering intensity and/or a rendering duration;
setting the rendering parameters of the atmosphere rendering measure according to the live broadcast data comprises:
generating an emotion change trend according to the live broadcast data and the atmosphere features;
and setting the rendering color, the rendering speed, the rendering intensity and/or the rendering duration of the atmosphere rendering measure according to the emotion change trend.
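Claim 6 leaves the trend-to-parameter mapping open; the policy below (a rising trend gives faster, stronger, warmer-colored rendering) is purely an assumed example:

```python
def set_rendering_parameters(trend_slope: float) -> dict:
    """Map an emotion change trend to the four claimed parameters."""
    rising = trend_slope > 0
    return {
        "color":      "#ff4040" if rising else "#4080ff",  # warm vs. cool tone
        "speed":      1.5 if rising else 0.8,              # playback multiplier
        "intensity":  min(1.0, 0.5 + abs(trend_slope)),    # clamp at full strength
        "duration_s": 3.0 if rising else 5.0,
    }
```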
7. The live broadcast atmosphere special effect matching method according to claim 1, wherein executing the atmosphere rendering measure according to the rendering parameters comprises:
monitoring current live broadcast environment parameters;
and if the live broadcast environment parameters meet a preset rendering condition, executing the atmosphere rendering measure according to the rendering parameters, so as to improve the live broadcast effect through the atmosphere rendering measure.
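Claim 7 gates execution on monitored environment parameters. The CPU-load and push-latency thresholds in this sketch are assumed values, not figures from the patent:

```python
def execute_when_ready(renderer, measure, params, env):
    """Run the measure only if the live environment permits it."""
    if env.cpu_load < 0.8 and env.push_latency_ms < 400:  # assumed rendering condition
        renderer.execute(measure, params)
    else:
        env.defer(measure, params)  # retry on the next monitoring tick
```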
8. A live broadcast atmosphere special effect matching device, characterized by comprising:
a live broadcast data acquisition module, configured to acquire live broadcast data associated with an interaction node when the live broadcast reaches the interaction node, the live broadcast data comprising picture information and voice information of the anchor;
an atmosphere feature extraction module, configured to process the live broadcast data through an atmosphere feature extraction model to generate atmosphere features of the interaction node;
a measure matching module, configured to acquire an atmosphere rendering measure matched with the atmosphere features;
a parameter setting module, configured to set rendering parameters of the atmosphere rendering measure according to the live broadcast data;
and an atmosphere rendering module, configured to execute the atmosphere rendering measure according to the rendering parameters.
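The five modules of claim 8 mirror the steps of claim 1; wired together, they might look like this (all interfaces hypothetical):

```python
class AtmosphereMatcher:
    """Claim 8's five modules wired into one device object."""

    def __init__(self, acquirer, extractor, matcher, tuner, renderer):
        self.acquirer = acquirer    # live broadcast data acquisition module
        self.extractor = extractor  # atmosphere feature extraction module
        self.matcher = matcher      # measure matching module
        self.tuner = tuner          # parameter setting module
        self.renderer = renderer    # atmosphere rendering module

    def on_interaction_node(self, node):
        data = self.acquirer.acquire(node)
        features = self.extractor.extract(data)
        measure = self.matcher.match(features)
        params = self.tuner.set_parameters(measure, data)
        self.renderer.execute(measure, params)
```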
9. A computer device comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer-readable instructions, implements the live broadcast atmosphere special effect matching method of any one of claims 1 to 7.
10. One or more readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the live broadcast atmosphere special effect matching method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111088100.0A CN113810729B (en) | 2021-09-16 | 2021-09-16 | Live atmosphere special effect matching method, device, equipment and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113810729A | 2021-12-17
CN113810729B | 2024-02-02
Family
ID=78895677
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111088100.0A (granted as CN113810729B, active) | Live atmosphere special effect matching method, device, equipment and medium | 2021-09-16 | 2021-09-16 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113810729B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106464939A (en) * | 2016-07-28 | 2017-02-22 | 北京小米移动软件有限公司 | Method and device for playing sound effect |
US20200090395A1 (en) * | 2018-09-13 | 2020-03-19 | International Business Machines Corporation | Animation generation |
CN111541908A (en) * | 2020-02-27 | 2020-08-14 | 北京市商汤科技开发有限公司 | Interaction method, device, equipment and storage medium |
CN111405307A (en) * | 2020-03-20 | 2020-07-10 | 广州华多网络科技有限公司 | Live broadcast template configuration method and device and electronic equipment |
CN112616063A (en) * | 2020-12-11 | 2021-04-06 | 北京字跳网络技术有限公司 | Live broadcast interaction method, device, equipment and medium |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114866791A (en) * | 2022-03-31 | 2022-08-05 | 北京达佳互联信息技术有限公司 | Sound effect switching method and device, electronic equipment and storage medium |
CN115361567A (en) * | 2022-07-07 | 2022-11-18 | 广州博冠信息科技有限公司 | Interaction method and device in live broadcast and electronic equipment |
CN115720279A (en) * | 2022-11-18 | 2023-02-28 | 杭州面朝信息科技有限公司 | Method and device for displaying any special effect in live scene |
CN115720279B (en) * | 2022-11-18 | 2023-09-15 | 杭州面朝信息科技有限公司 | Method and device for showing arbitrary special effects in live broadcast scene |
Also Published As
Publication number | Publication date |
---|---|
CN113810729B (en) | 2024-02-02 |
Similar Documents
Publication | Title
---|---
CN113810729A (en) | Live broadcast atmosphere special effect matching method, device, equipment and medium
US10467792B1 (en) | Simulating communication expressions using virtual objects
CN107918653B (en) | Intelligent playing method and device based on preference feedback
CN109829039B (en) | Intelligent chat method, intelligent chat device, computer equipment and storage medium
CN109461437B (en) | Verification content generation method and related device for lip language identification
CN111667557B (en) | Animation production method and device, storage medium and terminal
CN111506794A (en) | Rumor management method and device based on machine learning
CN107895016A (en) | Method and apparatus for playing multimedia
CN111444379B (en) | Audio feature vector generation method and audio fragment representation model training method
CN113780217A (en) | Live broadcast auxiliary prompting method and device, computer equipment and storage medium
JP2022020659A (en) | Method and system for recognizing emotion during conversation and utilizing the recognized emotion
CN114638232A (en) | Method and device for converting text into video, electronic equipment and storage medium
CN114138960A (en) | User intention identification method, device, equipment and medium
CN110781327B (en) | Image searching method and device, terminal equipment and storage medium
CN114286154A (en) | Subtitle processing method and device for multimedia file, electronic equipment and storage medium
CN115115753A (en) | Animation video processing method, device, equipment and storage medium
KR102441456B1 (en) | Method and system for mimicking the tone and style of a real person
CN113268635B (en) | Video processing method, device, server and computer-readable storage medium
US11704585B2 (en) | System and method to determine outcome probability of an event based on videos
CN114422824A (en) | Data processing method, video processing method, display method and device
US20240320519A1 (en) | Systems and methods for providing a digital human in a virtual environment
CN116741143B (en) | Digital-body-based personalized AI business card interaction method and related components
CN113626622B (en) | Multimedia data display method in interactive teaching and related equipment
CN115237248B (en) | Virtual object display method, device, equipment, storage medium and program product
US20230259693A1 (en) | Automated Generation Of Commentator-Specific Scripts
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant