CN113825031A - Live content generation method and device - Google Patents

Live content generation method and device

Info

Publication number
CN113825031A
CN113825031A
Authority
CN
China
Prior art keywords
live
target
live broadcast
text
segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111386008.2A
Other languages
Chinese (zh)
Inventor
谢力群
黄齐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Damo Institute (Hangzhou) Technology Co., Ltd.
Alibaba Dharma Academy
Alibaba Cloud Computing Ltd
Original Assignee
Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Damo Institute Hangzhou Technology Co Ltd filed Critical Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority to CN202111386008.2A
Publication of CN113825031A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23412 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

An embodiment of the present specification provides a live content generation method and apparatus. The method is applied to a virtual live control system and includes: performing live broadcasting in a live room using a virtual character; acquiring a target event occurring in the live room; acquiring a corresponding live text based on the target event; performing scene construction on the live text based on a scene protocol processing rule, and determining a to-be-broadcast segment setting corresponding to the live text; placing the to-be-broadcast segment setting at a target play position in a live waiting queue based on the event type of the target event; in response to live playback reaching the target play position, generating target live content of the virtual character according to the to-be-broadcast segment setting; and sending the target live content to a client.

Description

Live content generation method and device
Technical Field
The embodiments of this specification relate to the field of computer technology, and in particular to a live content generation method.
Background
To make live broadcasts more engaging and interactive, avatar live broadcasting has become an important part of live services and has taken an increasingly large share of them in recent years. During a live broadcast, a preset virtual image, such as a panda or a rabbit, can replace the anchor's real appearance. In current avatar live broadcasting, however, a pre-written script is usually performed with a pre-designed virtual scene and avatar; for an unplanned event during the broadcast, such as a question from a viewer, the virtual anchor cannot give corresponding feedback. Interactivity between the virtual anchor and users is therefore poor, and the viewing experience suffers.
Disclosure of Invention
In view of this, this specification provides a live content generation method. One or more embodiments of this specification also relate to a live content generation apparatus, a computing device, and a computer-readable storage medium, so as to address technical deficiencies in the prior art.
According to a first aspect of the embodiments of this specification, a live content generation method applied to a virtual live control system is provided, including:
performing live broadcasting in a live room using a virtual character;
acquiring a target event occurring in the live room;
acquiring a corresponding live text based on the target event;
performing scene construction on the live text based on a scene protocol processing rule, and determining a to-be-broadcast segment setting corresponding to the live text;
placing the to-be-broadcast segment setting at a target play position in a live waiting queue based on the event type of the target event;
in response to live playback reaching the target play position, generating target live content of the virtual character according to the to-be-broadcast segment setting; and
sending the target live content to a client.
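The steps of the first aspect can be sketched as a minimal, runnable loop. Everything below — the class names, the in-memory text database, and the scene stub — is an illustrative assumption for exposition, not the patent's actual implementation:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class TargetEvent:
    event_type: str   # "intercut" (e.g. a viewer question) or "sequential"
    keyword: str

# Stand-in for the preset text database the merchant configures.
TEXT_DB = {"red_packet": "A red packet is on its way!"}

@dataclass
class SegmentSetting:
    live_text: str
    scene: str        # scene data attached during scene construction

def build_segment(text: str) -> SegmentSetting:
    # Scene construction under a (stubbed) scene protocol processing rule.
    return SegmentSetting(live_text=text, scene="default_scene")

def handle_event(event: TargetEvent, waiting_queue: deque) -> None:
    text = TEXT_DB[event.keyword]          # acquire the corresponding live text
    setting = build_segment(text)          # determine the segment setting
    if event.event_type == "intercut":
        waiting_queue.appendleft(setting)  # inter-cut: play as soon as possible
    else:
        waiting_queue.append(setting)      # sequential: play in script order

queue = deque()
handle_event(TargetEvent("intercut", "red_packet"), queue)
print(queue[0].live_text)
```

The head of the queue is the next segment the virtual character performs; rendering it and sending it to the client corresponds to the last two claimed steps.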
According to a second aspect of the embodiments of this specification, a live content generation apparatus applied to a virtual live control system is provided, including:
a live module configured to perform live broadcasting in a live room using a virtual character;
an event acquisition module configured to acquire a target event occurring in the live room;
a text acquisition module configured to acquire a corresponding live text based on the target event;
a scene processing module configured to perform scene construction on the live text based on a scene protocol processing rule and determine a to-be-broadcast segment setting corresponding to the live text;
a segment placement module configured to place the to-be-broadcast segment setting at a target play position in a live waiting queue based on the event type of the target event;
a content generation module configured to, in response to live playback reaching the target play position, generate target live content of the virtual character according to the to-be-broadcast segment setting; and
a content sending module configured to send the target live content to a client.
According to a third aspect of the embodiments of this specification, a computing device is provided, including:
a memory and a processor;
the memory is configured to store computer-executable instructions, and the processor is configured to execute them; when executing the computer-executable instructions, the processor implements the steps of the live content generation method.
According to a fourth aspect of the embodiments of this specification, a computer-readable storage medium is provided that stores computer-executable instructions which, when executed by a processor, implement the steps of any of the live content generation methods.
An embodiment of this specification provides a live content generation method applied to a virtual live control system, including: performing live broadcasting in a live room using a virtual character; acquiring a target event occurring in the live room; acquiring a corresponding live text based on the target event; performing scene construction on the live text based on a scene protocol processing rule, and determining a to-be-broadcast segment setting corresponding to the live text; placing the to-be-broadcast segment setting at a target play position in a live waiting queue based on the event type of the target event; in response to live playback reaching the target play position, generating target live content of the virtual character according to the to-be-broadcast segment setting; and sending the target live content to a client.
Specifically, for a target event determined to occur in the live room, scene construction is performed on the live text obtained for that event to generate the corresponding to-be-broadcast segment setting, and the setting is placed in the appropriate live waiting queue according to the event type. When the target event is an inter-cut event, it can thus be responded to in time: the to-be-broadcast segment setting corresponding to the inter-cut event is generated and inter-cut into the live room. In other words, the virtual character can give timely feedback on the target event, which strengthens the interactivity between the virtual character and the audience and improves the users' viewing experience.
Drawings
Fig. 1 is a system architecture diagram of a live content generation method applied to a virtual live control system according to an embodiment of the present specification;
fig. 2 is a flowchart of a live content generation method provided in an embodiment of the present specification;
fig. 3 is a schematic structural diagram of a scene protocol constructor in a live content generation method according to an embodiment of the present specification;
fig. 4 is a schematic diagram of an addressing enqueue process of a live content generation method according to an embodiment of the present specification;
fig. 5 is a schematic processing procedure diagram of a live content generation method applied to a virtual live control system according to an embodiment of the present specification;
fig. 6 is a schematic structural diagram of a live content generation apparatus according to an embodiment of the present specification;
fig. 7 is a block diagram of a computing device according to an embodiment of the present disclosure.
Detailed Description
In the following description, numerous specific details are set forth to provide a thorough understanding of this specification. This specification may, however, be embodied in many forms other than those described here, and those skilled in the art can make similar extensions without departing from its spirit and scope; this specification should therefore not be construed as limited to the embodiments set forth below.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used in one or more embodiments to describe various information, the information should not be limited by these terms. The terms are only used to distinguish one type of information from another. For example, a "first" can also be referred to as a "second" and, similarly, a "second" as a "first" without departing from the scope of one or more embodiments of this specification. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
First, the terms used in one or more embodiments of this specification are explained.
Live room: the presentation window of a real-time live stream; each live stream corresponds to one live room, and a live room is unique within a service platform.
Script: a pre-written live plan used to guide the broadcast. The script determines 1) which links (show segments) there are; 2) when each takes place; 3) what each link does and how long it takes; 4) what performances should be given; 5) what lines are spoken; 6) what actions the anchor makes; and 7) how the surroundings follow the scene. A script is composed of multiple scenes, but it is not bound to an anchor: one script is a set of scenes, and different anchors can broadcast with the same script.
Scene: a building block of a virtual live room and of the script. A scene (an abstract concept defined here) is the smallest unit that can be broadcast on its own; for example, the introduction of one commodity is an independent scene.
Segment: a scene is composed of multiple segments; a segment inherits part of the scene's playing factors and is the smallest unit at which playback can be interrupted for an inter-cut.
Event: a live-room presentation unrelated to the anchor (for example, a live-room reminder or a live-room ambient special effect).
Rendering protocol: a custom protocol that drives the rendering of the digital human and the live room, i.e., that drives the broadcast's expressiveness.
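The terms above imply a simple containment hierarchy: a script holds scenes, a scene holds segments, and a segment is the smallest interruptible unit. One possible data model — all field names are assumptions for illustration, not the patent's schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Segment:
    text: str                   # what the virtual anchor says in this segment
    interruptible: bool = True  # segment = smallest unit that can be inter-cut

@dataclass
class Scene:
    name: str                   # e.g. "introduce commodity A"
    segments: List[Segment] = field(default_factory=list)

@dataclass
class Script:
    # Not bound to a specific anchor: different virtual anchors
    # can perform the same script.
    scenes: List[Scene] = field(default_factory=list)

script = Script(scenes=[
    Scene("greeting", [Segment("Welcome to the live room!")]),
    Scene("commodity A", [Segment("Here is commodity A."),
                          Segment("It ships tomorrow.")]),
])
total_segments = sum(len(s.segments) for s in script.scenes)
print(total_segments)
```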
For a virtual human to broadcast live, it must be driven by live content. Merchants need the virtual anchor to broadcast around the clock without anyone online, to answer consumers' questions and interact with them, and to introduce the merchant's goods — thereby completing the broadcast. In practice, live text content must be performed and ultimately become a live stream. The play engine provides several playback capabilities — sequential, timed, looped, inter-cut, and so on — so that content, whether from a preset script or decided on the fly from live-room events, can be expressed in time. In the live content generation method provided by the embodiments of this specification, the cloud play engine that drives the virtual human becomes a bridge directly connecting text content and the live stream: live content can be broadcast in real time, promptly, and in order, and a merchant can hand a pre-written live script to the system for management so that timely interactive responses can be made to situations arising during the broadcast, realizing virtual-human live broadcasting for the merchant. This fills the idle time after a real anchor goes offline and gives merchants without an anchor a chance to catch the live-streaming trend.
In this specification, a live content generation method is provided, and the specification also relates to a live content generation apparatus, a computing device, and a computer-readable storage medium, which are described in detail in the following embodiments one by one.
Fig. 1 is a system architecture diagram illustrating a virtual live control system to which a live content generation method provided according to an embodiment of the present specification is applied.
Fig. 1 includes a producer A, a play engine B, and a client C (consumer). The producer A includes a director system and a decision system; the play engine B supports sequential play, timed play, uninterrupted play, inter-cut play, looped play, and breakpoint-resumed play; the client C can be understood as the clients on which multiple consumers watch the broadcast. The play engine B and the client C are connected through an IM channel; the specific connection channel can take various forms and is not limited here.
In a specific implementation of the live content generation method provided in this specification, the director system provides a live script for the virtual broadcast, from which live text content can be obtained; meanwhile, for an event occurring in the current live room, the decision system can obtain the text content that should be fed back to the broadcast. When the producer A sends text content to the play engine B, the engine arranges a specific play position for it: content can be played sequentially in script order, or inter-cut according to the event type, with sequential play resuming after the inter-cut. The generated live content is finally sent to the client C through the connection channel.
In practical applications, the live content generation method provided in the embodiments of this specification enables the virtual character, during a virtual broadcast, to interact with the audience or give timely feedback on barrage (bullet-screen) questions sent in the live room, implementing an inter-cut function during the broadcast and improving the audience's viewing experience.
In the live content generation method provided in the embodiments of this specification, the play engine in the virtual live control system processes the live text content sent by the producer and generates live content in different play modes for different live texts, so that content can be inter-cut and then resumed in the live room, giving the audience timely feedback and making interaction with the virtual anchor more engaging.
It should be noted that the live content generation method provided in the embodiments of the present specification can be applied to an e-commerce virtual live scene, a game virtual live scene, an education virtual live scene, an animation virtual live scene, a social virtual live scene, and the like. For convenience of understanding, the live content generation method provided in the embodiment of the present specification takes an e-commerce virtual live scene as an example, and a specific live content generation method is described in detail.
Referring to fig. 2, fig. 2 is a flowchart illustrating a live content generation method according to an embodiment of the present specification, which specifically includes the following steps.
Step 202: and carrying out live broadcast in a live broadcast room by utilizing the virtual character.
The live content generation method provided by the embodiments of this specification is applied to virtual-character live broadcasting. It ensures that the virtual character can broadcast continuously around the clock, makes the broadcast more engaging, and fills the time gaps of real-person anchors, so that the audience can view the goods sold in the live room, or other live content, at any time. Note that the method does not limit the specific content of the broadcast: it may be goods, events, and so on.
Step 204: and acquiring a target event occurring in the live broadcast room.
In this specification, target events fall into two categories. The first is inter-cut events: urgent events that must be cut into the normal sequential broadcast, such as answering a barrage question, triggering a red packet based on a barrage password, or playing a game. The second is sequential events: events played in script order, such as explaining a commodity or activity, dancing, or speaking.
In practical applications, the play engine can acquire a target event occurring in the current live room. For example, when at least one user in the comment area sends a red-packet password and the number of such passwords reaches a preset threshold, the live room's red-packet mechanism is triggered; the target event acquired for the current live room is then a red-packet event, which is an inter-cut event to be handled promptly.
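The threshold trigger just described can be sketched as follows; the threshold value, the password string, and the shape of the emitted event are invented for illustration:

```python
# Count red-packet passwords in the comment stream and emit an inter-cut
# event once a preset threshold is met (values are assumptions).
RED_PACKET_THRESHOLD = 3

def detect_red_packet_event(comments, password="red packet"):
    hits = sum(1 for c in comments if password in c.lower())
    if hits >= RED_PACKET_THRESHOLD:
        return {"type": "intercut", "name": "send_red_packet"}
    return None   # threshold not met: no target event yet

comments = ["Red packet please!", "red packet", "nice stream", "RED PACKET!"]
event = detect_red_packet_event(comments)
print(event["name"])
```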
Step 206: and acquiring a corresponding live text based on the target event.
In practical applications, after a target event occurs in the current live room, the live text corresponding to it can be acquired, so that the virtual character can subsequently be controlled to generate corresponding live content from the text, display it in the live room, and give timely feedback on the target event.
Note that target events in the embodiments of this specification are of two types: the first is inter-cut events, the second sequential events. The play engine distinguishes the live content processing for the two types, as the following embodiments describe in detail.
To let the virtual character handle live-room events in time and give users prompt feedback, the live content generation method provided in this embodiment acquires the text content corresponding to the target event from a preset text database, which makes subsequent processing convenient. Specifically, when the target event is an inter-cut event,
correspondingly, acquiring the corresponding live text based on the target event includes:
acquiring, based on the inter-cut event, the inter-cut text corresponding to the inter-cut event from a preset text database.
The preset text database can be understood as reply texts, stored in advance by the merchant, that correspond to events; the virtual character performs in the live room according to the corresponding reply text.
In a specific implementation, when the target event is determined to be an inter-cut event in the live room, the inter-cut text corresponding to it is acquired from the preset text database according to the specific content of the event.
For example, when the inter-cut event is a red-packet event, the play engine receives red-packet comments from the live room's comment area, i.e., extracts the keyword "red packet" from the comments, and searches the preset text database for live text matching that keyword, such as the virtual anchor's lines for the red-packet event.
Note that this embodiment does not limit how the live text corresponding to the target event is obtained: it may be obtained by keyword matching in the text database or through a text-query model; other approaches are not described in detail here.
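The keyword-matching option mentioned above might look like the sketch below; the database entries and function name are invented for illustration:

```python
# Stand-in for the merchant-configured preset text database.
PRESET_TEXT_DB = {
    "red packet": "Thanks everyone! The red packet drops in ten seconds.",
    "size": "This jacket runs true to size; see the chart on screen.",
}

def lookup_intercut_text(comment: str):
    comment = comment.lower()
    for keyword, reply in PRESET_TEXT_DB.items():
        if keyword in comment:
            return reply
    return None   # no match: fall back to e.g. a text-query model

print(lookup_intercut_text("When is the RED PACKET coming?"))
```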
In the live content generation method provided in this embodiment, by determining the event type of the target event, the inter-cut text corresponding to the event is obtained from the preset text database according to that type, so that the virtual character's inter-cut content can subsequently be generated from it and displayed in the live room.
Step 208: and carrying out scene construction processing on the live broadcast text based on a scene protocol processing rule, and determining the setting of the segment to be live broadcast corresponding to the live broadcast text.
The scene protocol processing rule can be understood as a rule for adding live scene data to the live text — for example, rules for scene construction, scene segmentation, scene assembly, and so on applied to the acquired live text.
The to-be-broadcast segment setting can be understood as the configuration data of the live text after scene data has been added, including the live text, the live scene, and other data.
In practical applications, after the live text is acquired from the preset text database, scene construction can be performed on it according to the scene protocol processing rule: the live scene data is added, and the setting information of the to-be-broadcast segment corresponding to the live text is determined.
Further, performing scene construction on the live text based on the scene protocol processing rule and determining the to-be-broadcast segment setting corresponding to the live text includes:
determining, based on the inter-cut text, inter-cut scene data matching the inter-cut text from a preset scene database; and
performing construction processing on the inter-cut scene data and the inter-cut text based on the scene protocol processing rule, and determining the to-be-broadcast segment setting corresponding to the inter-cut text.
In practical applications, different text content may correspond to different play scenes, where a play scene can be understood as a foreground scene, a live background, and so on. The play engine can determine, based on the inter-cut text, the matching inter-cut scene data from a preset scene database. Note that the preset scene database is configured by the merchant according to the different live application scenarios that need to be supported.
Furthermore, the play engine can perform construction processing on the inter-cut scene data and the inter-cut text based on the scene protocol processing rule. Specifically, the construction processing may include scene construction, scene segmentation, dynamic segment expansion, behavior analysis of the live text, and the addition of expressiveness components (decorative text, subtitles, special effects, camera moves, and the like). After this processing, the to-be-broadcast segment setting corresponding to the inter-cut text is determined; it can be regarded as the play configuration data after scene protocol processing.
Referring to fig. 3, fig. 3 is a schematic structural diagram illustrating a scene protocol constructor in a live content generation method provided by an embodiment of the present specification.
Fig. 3 can be divided into three parts: the first is the scene play data, the second the protocol construction process, and the third the play engine's play protocol.
In practical applications, the scene play data — the script, script scenes, resources, room data, and so on corresponding to the target event — can be obtained from a database. The protocol construction process applies parameter verification, material parsing, scene construction, scene segmentation, dynamic segment expansion, behavior analysis, expressiveness component expansion, timeline alignment, and rendering-protocol adaptation to the scene play data, then performs scene assembly. Finally, at least one rendering protocol segment is generated in the third part. A rendering protocol segment can be understood as data from which the to-be-broadcast content can be generated; the specific protocol data is not limited in this embodiment.
It should be noted that this embodiment only briefly introduces the specific processing procedure in the scene protocol constructor. In actual application, the construction processing of adding the live text into the scene is not limited to the description in this embodiment, and other processing manners may also be adopted to perform the protocol construction processing on the scene playing data, so as to blend the live text into the corresponding scene data and control the virtual character to speak the live text in the corresponding scene during the live broadcast.
In the live content generation method provided in the embodiment of the present specification, the inter-cut scene data matched with the live text is acquired from the scene database, the inter-cut scene data and the inter-cut text are constructed and processed according to the scene protocol processing rule, and the scene data and the text are fused to determine the to-be-live-broadcast segment settings corresponding to the inter-cut text, so that a subsequent playing engine can generate the content to be live broadcast corresponding to the inter-cut text and a virtual character can inter-cut the content to be live broadcast.
Step 210: and setting and placing the segment to be live broadcast at a target playing position in the live broadcast waiting queue based on the event type of the target event.
In this embodiment, the double-queue buffer area is divided into two queues according to a priority playing sequence, one is a priority queue, and the other is a normal queue. Specifically, the inter-cut content may be placed in a priority queue for inter-cut playing, and the sequential playing content may be placed in a normal queue for sequential playing.
In specific implementation, whether the target event is a cut-in event or a sequential playing event can be determined according to the event type of the target event, and then the target playing position where the segment to be live is set and placed in the live broadcasting waiting queue is determined, so that a subsequent live broadcasting room can respond to the target playing position to play the generated live broadcasting content.
In practical application, when the event type of the target event is determined to be the inter-cut type, the segment to be live-broadcast can be set and placed in a priority queue in a double-queue buffer zone to wait for being broadcast; specifically, the setting and placing the to-be-live-broadcast clip at the target play position in the live broadcast waiting queue based on the event type of the target event includes:
under the condition that the event type of the target event is determined to be a first event type, determining a live broadcast waiting queue corresponding to the segment to be live broadcast in a queue buffer area as an inter-cut waiting queue;
and setting the segment to be live broadcast at a target playing position of the inter-cut waiting queue.
The first event type can be understood as an inter-cut type, and represents that a to-be-live segment corresponding to the target event is set to be live in an inter-cut mode.
In practical application, when the event type of a target event is determined to be the inter-cut type, the to-be-live-broadcast segment settings corresponding to the target event are placed in the inter-cut waiting queue in the double-queue buffer area to wait. The inter-cut waiting queue may be understood as the priority queue; placing to-be-live-broadcast segment settings in the priority queue indicates that the current live content needs to be interrupted and those segments inter-cut during the live broadcast of the current content. Since there may already be at least one to-be-live-broadcast segment in the priority queue, after determining that the segment corresponding to the current target event needs to be placed in the priority queue, it is also necessary to determine the specific target playing position at which to place it, for example, the first playing position in the priority queue, or the second playing position behind the first playing position.
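The double-queue behavior described above can be sketched as follows; the class and event-type labels are hypothetical, but the dequeue rule (the normal queue is served only when the priority queue is empty) follows the description in this embodiment.

```python
from collections import deque

INTER_CUT = "inter_cut"    # first event type: inter-cut
SEQUENTIAL = "sequential"  # second event type: sequential playing


class DoubleQueueBuffer:
    def __init__(self):
        self.priority = deque()  # inter-cut waiting queue
        self.normal = deque()    # sequential playing waiting queue

    def enqueue(self, segment, event_type):
        # inter-cut segments go to the priority queue, all others to the normal queue
        target = self.priority if event_type == INTER_CUT else self.normal
        target.append(segment)

    def dequeue(self):
        # the normal queue is served only when the priority queue is empty
        if self.priority:
            return self.priority.popleft()
        if self.normal:
            return self.normal.popleft()
        return None


buf = DoubleQueueBuffer()
buf.enqueue("goods_intro", SEQUENTIAL)
buf.enqueue("red_packet", INTER_CUT)
order = [buf.dequeue(), buf.dequeue()]
print(order)  # ['red_packet', 'goods_intro']
```

Even though the sequential segment was enqueued first, the inter-cut segment is dequeued first, which is exactly the pre-emption the priority queue provides.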
In the live content generation method provided in the embodiment of the present specification, a to-be-live clip is determined to be placed in a live waiting queue according to an event type of a target event, and a target play position is determined in the live waiting queue, so that a subsequent live broadcast room responds to a current target play position to realize inter-play of the to-be-live clip.
The playing engine can also determine the specific inter-cut position of the inter-cut content in the inter-cut waiting queue. Two inter-cut types are provided in this embodiment. The first is a timely inter-cut type, which may be understood as follows: if the virtual character has not yet finished explaining the current commodity, a response to a question in the comment area should be inter-cut immediately, so as to reflect timeliness. The second is a non-timely inter-cut type, which may be understood as follows: although the content is still inter-cut, it is not inter-cut immediately, but after the explanation of the current commodity is finished and before the explanation of the next commodity begins. Based on this, the two inter-cut types differ in the playing position at which the inter-cut content is placed. Specifically, the placing the to-be-live-broadcast segment settings at the target playing position of the inter-cut waiting queue includes:
judging whether the priority set by the segment to be live broadcast is a target priority,
if yes, placing the to-be-live-broadcast clip at a first target playing position of the inter-broadcast waiting queue;
if not, the to-be-live-broadcast clip is placed at a second target playing position of the inter-broadcast waiting queue.
The target priority can be understood as a high priority indicating that the to-be-live-broadcast segment settings should be inter-cut into the live broadcast room in a timely manner.
The first target playing position can be understood as the position for to-be-live-broadcast segment settings that need to be inter-cut immediately, interrupting the content that the virtual character in the current live broadcast room is playing.
The second target playing position can be understood as the position for to-be-live-broadcast segment settings that can be inter-cut only after the currently playing content is finished; that is, the to-be-live-broadcast segment settings placed at the second target playing position can be played only after those placed at the first target playing position have finished playing.
In practical application, the playing engine further needs to judge whether the priority of the to-be-live-broadcast segment settings is the high priority that requires timely inter-cut into the live broadcast room. If so, the segment can be placed at the first target playing position, and the settings at that position can then be responded to immediately and inter-cut into the live broadcast room. For example, if the priority of the to-be-live-broadcast segment of a password red packet event placed in the inter-cut waiting queue is determined to be the target priority, the segment may be placed at the first target playing position. When the live broadcast room responds to the next segment, the live content currently being played by the virtual character can be interrupted in time in favor of the segment corresponding to the password red packet event; for example, the normal explanation of commodity A may be interrupted, and the virtual character may then play the red packet event in the live broadcast room, saying, for example, "Everyone get ready, a red packet is about to be sent!"
After a timely inter-cut segment has been set, if another to-be-live-broadcast segment is placed in the priority queue behind it, that segment can be set to the non-timely inter-cut type for playing. In practical application, when the priority of the to-be-live-broadcast segment settings is determined not to be the target priority, the content can be understood as content that is not inter-cut immediately; the segment settings are then placed at the second target playing position in the inter-cut waiting queue, so that they are inter-cut after the explanation of the current live segment in the live broadcast room is finished.
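The placement rule above can be sketched like this (the priority label is hypothetical): a timely segment is put at the first target playing position, i.e. the head of the inter-cut waiting queue, while a non-timely one goes behind it.

```python
from collections import deque

TARGET_PRIORITY = "timely"  # hypothetical label for the timely inter-cut type


def place_in_intercut_queue(queue: deque, segment: dict):
    if segment.get("priority") == TARGET_PRIORITY:
        queue.appendleft(segment)  # first target playing position: play at once
    else:
        queue.append(segment)      # second target playing position: play later


q = deque()
place_in_intercut_queue(q, {"id": "answer_barrage", "priority": "later"})
place_in_intercut_queue(q, {"id": "red_packet", "priority": "timely"})
print([s["id"] for s in q])  # ['red_packet', 'answer_barrage']
```

The red packet event jumps ahead of the earlier bullet-screen answer, matching the two examples in the text.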
For example, consider an inter-cut event of answering bullet-screen (barrage) questions. While the live broadcast room is explaining commodity A, several viewers ask questions in the bullet screen about how to use commodity A. When the playing engine determines that the priority of this question-answering event is not the timely inter-cut type, it can place the to-be-live-broadcast segment settings corresponding to the event at the second target playing position in the inter-cut waiting queue, so that the bullet-screen questions are answered together after the virtual character finishes the current explanation of commodity A, for example, by explaining the specific usage of commodity A again in detail.
It should be noted that, in a manner of determining the specific priority of the inter-cut event occurring in the live broadcast room, the priorities of different inter-cut events may be determined according to different application scenarios, and this is not limited in this embodiment of the present specification.
Further, after the virtual character finishes playing the inter-cut event, the playing engine may resume the interrupted playing content; continuing the above example, the resumed content may be the explanation of commodity A. In practical application, after the live content of the inter-cut event has been played, the playing engine can further judge whether another inter-cut event is still waiting in the priority queue. If one exists, that is, to-be-live-broadcast segment settings are present at the first target playing position, the engine can continue to respond to the segment settings placed at the first target playing position in the priority queue. If no inter-cut event exists in the priority queue, that is, no segment settings are present at the first target playing position, the virtual character in the live broadcast room can play the scheduled content normally without responding to the first target playing position.
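The resume-after-inter-cut logic might look like this sketch (names hypothetical): the priority queue is drained first, and only when it is empty is the interrupted segment resumed.

```python
def next_content(priority_queue: list, interrupted: list):
    """Pick what the virtual character plays next after an inter-cut ends."""
    if priority_queue:
        # another inter-cut event is waiting at the first target playing position
        return priority_queue.pop(0)
    if interrupted:
        # breakpoint resumption: continue the interrupted explanation
        return interrupted.pop()
    return None


priority_queue = [{"id": "red_packet"}]
interrupted = [{"id": "goods_A_explanation"}]

first = next_content(priority_queue, interrupted)
second = next_content(priority_queue, interrupted)
print(first["id"], second["id"])  # red_packet goods_A_explanation
```

Modeling the interrupted content as a stack means the most recently paused explanation is the one resumed, which is the behavior described for the commodity A example.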
In the live broadcast content generation method provided in the embodiment of the present specification, by determining the priority of the setting of the segment to be live broadcast, a specific target play position in the inter-cut waiting queue is determined, and the segment to be live broadcast is set and placed at the corresponding target play position, so that it is convenient to subsequently respond to the setting of the segment to be live broadcast at the target play position, and inter-cuts of different types of content to be live broadcast in a live broadcast room are realized.
Based on the above, the two playing modes of inter-cut events can satisfy the inter-cut playing of to-be-live-broadcast segment settings, placing the segments at different playing positions in the priority queue of the double-queue buffer area to adapt to different inter-cut requirements. That is to say, in the current live broadcast room, if the virtual character is required to inter-cut a segment immediately, the to-be-live-broadcast segment settings are placed at the first target playing position; if a non-timely inter-cut is required, they are placed at the second target playing position.
In addition, when the target event is determined to be the sequential playing event, the sequential playing event can be played in the live broadcast room by the virtual character according to the script sequence preset by the merchant; specifically, the target event is a sequential play event,
correspondingly, the obtaining of the corresponding live text based on the target event includes:
and acquiring a sequential playing text which is played sequentially in a playing sequence preset in a preset text database based on the sequential playing event in the preset text database.
The specific meaning of the preset text database can refer to the explanation of the preset text database in the above embodiment, which is not described herein again; the sequentially played text may be understood as a text that is previously stored in a preset text database and played in a time sequence.
In practical applications, when the target event is determined to be a sequential playing event, the playing engine obtains a sequential playing text corresponding to the sequential playing event from a preset text database according to the specific content of the event; for example, the sequential playing text can be searched for in the preset text database by keyword, or a sequential playing text suitable for the virtual character to perform can be obtained in other manners.
For example, if the sequential playing event is a scripted performance event, the playing engine may determine, from a script pre-designed by a merchant in the preset text database, the performance text corresponding to the event, for example, the text content that the virtual character should express when hosting the performance in the live broadcast.
In the live content generation method provided in the embodiment of the present specification, by determining that a target event is a sequential play event, a sequential play text corresponding to the sequential play event is acquired from a preset text database, so that content of a corresponding virtual character sequentially played based on the sequential play text is generated and displayed in a live room.
Further, the playing engine can process the scene data played in the live broadcast according to the scene protocol processing rule so as to determine the setting of the segment to be live broadcast corresponding to the sequence playing event; specifically, the scene construction processing is performed on the live broadcast text based on the scene protocol processing rule, and the setting of the segment to be live broadcast corresponding to the live broadcast text is determined, including:
determining sequential playing scene data matched with the sequential playing text from a preset scene database based on the sequential playing text;
and constructing and processing the sequential playing scene data and the sequential playing text based on the scene protocol processing rule, and determining the setting of the segment to be live broadcast corresponding to the sequential playing text.
In specific implementation, the playing engine determines sequential playing scene data matched with the sequential playing text from a preset scene database according to the determined sequential playing text, and simultaneously constructs and processes the sequential playing scene data and the sequential playing text according to a scene protocol processing rule, and finally determines the setting of a segment to be live broadcast corresponding to the sequential playing text.
For example, if the sequentially played text is a live text for playing a game, the scene data required for playing the game can be acquired from the preset scene database. The game scene data and the game text content are then constructed and processed based on the scene protocol processing rule, including scene construction, scene segmentation, dynamic segment expansion, addition of expressive-force components, and the like. Finally, the to-be-live-broadcast segment settings corresponding to the sequentially played game text are determined, namely the playing configuration data obtained after the scene protocol processing.
It should be noted that, in the present embodiment, for the descriptions of the preset scene database and the scene protocol processing rule, reference may be made to the description part for processing the inter-cut text to generate the setting of the segment to be live broadcast, which is not described herein in detail.
In the live broadcast content generation method provided in the embodiment of the present specification, sequential playing scene data matched with the live text is acquired from the scene database, the sequential playing scene data and the sequential playing text are constructed and processed according to the scene protocol processing rule, and the scene data and the text are fused to determine the to-be-live-broadcast segment settings corresponding to the sequential playing text, so that a subsequent playing engine generates the content to be live broadcast corresponding to the sequential playing text and the virtual character plays the content sequentially.
Furthermore, the playing engine can set the segments to be live-played generated by the sequential playing text in a sequential live-playing waiting queue in the live-playing waiting queue so as to realize sequential playing of the sequential playing text content in the live-playing room; specifically, the setting and placing the to-be-live-broadcast clip at the target play position in the live broadcast waiting queue based on the event type of the target event includes:
under the condition that the event type of the target event is determined to be a second event type, determining a live broadcast waiting queue corresponding to the to-be-live-broadcast segment in a queue buffer area as a sequential live broadcast waiting queue;
and placing the segment to be live broadcast at the target playing position of the sequential live broadcast waiting queue.
And the second event type is a sequential playing type and represents that the to-be-live-played clip corresponding to the target event is set to be live-played in a sequential playing mode.
A sequential live wait queue may be understood as a normal queue in a double queue buffer.
In practical application, when the event type of the target event is determined to be the sequential playing type, the to-be-live-broadcast segment settings corresponding to the target event are placed in the sequential playing waiting queue in the double-queue buffer area to wait. The sequential playing waiting queue can be understood as the normal queue in the double-queue buffer area; the segment settings in the normal queue can be played in the live broadcast room in queue order only after no to-be-live-broadcast segment settings remain in the priority queue.
In the live broadcast content generation method provided in the embodiment of the present specification, the to-be-live broadcast clip of the sequential broadcast type is set at the target broadcast position of the sequential live broadcast waiting queue in the live broadcast waiting queue, so that when there is no inter-cut event in the live broadcast room, the to-be-live broadcast clip can be set in the live broadcast room according to the arrangement sequence in the sequential live broadcast waiting queue for sequential broadcast.
In addition, referring to fig. 4, fig. 4 is a schematic diagram illustrating an addressing enqueue process of a live content generation method provided by an embodiment of the present specification.
Fig. 4 shows a priority queue, a normal queue, and a current play queue, where the priority queue contains B1, A1, D1, and D2; the normal queue contains B2 and B3; and the current play queue is B1, A1, D1, D2, B2, B3. It should be noted that each identifier in a queue represents a to-be-played segment; the identifiers are used to distinguish the segments as they wait in the queues to be played.
According to the priority characteristics of the double-queue buffer area, when the priority queue contains segments to be played, those segments need to be inter-cut first; when the priority queue has no content to be played, the segments in the normal queue can be played, and while segments in the priority queue are being played, the segments in the normal queue remain in a waiting state. As can be seen from Fig. 4, B1 is first in the priority queue, so B1 is played first and appears first in the current play queue; A1, D1, and D2 are queued behind it waiting to be inter-cut, so after B1 they are played in order, and only after all to-be-live-broadcast segments in the priority queue have been played can B2 and B3 in the normal queue be played. In this way, the content that needs to be inter-cut is expressed sooner.
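The dequeue order in Fig. 4 can be reproduced with a trivial sketch: drain the priority queue, then the normal queue (identifiers as in the figure).

```python
def play_order(priority, normal):
    # segments in the priority queue are always expressed before the normal queue
    return list(priority) + list(normal)


priority = ["B1", "A1", "D1", "D2"]  # priority queue in Fig. 4
normal = ["B2", "B3"]                # normal queue in Fig. 4

current_play_queue = play_order(priority, normal)
print(current_play_queue)  # ['B1', 'A1', 'D1', 'D2', 'B2', 'B3']
```

The result matches the current play queue given in the figure description.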
Step 212: and responding to the target playing position where live broadcasting is carried out, and generating target live broadcasting content of the virtual character according to the setting of the segment to be live broadcasting.
In practical application, no matter what stage the content currently playing in the live broadcast room has reached, as long as to-be-live-broadcast segment settings exist in the priority queue of the double-queue buffer area, the target playing position where live broadcasting is carried out can be responded to, and the target live content of the virtual character can be generated according to the to-be-live-broadcast segment settings.
Further, the generating of the target live content of the virtual character according to the setting of the segment to be live broadcast includes:
performing axis processing on the setting of the segment to be live broadcast and the live broadcast text based on a preset algorithm model, and determining a matching relation between the setting of the segment to be live broadcast and the live broadcast text;
and generating target live broadcast content of the virtual character based on the matching relation.
The preset algorithm model can be understood as an algorithm model for performing axis processing on the text data in the to-be-live-broadcast segment settings; the specific algorithm model is not limited in this embodiment.
In practical application, the playing engine can perform axis processing on the text data in the to-be-live-broadcast segment settings through data processing and the algorithm model, determine the matching relation between the material content in the segment settings and the live text, turn the live text content into multi-modal live content through the matching relation, and generate the target live content performed by the virtual character in the live broadcast room, where axis processing can be understood as marking the live text against the to-be-live-broadcast segments. For example, if the live text is "the collar of this garment is white", a detail picture or video of the garment collar can be determined from the to-be-live-broadcast segment settings, and the display of that material is matched to the phrase "collar of this garment" in the live text; that is, the display starts when the phrase begins to be spoken and ends when it finishes, thereby generating the target live content of the virtual character. It should be noted that the specific generation manner of the target live content is not limited in this embodiment, and any manner of generating the target live content may be adopted.
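A minimal sketch of the axis processing described above, assuming it amounts to recording, for each piece of material, the span of the live text over which it should be displayed (function name and data layout hypothetical; a real system would mark time offsets rather than character offsets):

```python
def axis_marks(live_text: str, material_phrases: dict) -> list:
    """For each material, find the phrase it illustrates in the live text
    and record the (start, end) character span over which to display it."""
    marks = []
    for material, phrase in material_phrases.items():
        start = live_text.find(phrase)
        if start != -1:
            marks.append({"material": material,
                          "start": start,
                          "end": start + len(phrase)})
    return marks


text = "the collar of this garment is white"
marks = axis_marks(text, {"collar_detail.mp4": "collar of this garment"})
print(marks)  # [{'material': 'collar_detail.mp4', 'start': 4, 'end': 26}]
```

Converting character spans to speech timestamps (for example via a text-to-speech timeline) would yield the multi-modal matching relation the text describes.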
In the live content generation method provided by the embodiment of the present specification, the target live content is generated by performing axis processing on the to-be-live-broadcast segment settings, and the playing engine can control the virtual character to broadcast the target live content in the live broadcast room, thereby realizing uninterrupted live broadcasting by the virtual character.
Step 214: and sending the target live broadcast content to a client.
In practical application, audiences watch live virtual characters in a live broadcast room, and the playing engine can send generated target live broadcast content to the client, so that the audiences can watch the live broadcast content of the virtual characters in the live broadcast room at any time through the client.
To sum up, in the live content generation method provided in the embodiment of the present specification, a target event occurring in a live broadcast room is acquired, the live text corresponding to the target event is acquired, scene construction is performed on the live text, and the to-be-live-broadcast segment settings are determined; the segment settings are then ordered and inter-cut according to the event type of the target event, the queue order, and the priority characteristics, so as to realize breakpoint resumption, timed/sequential playing, and real-time inter-cut by the virtual character in the live broadcast room.
The live content generation method provided in this specification is further described below with reference to fig. 5, taking an application of the live content generation method in virtual live broadcasting as an example. Fig. 5 is a schematic processing procedure diagram illustrating that a live content generation method provided in an embodiment of the present specification is applied to a virtual live control system.
Fig. 5 includes a director system, an engine kernel (playing engine kernel), a scenario processor, and an instruction processor. Not only does the director system provide scripts for the scenario constructors (producers) in the engine kernel, but the event rule system and the algorithm decision system also participate in providing scenario capabilities for the producers. The engine kernel can support interaction, interrupting the content being explained in the live broadcast room and inserting content that needs a timely response; to this end, a multi-producer playing engine with multiple priorities is designed in the engine kernel to process the playing content of the script, and the event system and the decision system drive content changes in the live broadcast room according to the behaviors and events of users in the room. Because the production content of each producer has a different priority, a double-queue buffer area is designed, with one high-priority queue and one normal queue. Content is placed into the different queues based on priority; when dequeuing, content is taken from the high-priority queue until it has no data, and only then from the normal queue, and inter-cut timeliness and freshness selection under the same priority are also considered in this process. In some scenes, the scene content needs to be expressed completely, so an addressing-enqueue capability is designed: the queue to insert into is determined according to priority, the queue is then addressed downward to find the last segment of the same scene, and the new segment is inserted after it; when dequeuing, the current scene must be played completely before an insertion can take place.
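The addressing-enqueue step can be sketched as below (names hypothetical): scan the chosen queue for the last segment belonging to the same scene and insert the new segment right after it, so that a scene is always expressed completely before unrelated content.

```python
def addressed_enqueue(queue: list, segment: dict):
    """Insert the segment after the last queued segment of the same scene;
    if no segment of that scene is queued, append at the tail."""
    for i in range(len(queue) - 1, -1, -1):
        if queue[i]["scene"] == segment["scene"]:
            queue.insert(i + 1, segment)
            return
    queue.append(segment)


q = [{"id": "A1", "scene": "A"},
     {"id": "B1", "scene": "B"},
     {"id": "B2", "scene": "B"}]
addressed_enqueue(q, {"id": "A2", "scene": "A"})
print([s["id"] for s in q])  # ['A1', 'A2', 'B1', 'B2']
```

The new scene-A segment lands directly after A1 rather than at the tail, keeping scene A contiguous in the queue.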
In addition, the playing engine design also considers resuming playback after breakpoint reconnection and providing playing messages, for example for the anchor assistant and the decision system to judge the current live content and live state. In conclusion, the playing engine is a driving engine in which multiple producers perform addressed enqueueing based on priority, timeliness, and integrity, and consumers select and dequeue, helping the virtual anchor to live broadcast and interact like a real person.
In the live content generation method provided by the embodiment of the specification, the playing engine can support the uninterrupted live broadcast of the virtual character, and for the event needing to be responded timely, the virtual character can respond in the live broadcast room timely and interact with the audience in real time, so that the user experience of watching the live broadcast by the user is enhanced.
Corresponding to the above method embodiment, the present specification further provides an embodiment of a live content generating apparatus, and fig. 6 illustrates a schematic structural diagram of a live content generating apparatus provided in an embodiment of the present specification. As shown in fig. 6, the apparatus is applied to a virtual live broadcast control system, and includes:
a live broadcasting module 602 configured to perform live broadcasting in a live broadcasting room by using a virtual character;
an event obtaining module 604 configured to obtain a target event occurring in the live broadcast room;
a text obtaining module 606 configured to obtain a corresponding live text based on the target event;
a scene processing module 608 configured to perform scene construction processing on the live broadcast text based on a scene protocol processing rule, and determine a to-be-live-broadcast segment setting corresponding to the live broadcast text;
a segment placement module 610 configured to place the segment setting to be live broadcast at a target playing position in the live broadcast waiting queue based on an event type of the target event;
a content generating module 612, configured to generate, in response to the target playing position where live broadcasting is performed, target live broadcasting content of the virtual character according to the to-be-live-broadcast segment setting;
a content sending module 614 configured to send the target live content to a client.
Optionally, the segment placement module 610 is further configured to:
under the condition that the event type of the target event is determined to be a first event type, determining a live broadcast waiting queue corresponding to the segment to be live broadcast in a queue buffer area as an inter-cut waiting queue;
and setting the segment to be live broadcast at a target playing position of the inter-cut waiting queue.
Optionally, the segment placement module 610 is further configured to:
when the event type of the target event is determined to be a second event type, determining that the live broadcast waiting queue in a queue buffer corresponding to the to-be-live-broadcast segment setting is a sequential live broadcast waiting queue; and
placing the to-be-live-broadcast segment setting at a target playing position of the sequential live broadcast waiting queue.
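The queue selection just described can be sketched as follows. The concrete event-type labels, queue names, and data shapes are assumptions for illustration only, not the patented implementation:

```python
from collections import deque

# Hypothetical sketch: route a to-be-live-broadcast segment setting into one
# of two live waiting queues held in a queue buffer, keyed on the event type.
FIRST_EVENT_TYPE = "inter_cut"      # e.g. an interactive event that should cut in
SECOND_EVENT_TYPE = "sequential"    # e.g. the next scripted programme item

class QueueBuffer:
    """Queue buffer holding the inter-cut and sequential live waiting queues."""

    def __init__(self):
        self.inter_cut_queue = deque()
        self.sequential_queue = deque()

    def place(self, segment_setting, event_type):
        # choose the live waiting queue that corresponds to the event type
        if event_type == FIRST_EVENT_TYPE:
            self.inter_cut_queue.append(segment_setting)
        elif event_type == SECOND_EVENT_TYPE:
            self.sequential_queue.append(segment_setting)
        else:
            raise ValueError(f"unknown event type: {event_type!r}")

buf = QueueBuffer()
buf.place({"text": "thanks for the gift"}, FIRST_EVENT_TYPE)
buf.place({"text": "next programme item"}, SECOND_EVENT_TYPE)
```

Two separate queues (rather than one queue with flags) keep the inter-cut path free of the scripted backlog, which is what lets an interactive event be served promptly.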
Optionally, the text obtaining module 606 is further configured to:
acquiring, based on the inter-cut event, the inter-cut text corresponding to the inter-cut event from a preset text database.
Optionally, the scene processing module 608 is further configured to:
determining, based on the inter-cut text, inter-cut scene data matching the inter-cut text from a preset scene database; and
performing construction processing on the inter-cut scene data and the inter-cut text based on the scene protocol processing rule to determine the to-be-live-broadcast segment setting corresponding to the inter-cut text.
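As a rough illustration of this scene construction step, the sketch below looks up scene data matched to an inter-cut text and combines both into a segment setting. The database shape, matching rule, and field names are assumptions, not the patent's protocol:

```python
# Hypothetical preset scene database: keyword -> scene data for the character.
SCENE_DB = {
    "thanks": {"animation": "bow", "camera": "close-up"},
}

def match_scene(text, scene_db):
    # pick the scene data whose keyword appears in the text (assumed rule)
    for keyword, scene in scene_db.items():
        if keyword in text:
            return scene
    return {"animation": "idle", "camera": "wide"}   # assumed fallback scene

def build_segment_setting(text, scene_db, priority=1):
    # combine the matched scene data and the text into one segment setting
    scene = match_scene(text, scene_db)
    return {"text": text, "scene": scene, "priority": priority}

setting = build_segment_setting("thanks for the rocket gift", SCENE_DB)
```

The resulting dictionary stands in for the "to-be-live-broadcast segment setting": a single object carrying both what the character says and how the scene is staged.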
Optionally, the segment placement module 610 is further configured to:
determining whether the priority of the to-be-live-broadcast segment setting is a target priority;
if so, placing the to-be-live-broadcast segment setting at a first target playing position of the inter-cut waiting queue; and
if not, placing the to-be-live-broadcast segment setting at a second target playing position of the inter-cut waiting queue.
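The priority test can be sketched with a double-ended queue, where the head stands in for the first target playing position and the tail for the second. The concrete priority value is an assumption:

```python
from collections import deque

TARGET_PRIORITY = 0   # assumed: 0 marks the target (highest) priority

def place_in_inter_cut_queue(queue, segment_setting):
    if segment_setting["priority"] == TARGET_PRIORITY:
        queue.appendleft(segment_setting)   # first target playing position (head)
    else:
        queue.append(segment_setting)       # second target playing position (tail)

q = deque([{"text": "old", "priority": 1}])
place_in_inter_cut_queue(q, {"text": "urgent", "priority": 0})
place_in_inter_cut_queue(q, {"text": "normal", "priority": 1})
```

After both placements the queue order is urgent, old, normal: a target-priority segment jumps ahead of everything already waiting, while ordinary segments join the back.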
Optionally, the text obtaining module 606 is further configured to:
acquiring, based on the sequential playing event, a sequential playing text to be played in a preset playing order from a preset text database.
Optionally, the scene processing module 608 is further configured to:
determining, based on the sequential playing text, sequential playing scene data matching the sequential playing text from a preset scene database; and
performing construction processing on the sequential playing scene data and the sequential playing text based on the scene protocol processing rule to determine the to-be-live-broadcast segment setting corresponding to the sequential playing text.
Optionally, the content generating module 612 is further configured to:
performing axis alignment processing on the to-be-live-broadcast segment setting and the live broadcast text based on a preset algorithm model, and determining a matching relation between the to-be-live-broadcast segment setting and the live broadcast text; and
generating the target live broadcast content of the virtual character based on the matching relation.
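One possible reading of this matching step is sketched below: the scene actions of a segment setting are distributed over the words of the live broadcast text so that each action has a matching text span. The even-split heuristic merely stands in for the unspecified "preset algorithm model", and all names are illustrative:

```python
# Assumed: a segment setting carries an ordered list of scene actions, and the
# live text is tokenized into words. Each action gets a contiguous word span.

def align(actions, words):
    # split the word list into len(actions) roughly equal contiguous slices
    matching = []
    n, k = len(words), len(actions)
    start = 0
    for i, action in enumerate(actions):
        end = round((i + 1) * n / k)
        matching.append((action, words[start:end]))
        start = end
    return matching

def render(matching, character="avatar"):
    # generate target live content lines from the matching relation
    return [f"{character}:{action}:{' '.join(span)}" for action, span in matching]

matching = align(["wave", "smile"], ["hello", "and", "welcome", "everyone"])
content = render(matching)
```

The point of producing an explicit matching relation first is that rendering becomes a pure function of it: the same relation could drive speech synthesis, animation, and camera cues consistently.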
The live content generation apparatus provided in this specification embodiment, upon determining that a target event has occurred in the live broadcast room, performs scene construction processing on the live broadcast text acquired for that event to generate a corresponding to-be-live-broadcast segment setting, and places that setting in a different live broadcast waiting queue according to the event type of the target event. As a result, when the target event is an inter-cut event, the apparatus can respond to it in time, generate the corresponding to-be-live-broadcast segment setting, and cut it into the live broadcast room. That is, the virtual character can give timely feedback on target events in the live broadcast room, which enhances the interactivity between the virtual character and the audience and improves the viewing experience of the user.
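The flow summarized above can be sketched end to end as follows. All names, database shapes, and the rendering format are illustrative assumptions rather than the patented implementation:

```python
from collections import deque

# End-to-end sketch: event -> live text -> segment setting -> queue -> content.
TEXT_DB = {"gift": "Thanks for the gift!"}            # preset text database
SCENE_DB = {"Thanks for the gift!": "thanks-scene"}   # preset scene database

def get_live_text(event):
    return TEXT_DB[event["name"]]

def build_segment_setting(live_text):
    return {"text": live_text, "scene": SCENE_DB.get(live_text, "default-scene")}

def place_segment(queues, setting, event_type):
    target = "inter_cut" if event_type == "inter_cut" else "sequential"
    queues[target].append(setting)

def generate_content(queues, character="virtual-anchor"):
    # serve inter-cut segments before the sequential programme
    for name in ("inter_cut", "sequential"):
        if queues[name]:
            s = queues[name].popleft()
            return f"{character}|{s['scene']}|{s['text']}"
    return None

queues = {"inter_cut": deque(), "sequential": deque()}
event = {"name": "gift", "type": "inter_cut"}
place_segment(queues, build_segment_setting(get_live_text(event)), event["type"])
content = generate_content(queues)
```

Because the inter-cut queue is drained first, a gift event produces content ahead of any scripted item, which is the timeliness property the summary claims.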
The above is a schematic description of the live content generating apparatus of this embodiment. It should be noted that the technical solution of the live content generating apparatus and the technical solution of the live content generation method belong to the same concept; for details not described in the technical solution of the apparatus, reference may be made to the description of the technical solution of the method.
FIG. 7 illustrates a block diagram of a computing device 700 provided in accordance with one embodiment of the present description. The components of the computing device 700 include, but are not limited to, memory 710 and a processor 720. Processor 720 is coupled to memory 710 via bus 730, and database 750 is used to store data.
Computing device 700 also includes an access device 740 that enables computing device 700 to communicate via one or more networks 760. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the Internet. The access device 740 may include one or more of any type of wired or wireless network interface, such as a Network Interface Card (NIC), an IEEE 802.11 Wireless Local Area Network (WLAN) interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, or a Near Field Communication (NFC) interface.
In one embodiment of the present description, the above-described components of computing device 700, as well as other components not shown in FIG. 7, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 7 is for purposes of example only and is not limiting as to the scope of the present description. Those skilled in the art may add or replace other components as desired.
Computing device 700 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smartphone), wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 700 may also be a mobile or stationary server.
The processor 720 is configured to execute computer-executable instructions which, when executed by the processor, implement the steps of the live content generation method described above.
The above is an illustrative scheme of a computing device of the present embodiment. It should be noted that the technical solution of the computing device and the technical solution of the live content generation method belong to the same concept, and details that are not described in detail in the technical solution of the computing device can all be referred to in the description of the technical solution of the live content generation method.
An embodiment of the present specification further provides a computer-readable storage medium storing computer-executable instructions, which when executed by a processor, implement the steps of the live content generation method.
The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium belongs to the same concept as that of the above live content generation method, and for details that are not described in detail in the technical solution of the storage medium, reference may be made to the description of the technical solution of the above live content generation method.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, computer-readable media exclude electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
It should be noted that, for simplicity of description, the foregoing method embodiments are described as a series of actions; however, those skilled in the art will appreciate that the embodiments are not limited by the order of actions described, because some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art will also appreciate that the embodiments described in this specification are preferred embodiments, and that the actions and modules involved are not necessarily required by every embodiment of the specification.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present specification disclosed above are intended only to aid in the description of the specification. Alternative embodiments are not exhaustive and do not limit the invention to the precise embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the embodiments and the practical application, to thereby enable others skilled in the art to best understand and utilize the embodiments. The specification is limited only by the claims and their full scope and equivalents.

Claims (12)

1. A live content generation method, applied to a virtual live broadcast control system and comprising:
performing live broadcasting in a live broadcast room by using a virtual character;
acquiring a target event occurring in the live broadcast room;
acquiring a corresponding live broadcast text based on the target event;
performing scene construction processing on the live broadcast text based on a scene protocol processing rule, and determining a to-be-live-broadcast segment setting corresponding to the live broadcast text;
placing the to-be-live-broadcast segment setting at a target playing position in a live broadcast waiting queue based on an event type of the target event;
in response to live broadcasting reaching the target playing position, generating target live broadcast content of the virtual character according to the to-be-live-broadcast segment setting; and
sending the target live broadcast content to a client.
2. The live content generation method according to claim 1, wherein placing the to-be-live-broadcast segment setting at the target playing position in the live broadcast waiting queue based on the event type of the target event comprises:
when the event type of the target event is determined to be a first event type, determining that the live broadcast waiting queue in a queue buffer corresponding to the to-be-live-broadcast segment setting is an inter-cut waiting queue; and
placing the to-be-live-broadcast segment setting at a target playing position of the inter-cut waiting queue.
3. The live content generation method according to claim 1, wherein placing the to-be-live-broadcast segment setting at the target playing position in the live broadcast waiting queue based on the event type of the target event comprises:
when the event type of the target event is determined to be a second event type, determining that the live broadcast waiting queue in a queue buffer corresponding to the to-be-live-broadcast segment setting is a sequential live broadcast waiting queue; and
placing the to-be-live-broadcast segment setting at a target playing position of the sequential live broadcast waiting queue.
4. The live content generation method according to claim 2, wherein the target event is an inter-cut event, and
correspondingly, acquiring the corresponding live broadcast text based on the target event comprises:
acquiring, based on the inter-cut event, the inter-cut text corresponding to the inter-cut event from a preset text database.
5. The live content generation method according to claim 4, wherein performing scene construction processing on the live broadcast text based on the scene protocol processing rule and determining the to-be-live-broadcast segment setting corresponding to the live broadcast text comprises:
determining, based on the inter-cut text, inter-cut scene data matching the inter-cut text from a preset scene database; and
performing construction processing on the inter-cut scene data and the inter-cut text based on the scene protocol processing rule to determine the to-be-live-broadcast segment setting corresponding to the inter-cut text.
6. The live content generation method according to claim 5, wherein placing the to-be-live-broadcast segment setting at the target playing position of the inter-cut waiting queue comprises:
determining whether the priority of the to-be-live-broadcast segment setting is a target priority;
if so, placing the to-be-live-broadcast segment setting at a first target playing position of the inter-cut waiting queue; and
if not, placing the to-be-live-broadcast segment setting at a second target playing position of the inter-cut waiting queue.
7. The live content generation method according to claim 3, wherein the target event is a sequential playing event, and
correspondingly, acquiring the corresponding live broadcast text based on the target event comprises:
acquiring, based on the sequential playing event, a sequential playing text to be played in a preset playing order from a preset text database.
8. The live content generation method according to claim 7, wherein performing scene construction processing on the live broadcast text based on the scene protocol processing rule and determining the to-be-live-broadcast segment setting corresponding to the live broadcast text comprises:
determining, based on the sequential playing text, sequential playing scene data matching the sequential playing text from a preset scene database; and
performing construction processing on the sequential playing scene data and the sequential playing text based on the scene protocol processing rule to determine the to-be-live-broadcast segment setting corresponding to the sequential playing text.
9. The live content generation method according to claim 1, wherein generating the target live broadcast content of the virtual character according to the to-be-live-broadcast segment setting comprises:
performing axis alignment processing on the to-be-live-broadcast segment setting and the live broadcast text based on a preset algorithm model, and determining a matching relation between the to-be-live-broadcast segment setting and the live broadcast text; and
generating the target live broadcast content of the virtual character based on the matching relation.
10. A live content generation device, applied to a virtual live broadcast control system and comprising:
a live broadcasting module configured to perform live broadcasting in a live broadcast room by using a virtual character;
an event acquisition module configured to acquire a target event occurring in the live broadcast room;
a text acquisition module configured to acquire a corresponding live broadcast text based on the target event;
a scene processing module configured to perform scene construction processing on the live broadcast text based on a scene protocol processing rule, and to determine a to-be-live-broadcast segment setting corresponding to the live broadcast text;
a segment placement module configured to place the to-be-live-broadcast segment setting at a target playing position in a live broadcast waiting queue based on an event type of the target event;
a content generation module configured to generate, in response to live broadcasting reaching the target playing position, target live broadcast content of the virtual character according to the to-be-live-broadcast segment setting; and
a content sending module configured to send the target live broadcast content to a client.
11. A computing device, comprising:
a memory and a processor;
the memory is configured to store computer-executable instructions and the processor is configured to execute the computer-executable instructions, which when executed by the processor, implement the steps of the live content generation method of any one of claims 1 to 9.
12. A computer-readable storage medium storing computer-executable instructions that, when executed by a processor, perform the steps of the live content generation method of any one of claims 1 to 9.
CN202111386008.2A 2021-11-22 2021-11-22 Live content generation method and device Pending CN113825031A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111386008.2A CN113825031A (en) 2021-11-22 2021-11-22 Live content generation method and device


Publications (1)

Publication Number Publication Date
CN113825031A true CN113825031A (en) 2021-12-21

Family

ID=78918088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111386008.2A Pending CN113825031A (en) 2021-11-22 2021-11-22 Live content generation method and device

Country Status (1)

Country Link
CN (1) CN113825031A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111277912A (en) * 2020-02-17 2020-06-12 百度在线网络技术(北京)有限公司 Image processing method and device and electronic equipment
CN112333179A (en) * 2020-10-30 2021-02-05 腾讯科技(深圳)有限公司 Live broadcast method, device and equipment of virtual video and readable storage medium
CN112637625A (en) * 2020-12-17 2021-04-09 江苏遨信科技有限公司 Virtual real person anchor program and question-answer interaction method and system
CN113157366A (en) * 2021-04-01 2021-07-23 北京达佳互联信息技术有限公司 Animation playing method and device, electronic equipment and storage medium
CN113599826A (en) * 2021-08-16 2021-11-05 北京字跳网络技术有限公司 Virtual character display method and device, computer equipment and storage medium


Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114363652A (en) * 2022-01-04 2022-04-15 阿里巴巴(中国)有限公司 Video live broadcast method, system and computer storage medium
CN114125490B (en) * 2022-01-19 2023-09-26 阿里巴巴(中国)有限公司 Live broadcast playing method and device
CN114125490A (en) * 2022-01-19 2022-03-01 阿里巴巴(中国)有限公司 Live broadcast method and device
WO2023138640A1 (en) * 2022-01-20 2023-07-27 阿里巴巴(中国)有限公司 Virtual livestream control method and device
CN114125491A (en) * 2022-01-20 2022-03-01 阿里巴巴(中国)有限公司 Virtual live broadcast control method and device
CN114125492B (en) * 2022-01-24 2022-07-15 阿里巴巴(中国)有限公司 Live content generation method and device
CN114125492A (en) * 2022-01-24 2022-03-01 阿里巴巴(中国)有限公司 Live content generation method and device
WO2023143133A1 (en) * 2022-01-25 2023-08-03 阿里巴巴(中国)有限公司 Virtual live broadcast control method and apparatus
CN114157897A (en) * 2022-01-25 2022-03-08 阿里巴巴(中国)有限公司 Virtual live broadcast control method and device
WO2023143134A1 (en) * 2022-01-27 2023-08-03 阿里巴巴(中国)有限公司 Live broadcast processing method and apparatus
CN114125569A (en) * 2022-01-27 2022-03-01 阿里巴巴(中国)有限公司 Live broadcast processing method and device
WO2023159897A1 (en) * 2022-02-23 2023-08-31 华为云计算技术有限公司 Video generation method and apparatus
CN114979682A (en) * 2022-04-19 2022-08-30 阿里巴巴(中国)有限公司 Multi-anchor virtual live broadcasting method and device
CN114979682B (en) * 2022-04-19 2023-10-13 Multi-anchor virtual live broadcasting method and device
CN115567732B (en) * 2022-11-14 2023-03-14 北京鲜衣怒马文化传媒有限公司 Virtual live broadcast interaction method and device
CN115567732A (en) * 2022-11-14 2023-01-03 北京鲜衣怒马文化传媒有限公司 Virtual live broadcast interaction method and device
CN115767194A (en) * 2022-11-15 2023-03-07 魔珐(上海)信息科技有限公司 Live broadcasting method and device of virtual digital object and terminal
CN116996703A (en) * 2023-08-23 2023-11-03 中科智宏(北京)科技有限公司 Digital live broadcast interaction method, system, equipment and storage medium
CN117336520A (en) * 2023-12-01 2024-01-02 江西拓世智能科技股份有限公司 Live broadcast information processing method and processing device based on intelligent digital person
CN117336520B (en) * 2023-12-01 2024-04-26 江西拓世智能科技股份有限公司 Live broadcast information processing method and processing device based on intelligent digital person
CN117395449A (en) * 2023-12-08 2024-01-12 江西拓世智能科技股份有限公司 Tolerance dissimilarisation processing method and processing device for AI digital live broadcast
CN117395449B (en) * 2023-12-08 2024-04-26 江西拓世智能科技股份有限公司 Tolerance dissimilarisation processing method and processing device for AI digital live broadcast

Similar Documents

Publication Publication Date Title
CN113825031A (en) Live content generation method and device
CN110570698B (en) Online teaching control method and device, storage medium and terminal
US10210002B2 (en) Method and apparatus of processing expression information in instant communication
US20200312327A1 (en) Method and system for processing comment information
US11882319B2 (en) Virtual live video streaming method and apparatus, device, and readable storage medium
US9686329B2 (en) Method and apparatus for displaying webcast rooms
US11386931B2 (en) Methods and systems for altering video clip objects
CN112087655B (en) Method and device for presenting virtual gift and electronic equipment
WO2014183427A1 (en) Method and apparatus for displaying webcast rooms
CN112087669B (en) Method and device for presenting virtual gift and electronic equipment
CN111294606B (en) Live broadcast processing method and device, live broadcast client and medium
US20230285854A1 (en) Live video-based interaction method and apparatus, device and storage medium
CN114025186A (en) Virtual voice interaction method and device in live broadcast room and computer equipment
CN113727130A (en) Message prompting method, system and device for live broadcast room and computer equipment
US11601690B2 (en) Method and apparatus for live streaming, server, system and storage medium
CN112637625A (en) Virtual real person anchor program and question-answer interaction method and system
CN114302153B (en) Video playing method and device
CN114422468A (en) Message processing method, device, terminal and storage medium
US20230214084A1 (en) Method for displaying interface, device and storage medium
CN111773661A (en) System, method and device for team formation game based on live broadcast interface
CN114449301B (en) Item sending method, item sending device, electronic equipment and computer-readable storage medium
CN117793433A (en) Video interaction method, device, electronic equipment and computer readable storage medium
CN109408757A (en) Question and answer content share method, device, terminal device and computer storage medium
CN113301362B (en) Video element display method and device
CN117319340A (en) Voice message playing method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220310

Address after: 310023 Room 516, floor 5, building 3, No. 969, Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province

Applicant after: Alibaba Dharma Academy

Applicant after: (Hangzhou) Technology Co.,Ltd.

Applicant after: Aliyun Computing Co.,Ltd.

Address before: 310023 Room 516, floor 5, building 3, No. 969, Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province

Applicant before: Alibaba Dharma Institute (Hangzhou) Technology Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20211221
