CN117641072A - Method, device, equipment and storage medium for generating content


Info

Publication number: CN117641072A
Authority: CN (China)
Prior art keywords: video, content, script, video content, information
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Application number: CN202311651164.6A
Other languages: Chinese (zh)
Inventor: 韩天磊
Current Assignee: Beijing Zitiao Network Technology Co Ltd
Original Assignee: Beijing Zitiao Network Technology Co Ltd
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority application: CN202311651164.6A
Publication: CN117641072A


Abstract

Embodiments of the present disclosure relate to a method, apparatus, device, and storage medium for content generation. The method proposed herein comprises: determining a video script to be applied, the video script being generated based on an analysis of a set of published video content and indicating at least an event type matching a first time period of the video to be generated; determining an interaction event matching the event type from interaction content associated with a user; and generating video content corresponding to the video script based on the interaction event. In this manner, embodiments of the present disclosure can collect interaction events from the user's interaction content and generate video content using the video script, so that the user's interaction events are recorded conveniently and accurately and content generation efficiency is improved.

Description

Method, device, equipment and storage medium for generating content
Technical Field
Example embodiments of the present disclosure relate generally to the field of computers and, more particularly, to a method, apparatus, device, and computer-readable storage medium for content generation.
Background
With the development of computer and internet technologies, people can interact online using the internet and streaming media technology. For example, users may use the internet to interact with other users in a virtual environment, meeting their needs for socializing, entertainment, content sharing, and so forth.
Generally, in order to achieve a wider range of interaction and content sharing, users can record the interaction process. For example, a user may take screenshots or screen recordings of the interaction process to share the interaction content in the form of images, videos, and the like. Thus, how to better complete content collection and how to improve its quality are currently matters of great concern and urgent need.
Disclosure of Invention
In a first aspect of the present disclosure, a method of content generation is provided. The method comprises: determining a video script to be applied, the video script being generated based on an analysis of a set of published video content and indicating at least an event type matching a first time period of the video to be generated; determining an interaction event matching the event type from interaction content associated with a user; and generating video content corresponding to the video script based on the interaction event.
In a second aspect of the present disclosure, an apparatus for content generation is provided. The apparatus comprises: a script determination module configured to determine a video script to be applied, the video script being generated based on an analysis of a set of published video content and indicating at least an event type matching a first time period of the video to be generated; an event determination module configured to determine an interaction event matching the event type from interaction content associated with a user; and a content generation module configured to generate video content corresponding to the video script based on the interaction event.
In a third aspect of the present disclosure, an electronic device is provided. The device comprises at least one processing unit and at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit. The instructions, when executed by the at least one processing unit, cause the device to perform the method of the first aspect.
In a fourth aspect of the present disclosure, a computer-readable storage medium is provided. The computer readable storage medium has stored thereon a computer program executable by a processor to implement the method of the first aspect.
It should be understood that what is described in this section is not intended to identify key or essential features of the embodiments of the disclosure, nor is it intended to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings, in which like or similar reference numerals denote like or similar elements:
FIG. 1 illustrates a schematic diagram of an example environment in which embodiments in accordance with the present disclosure may be implemented;
FIG. 2 illustrates a flow chart of a process of example script generation, according to some embodiments of the present disclosure;
FIG. 3 illustrates an example of a video script in accordance with some embodiments of the present disclosure;
FIGS. 4A and 4B illustrate flowcharts of video content generation processes, respectively, according to some embodiments of the present disclosure;
FIG. 5 illustrates a flowchart of an example process of content generation, according to some embodiments of the present disclosure;
FIG. 6 shows a schematic block diagram of an apparatus for content generation according to some embodiments of the present disclosure; and
FIG. 7 illustrates a block diagram of an electronic device capable of implementing various embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure have been illustrated in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather, these embodiments are provided so that this disclosure will be more thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that any section/subsection headings provided herein are not limiting. Various embodiments are described throughout this document, and any type of embodiment may be included under any section/subsection. Furthermore, the embodiments described in any section/subsection may be combined in any manner with any other embodiment described in the same section/subsection and/or in a different section/subsection.
In describing embodiments of the present disclosure, the term "comprising" and its variants should be understood as open-ended, i.e., "including but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The term "some embodiments" should be understood as "at least some embodiments". The terms "first," "second," and the like may refer to different or the same objects. Other explicit and implicit definitions are also possible below.
Embodiments of the present disclosure may involve user data and the acquisition and/or use of data. All such aspects comply with applicable laws and regulations. In embodiments of the present disclosure, all data collection, acquisition, processing, forwarding, use, and the like is performed with the knowledge and confirmation of the user. Accordingly, in implementing the embodiments of the present disclosure, the user should be informed, in an appropriate manner and in accordance with relevant laws and regulations, of the type of data or information that may be involved, the range of use, the usage scenarios, and so on, and the user's authorization should be obtained. The particular manner of notification and/or authorization may vary depending on the actual situation and application scenario, and the scope of the present disclosure is not limited in this respect.
In the present description and embodiments, where the processing of personal information is involved, it is performed on the premise of a valid legal basis (for example, the consent of the personal information subject, or necessity for the performance of a contract) and only within the prescribed or agreed scope. If the user refuses the processing of personal information other than that necessary for basic functions, the user's use of those basic functions is not affected.
As mentioned briefly above, to achieve a wider range of interaction and content sharing, users may record the interaction process. Further, to enhance the value of the content, recorded content is typically generated based on specific interaction events in the user's interaction content.
In some schemes, the complete interactive content is provided to the user by means of recording and playback, so that the user can capture the video content of a specific interaction event through screenshots and screen recording, and then edit that content with a video editor to form the complete video content. Users may then use the video content to share their interaction process (or, more specifically, their specific interaction events).
However, in these schemes, the editing party needs to work from the complete video of the interactive content, screening and editing multiple times to produce the final video content, so the generation efficiency of the video content is low.
Embodiments of the present disclosure provide a scheme for content generation. According to this scheme, a video script to be applied may be determined, the video script being generated based on an analysis of a set of published video content and indicating at least an event type matching a first time period of the video to be generated; an interaction event matching the event type is determined from interaction content associated with a user; and video content corresponding to the video script is generated based on the interaction event.
In this manner, embodiments of the present disclosure can collect interaction events from the user's interaction content and generate video content using the video script, so that the user's interaction events are recorded conveniently and accurately and content generation efficiency is improved.
Various example implementations of the scheme are described in further detail below in conjunction with the accompanying drawings.
Example Environment
FIG. 1 illustrates a schematic diagram of an example environment 100 in which embodiments of the present disclosure may be implemented. As shown in FIG. 1, the example environment 100 may include an electronic device 110.
In this example environment 100, an application 120 providing video content generation functionality may run in the electronic device 110. The application 120 may be used to generate video content 140 based on acquired video material. For example, an editor associated with the application 120 may use it to generate the video content 140, according to the instructions of a user 130, based on video material provided by the user 130 (e.g., the interactive content of the user 130 in a virtual scene). In some embodiments, the application 120 may also be an application providing interaction between users; for example, the application 120 may provide a virtual scene for the user 130 so that the user 130 can interact with other users in the virtual scene. Accordingly, the application 120 may obtain the video material from the interactive content of the user 130 and other users and complete the generation of the video content. Examples of the application 120 may thus include, but are not limited to, an online gaming application.
Further, the application 120 may generate the video content 140 from the acquired video material according to the instructions of the user 130. For example, the interactive content associated with the user 130 is uploaded and consolidated by an editor (e.g., a service provider of the application 120) according to the instructions of the user 130. Further, the electronic device 110 may determine a plurality of video segments from the interactive content of the user 130 with other users (e.g., according to a plurality of time intervals indicated by the editor) and obtain the video content 140 based on the video segments (e.g., by stitching). In some embodiments, the editor may be a maintainer of the application 120 (e.g., when the application 120 is provided by an interactive platform, the editor may be a video editing service provider designated by the interactive platform). The editor may edit the video material uploaded by the user 130 to generate the corresponding video content 140 for use by the user 130. For example, the user 130 may share the video content 140 with other users for social purposes.
In general, the user 130 may interact with the application 120 via the electronic device 110 and/or its attached devices, for example, interacting in a virtual scene provided by the application 120 or instructing the application 120 to generate the video content 140.
In general, the electronic device 110 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content distribution networks, and basic cloud computing services such as big data and artificial intelligence platforms, e.g., a computing system/server such as a mainframe, an edge computing node, or a computing device in a cloud environment.
In some embodiments, the electronic device 110 may also be any type of mobile terminal, fixed terminal, or portable terminal with the required computing capabilities, including a mobile handset, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, media computer, multimedia tablet, palmtop computer, portable gaming terminal, VR/AR device, Personal Communication System (PCS) device, personal navigation device, Personal Digital Assistant (PDA), audio/video player, digital camera/video camera, positioning device, television receiver, radio broadcast receiver, electronic book device, or game device, or any combination of the preceding, including the accessories and peripherals of these devices or any combination thereof. In some embodiments, the electronic device 110 is also capable of supporting any type of user interface (such as "wearable" circuitry, etc.).
In addition, if the electronic device 110 is embodied as a terminal device used by an editor, the electronic device 110 may also communicate with other electronic devices such as servers, for example, to obtain online support for the content of the application 120 provided by a server, to provide the generated video content 140 to a server for sharing, and so on.
In this case, the electronic device 110 embodied as a terminal device may establish a communication connection with the servers. The communication connection may be established by wired or wireless means and may include, but is not limited to, a Bluetooth connection, a mobile network connection, Universal Serial Bus (USB), Wireless Fidelity (WiFi), etc.; embodiments of the disclosure are not limited in this respect.
It should be understood that the structure and function of the various elements in environment 100 are described for illustrative purposes only and are not meant to suggest any limitation as to the scope of the disclosure.
Some example embodiments of the present disclosure will be described below with continued reference to the accompanying drawings.
As explained above, embodiments of the present disclosure may utilize video scripts to generate video content: specific interaction events in the interaction content are marked and collected based on the video script, and the video is composed and generated from the determined interaction events. For ease of understanding, the method of generating the video script is described first, followed by the content generation process implemented with the video script.
Example script generation
A flowchart of an example script generation process 200 according to some embodiments of the present disclosure will be described below in connection with FIG. 2. For ease of understanding, the description is presented in conjunction with the environment 100 shown in FIG. 1. For example, the process 200 may be performed by the electronic device 110, which is illustrated as a server.
At block 210, the electronic device 110 obtains published video content via an analysis module.
In an embodiment of the present disclosure, the analysis module may be configured to obtain or collect published video content. For example, other users may generate and record video content of their interaction with the user 130, in which portions of that interaction are recorded. In some embodiments, the analysis module may, for example, periodically capture video content published in the video platform by the user 130, other users, and editors. In some embodiments, the analysis module may be configured in the electronic device 110 or independently of it. Where the analysis module is configured independently of the electronic device 110, the electronic device 110 may acquire the published video content held by the analysis module by communicating with it. In some embodiments, the published video content may be, for example, high-popularity content on a video website, a game scenario, game play, and the like.
At block 220, the electronic device 110 determines a set of target events from the published video content.
In embodiments of the present disclosure, the electronic device 110 may determine a set of target events from the published video content described above. Typically, such target events are representative events, for example, the user 130 completing a specific interaction event in the interactive content. An interaction event may be a specific event in the scene; in a game scene, for example, it may trigger some preset reward content. Examples include the user 130 defeating, or assisting in defeating, other users, or defeating multiple other users in succession. In general, the target event may be part of, or otherwise associated with, the core play of the interactive content. For example, in a confrontation-type game, such a target event may be the user defeating or helping to defeat other users. In addition, such target events typically produce some positive stimulus for the user 130's interaction: for example, the user 130 has collected a preset special (or rarer) prop, has won in a game or contest with other users, has released a preset specific skill, and so forth. In some embodiments, these target events directly associated with actions taken by the user 130 may also be referred to as "highlight events" of the user (e.g., actions by which the user 130 wins in a confrontation). In some embodiments, when a target event is triggered, the electronic device 110 may present content such as identification information or special sound effects to prompt the user 130 while exciting the user.
In some embodiments, other content added to the video by, for example, the user 130 or editors may also be included in the video script, such as inserted audio effects (e.g., background audio effects beyond the "original sound" of different virtual scenes), video effects (e.g., additionally added video special effects, image styles, etc.), transition effects (e.g., switching effects between picture contents), and so forth. For example, the other content may indicate that "music special effect A" is inserted at a given play time of the video. The electronic device 110 may parse the published video content to determine a set of target events included in it (e.g., all target events included in the published video content). In some embodiments, the electronic device 110 may determine the set of target events based on at least one of the picture information, audio information, and text information of the published content. Specifically, the electronic device 110 may identify whether a target event is included in the published video content by, for example, parsing the content of the picture information in the video (e.g., whether the reward icon or hint information of a target event is present). The electronic device 110 may also determine whether a target event exists by parsing the audio information, for example, by detecting whether a special sound effect corresponding to the target event is present (e.g., an excitation sound effect, such as a broadcast of specific content of the target event, e.g., that the user 130 has won in a confrontation with other users). The electronic device 110 may also determine whether a target event exists by parsing the text information, for example, by parsing text cues associated with the target event (e.g., text cues associated with the video content, text cues of the interactive content, transcript information associated with the game, etc.). Thus, the video content can be parsed through at least one of the picture information, audio information, and text information, and the target events included in the published video can be mined.
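By way of illustration, the multi-channel mining described above can be sketched in Python as follows. This is a minimal sketch under assumed inputs: the per-channel recognizers (reward-icon detection, special-sound-effect detection, text-cue parsing) are hypothetical placeholders for whatever detectors an implementation actually uses, and the fixed event duration is illustrative.

from dataclasses import dataclass

@dataclass
class TargetEvent:
    event_type: str  # e.g. "type_A", "type_B", "preset_skill_released"
    start_s: float   # start of the event within the published video, in seconds
    end_s: float     # end of the event, in seconds

def detect_target_events(picture_cues, audio_cues, text_cues):
    """Mine target events from the picture, audio, and text channels.

    Each `*_cues` argument is assumed to be a list of (timestamp_s, event_type)
    pairs already produced by a hypothetical per-channel recognizer, with
    event_type set to None where the channel recognized nothing.
    """
    events = []
    for channel in (picture_cues, audio_cues, text_cues):
        for timestamp_s, event_type in channel:
            if event_type is not None:  # this channel flagged a target event here
                events.append(TargetEvent(event_type, timestamp_s, timestamp_s + 2.0))
    return sorted(events, key=lambda e: e.start_s)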
In some embodiments, electronic device 110 may also determine a set of target events from the published video content based on the popularity information of the published video content, wherein the popularity of the portion of the video content corresponding to the set of target events is greater than a threshold.
Specifically, the electronic device 110 may determine the popularity information for each time period of the published content based on, for example, the interactions with the published video content (e.g., the distribution of views, likes, shares, etc.). Further, the electronic device 110 may extract the portions of video content whose popularity is greater than a threshold from the published video content to determine the corresponding set of target events.
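A minimal sketch of this popularity-based extraction, assuming the interaction records (views, likes, shares) have already been reduced to (timestamp, weight) pairs; the segment length and threshold below are illustrative values, not ones given by the disclosure.

def select_hot_segments(interactions, segment_s=5.0, threshold=100.0):
    """Bucket weighted interaction records into fixed-length segments and keep
    the (start_s, end_s) ranges whose accumulated popularity exceeds the
    threshold; target events are then mined from the returned ranges."""
    heat = {}
    for timestamp_s, weight in interactions:
        bucket = int(timestamp_s // segment_s)
        heat[bucket] = heat.get(bucket, 0.0) + weight
    return [(b * segment_s, (b + 1) * segment_s)
            for b, h in sorted(heat.items()) if h > threshold]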
At block 230, the electronic device 110 generates a video script corresponding to the published video content based on the set of target events.
In some embodiments, electronic device 110 may construct a video script corresponding to published video content based on time information and type information for a set of target events.
In embodiments of the present disclosure, the electronic device 110 may record the type information and time information of each target event. The time information indicates the time period in which the corresponding event occurs in the published video content, or, in other words, the insertion time and end time of the corresponding target event in the published video content. The type information indicates the event type of the corresponding event. In some embodiments, the type information may be partitioned based on the content of the target event, e.g., the user 130 achieving a type A target event, the user 130 achieving a type B target event, and so on.
Further, the electronic device 110 may construct a video script corresponding to the published video content based on the time information and type information of the target events included in the published video content. Taking one target event as an example, the electronic device 110 may construct the video script corresponding to the published video content with the time information and type information of that target event. For example, the video script indicates that the user achieving a type A target event is added in the X1-X2 time period and the user achieving a type B target event is added in the X3-X4 time period.
In the embodiment of the present disclosure, the video script indicates at least the event type matching a time period included in the video to be generated (described as a first time period for convenience). For example, it indicates the event type matching the first time period X1-X2 (e.g., the user achieving a type A target event), thereby associating the time period with the target event.
In some embodiments, the video script may also indicate a time period corresponding to other content in the published video content (e.g., at least one of the audio effects, video effects, and transition effects described above).
In some embodiments, the video script may also indicate an audio effect corresponding to a time period of the video to be generated (described as a second time period for convenience). That is, the video script may instruct the addition of the audio effect in the second time period, so that the insertion of audio effects other than the "original sound" can be completed with the video script.
In some embodiments, the video script may also indicate a video effect corresponding to a time period of the video to be generated (described as a third time period for convenience). That is, the video script may instruct the addition of the video effect in the third time period, so that the insertion of additional video effects can be completed with the video script.
In some embodiments, the video script may also indicate a transition effect corresponding to a time period of the video to be generated (which is described as a fourth time period for convenience of description). That is, in the video script, the addition of a transition effect in the fourth time period may be instructed to achieve the linking of different picture contents. Thus, editing of a plurality of video contents can be completed using the video script.
It should be appreciated that, depending on the scenario, the first time period, the second time period, the third time period, and the fourth time period may be the same or different. For example, the first time period indicates time periods X1-X2 and X2-X3, the second time period indicates time period X2-X3, and the third time period indicates time periods X1-X2 and X2-X3.
Thus, after understanding the content architecture of the published video content through analysis, the electronic device 110 may obtain the corresponding video script. The video script may then be utilized to generate video content that is similar in content architecture to the published video content. For example, there may be multiple "slots" in the video script (e.g., each corresponding to a target event or other content), so that the complete video content 140 can be made by inserting content into the "slots". In some embodiments, the other content may not be given independent "slots" but may instead be allowed to be added by the editing party around the target events, for example by way of hints.
In some embodiments, the electronic device 110 may also add hint information in association with the video script for better subsequent use of the video script. For example, the hint information may include the virtual interactive scene to which the video script applies, the interactive objects of that virtual interactive scene, and so on. Thus, the electronic device 110 may subsequently select a more appropriate video script based on the actual circumstances of the interaction.
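As one way to picture the structure described above, the sketch below models a video script as a list of time-period "slots" plus hint information. The field names and example values are assumptions made for exposition, not structures defined by the disclosure.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Slot:
    start_s: float                      # the slot's time period in the video to be generated
    end_s: float
    event_type: Optional[str] = None    # target event type to be filled in
    audio_effect: Optional[str] = None  # audio effect indicated for this period, if any
    video_effect: Optional[str] = None  # video effect indicated for this period, if any
    transition: Optional[str] = None    # transition effect indicated for this period, if any

@dataclass
class VideoScript:
    hints: dict = field(default_factory=dict)  # e.g. applicable scene and interactive object
    slots: list = field(default_factory=list)

# An illustrative script: a type A target event in one period and a type B
# target event in the next, with a transition linking the two periods.
script = VideoScript(
    hints={"scene": "virtual scene A", "object": "character B"},
    slots=[
        Slot(0, 10, event_type="type_A"),
        Slot(10, 30, event_type="type_B", transition="transition_A"),
    ],
)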
It should be appreciated that the electronic device 110 may pre-process the plurality of published video content to obtain a corresponding plurality of video scripts (or a set of video scripts) for use.
In some embodiments, the electronic device 110 may utilize a target model to generate the video script. Such a target model may be implemented based on an appropriate machine learning model, examples of which may include, but are not limited to, a language model, and the like.
In particular, the electronic device 110 may generate input information to the target model based on the extracted set of target events. For example, the electronic device 110 may generate guidance information (also referred to as guidance words) to the target model based on the time information of the set of target events in the video and the corresponding event description information.
In still other embodiments, the electronic device 110 may also provide descriptive information about virtual objects in the virtual scene, for example. Such virtual objects may include, for example, virtual characters that are manipulable in a virtual scene. For example, the electronic device 110 may generate guidance information to the target model based on the set of target events and the description information about the virtual object.
In some embodiments, such descriptive information may include, for example, scene descriptive information about the virtual scene. For example, the scene description information may include information about world view settings of a virtual scene, and the like.
In some embodiments, such descriptive information may also include, for example, character descriptive information about the virtual object (e.g., virtual character). For example, the character description information may include character setting information on a specific virtual character, and the like.
Further, the electronic device 110 may generate the video script corresponding to the published video content based on the output information of the target model.
For example, the electronic device 110 may use the output information of the target model directly as the video script. Alternatively, the electronic device 110 may edit or modify the output information to determine the final video script.
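Reusing the structures sketched above, constructing guidance information for the target model might look as follows. The prompt format is invented for illustration, and `language_model` is a placeholder callable standing in for whatever target model an implementation uses.

def build_guidance(target_events, scene_desc=None, character_desc=None):
    """Assemble guidance words from the extracted target events and the
    optional scene/character description information."""
    lines = ["Generate a video script from the following target events:"]
    for e in target_events:
        lines.append(f"- {e.start_s:.0f}s-{e.end_s:.0f}s: {e.event_type}")
    if scene_desc:
        lines.append(f"Scene setting: {scene_desc}")
    if character_desc:
        lines.append(f"Character setting: {character_desc}")
    return "\n".join(lines)

def generate_script(target_events, language_model, **descriptions):
    # The model output may be used directly as the video script, or edited
    # or modified to determine the final video script.
    return language_model(build_guidance(target_events, **descriptions))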
In some embodiments, the generated video script may also be reviewed by, for example, an editor, so that the editor can adjust the video script as desired, improving the quality of the video script in use.
It should be understood that there may be one or more target events at the same time point (or time period) in the video script. In some embodiments, to enhance the presentation effect, some constraints may also be configured accordingly, e.g., type A and type B target events are not presented simultaneously, to avoid negative effects on the presentation such as image overlap and audio overlap.
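Such a constraint might be checked as in the sketch below; the mutually exclusive pair is purely illustrative, since the disclosure does not fix any particular constraint set.

# Pairs of event types that should not be presented in the same time period.
MUTUALLY_EXCLUSIVE = {frozenset({"type_A", "type_B"})}

def violates_constraints(events_in_period):
    """Return True if two target events that must not appear together (risking
    image overlap or audio overlap) fall within the same time period."""
    types = {e.event_type for e in events_in_period}
    return any(pair <= types for pair in MUTUALLY_EXCLUSIVE)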
Thus, through learned understanding of published video content, the above information can be fitted into a story line (e.g., a "highlight" story line), so that video content made from the video script follows that highlight story line. And because the story line is generated from published video content, the popularity of the published videos can be used to adjust the emphasis of the video content and improve its quality, making the whole highlight story line better fit the current user environment. In addition, the method can automatically construct story lines and publish them online so as to observe the data metrics of the resulting submissions, and can periodically and continuously iterate the story lines automatically.
Example content Generation
The process by which the electronic device 110 generates the video content 140 using video scripts will be discussed in detail below, in conjunction with FIGS. 3, 4A, and 4B. FIG. 3 illustrates an example of a video script 300 according to some embodiments of the present disclosure. FIGS. 4A and 4B illustrate flowcharts of video content generation processes 400A and 400B, respectively, according to some embodiments of the present disclosure. For ease of understanding, the description is presented in conjunction with the environment 100 shown in FIG. 1. For example, both the process 400A and the process 400B may be performed by the electronic device 110, which is illustrated as a server.
In embodiments of the present disclosure, the electronic device 110 may determine the video script to be applied, for example, from the set of video scripts generated based on the analysis of a set of published video content described above. The video script indicates at least the event type matching the first time period of the video to be generated. In some embodiments, a set of preset scripts may be obtained after processing a set of published video content in a manner such as that described above. Further, the electronic device 110 may receive a selection of the video script to be applied from the set of preset scripts. For example, after the electronic device 110 determines that the video content 140 needs to be generated (e.g., the user 130 issues a generation request to the electronic device 110, or an editor instructs the electronic device 110 to generate the video content 140), the electronic device 110 may determine the video script to be applied. In some embodiments, the electronic device 110 may determine the video script to be applied based on the content the user 130 desires in the video content 140. For example, where the desired content is a "highlight moment" of character B recorded in virtual scene A, the electronic device 110 may determine the video script to be applied based on keywords such as "character highlight moment", "character highlight moment in virtual scene A", "character B highlight moment", and the like, for example, by matching the keywords against the hint information described above.
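Matching request keywords against the hint information might be sketched as below, reusing the VideoScript structure from the earlier sketch; the simple overlap score is an assumed stand-in for whatever matching logic an implementation actually uses.

def select_script(keywords, preset_scripts):
    """Pick the preset script whose hint information best matches the request
    keywords (e.g. "character B highlight moment")."""
    def score(candidate):
        hint_text = " ".join(str(v) for v in candidate.hints.values()).lower()
        return sum(1 for kw in keywords if kw.lower() in hint_text)
    return max(preset_scripts, key=score)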
In some embodiments, the electronic device 110 may also provide the video script to be applied to the user 130 in advance according to the request of the user 130 (e.g., a set of video scripts may be provided according to the request of the user 130 and the video script to be applied may be determined according to the selection of the user 130). Thus, the user 130 may interact based on the content indicated by the video script to more efficiently and accurately provide video material.
In some embodiments, the electronic device 110 may also update an existing video script, for example, based on a preset operation of the user 130. In particular, the electronic device 110 may, for example, provide the user 130 with an initial video script, e.g., a video script generated by default.
Further, the electronic device 110 may generate the guide information for updating the initial video script based on a preset operation of the user 130. For example, a user may enter a piece of text content to describe a particular need for updating the initial video script.
Further, the electronic device 110 may provide the guidance information and the initial video script to a model, which may generate the video script to be applied based on the guidance information and the initial video script. In this manner, embodiments of the present disclosure can support users in further customizing or optimizing video scripts so that the generated content better meets their expectations.
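The update flow might be sketched as follows, with the same `language_model` placeholder as above; the prompt format is again an invention for illustration.

def update_script(initial_script, user_request, language_model):
    """Turn the user's preset operation (e.g. an entered piece of text
    describing a particular need) into guidance information and let the
    model produce the video script to be applied."""
    guidance = ("Update the following video script according to this request: "
                f"{user_request}\nScript: {initial_script}")
    return language_model(guidance)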
In some embodiments, the electronic device 110 may also determine, after the user 130 completes the interaction, the corresponding video script to be applied based on the analysis result of the collected interactive content of the user 130 (for example, by analyzing the interaction events included therein; the interaction events may be determined based on, or correspond to, the target events described above).
In general, to enrich video content, the published video content and the generated video content 140 may include content related to multiple users (e.g., the video content 140 may include interactive content of the user 130 as well as interactive content of other users).
In some embodiments, the electronic device 110 may determine the video script to be applied based on the virtual object corresponding to the user 130 in the interactive content. Specifically, after determining the interactive content of the user 130 (e.g., using character B to interact with other users' characters in virtual scene A), the electronic device 110 may determine the video script to be applied based on the virtual object corresponding to the user 130 (i.e., the character B controlled by the user 130). Thus, the video script can be selected in a more targeted way. Further, the video content 140 generated by the electronic device 110 using the video script may be associated with the virtual character (e.g., character B). As described above, the electronic device 110 can generate the video content 140 for character B by determining a video script that was generated from interactive content featuring character B. For example, in one particular scenario, the video script may have been generated from video content published by other users for interactive content controlling "character B". Thus, the video content 140 can be made to focus on the primary interactive content of the user 130.
Referring to FIG. 3 by way of example: in FIG. 3, a video script may include recommendation information 310 recommending a virtual object (e.g., character B) to which the video script 300 is applicable. In some embodiments, the video script 300 may also include recommendation information 320 and recommendation information 330, recommendation information 320 recommending specific music for the audio, and recommendation information 330 recommending video parameters (e.g., a picture ratio of 16:9). In some embodiments, the "slots" in the video script 300 may be marked in a visual style based on the specific content of the "event type" (e.g., the recommended effects, clips, and stickers in the video script 300) to facilitate understanding. For example, the presentation style of slot 340 may be "skill A".
The video script 300 indicates at least the event type matching the first time period of the video to be generated. In some embodiments, the other content described above (e.g., audio effects, video effects, transition effects, etc.) may also be indicated in the video script 300. For example, the 3s-10s period includes the event type "preset specific skill released" (e.g., "skill A") and the other content "video special effect" (e.g., "video special effect A"). Similarly, the 10s-30s period includes "interaction event A", "interaction event B", and the other content "video effect" (e.g., "video effect B") and "audio effect".
Further, the electronic device 110 determines an interaction event matching the event type from the interaction content associated with the user 130. The event type of the interaction event may correspond to the event type of a target event described above, so that interaction events correspond to target events. In embodiments of the present disclosure, the electronic device 110 may determine the interactive content associated with the user 130. In some embodiments, the interactive content may include interactive content of the user 130 in a virtual scene, such as game content. For example, where the user 130 uses character B in virtual scene A, the interaction goal may be for the points acquired by the user-operated character in virtual scene A to reach a target value. For example, the user 130 can compete against character C operated by another user by operating character B. After a game starts, if the points acquired by any character reach the target value, the game is considered complete; accordingly, the game process between users may be referred to as game content. The electronic device 110 may then determine an interaction event matching the event type based on the interaction content associated with the user (e.g., the user 130). In some embodiments, the interaction event may be one of the target events used in constructing the video script described above; for example, the interaction event may be "a preset specific skill is released". As another example, the interaction event may be defeating other users, or defeating other users in succession (e.g., twice in succession, three times in succession, or defeating two or three different users in succession), and so forth.
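Collecting matching interaction events for each slot might be sketched as follows, again reusing the structures above; `interaction_log` is assumed to be a list of TargetEvent-like records captured during the user's game.

def match_interaction_events(interaction_log, script):
    """For each slot of the script, collect the user's interaction events
    whose event type matches the slot's event type."""
    matches = {}
    for slot in script.slots:
        if slot.event_type is None:
            continue  # effect-only slots carry no event to match
        matches[slot.event_type] = [
            e for e in interaction_log if e.event_type == slot.event_type
        ]
    return matches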
Accordingly, in some embodiments, in order to determine "interaction events" more efficiently, the electronic device 110 may also tag the target events and event types associated with "user interaction behavior" when generating the video script. In some embodiments, the "interaction event" may be, for example, a "highlight event" or "highlight moment" as described above, so that the resulting video content 140 includes, for example, highlight content associated with the game content. In other words, the final video content may be a "highlight video" of the user 130. The user 130 can then share the game using the "highlight video", improving the social experience of the user 130.
Further, the electronic device 110 generates video content corresponding to the video script based on the collected interaction events. In embodiments of the present disclosure, based on the event types of the collected interaction events, which are determined from the interaction content associated with the user (e.g., the user 130) as matching those event types, the electronic device 110 may utilize the video script (e.g., insert content into the "slots" in the video script) to form the complete video content 140.
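Slot filling might then be sketched as below. `clip_source` is a hypothetical helper that extracts the recorded footage of an event from the interactive content, and taking the first candidate is a simplification: as noted below, several candidates may instead be offered to the editor for selection.

def assemble_video(script, matches, clip_source):
    """Fill each slot with a matching interaction event and cut the
    corresponding clip, yielding a timeline from which the complete
    video content is rendered."""
    timeline = []
    for slot in script.slots:
        candidates = matches.get(slot.event_type, [])
        if not candidates:
            continue  # unfilled slot; an editor may fill or drop it later
        timeline.append({
            "clip": clip_source(candidates[0]),
            "start_s": slot.start_s,
            "end_s": slot.end_s,
            "audio_effect": slot.audio_effect,
            "video_effect": slot.video_effect,
            "transition": slot.transition,
        })
    return timeline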
In some embodiments, the video content 140 generated by the electronic device 110 may also be presented to the editor. For example, after generating the video content 140, the electronic device 110 may provide the video content 140 to an editor so that the editor can edit it. Further, based on the editor's confirmation operation or editing operation on the video content 140, the electronic device 110 may provide the confirmed video content 140 or the edited video content 140 to the user 130. Thus, the video content can be processed more finely by the editor to improve its quality.
In some embodiments, multiple alternative interaction events may correspond to the same slot in the video script (e.g., character B releases skill A multiple times). In this case, the electronic device 110 may determine the interaction event to finally use by, for example, providing the alternatives to the editor for selection.
In some embodiments, the electronic device 110 may utilize different video scripts to generate video content. For example, first video content is generated using a first video script and second video content is generated using a second video script. In this case, the electronic device 110 may also present the second video content corresponding to the second video script to the editor and, according to the editor's selection of the first video content or the second video content, determine whether to ultimately use the first video content or the second video content as the target video content (e.g., the video content 140). After determining the target video content, the electronic device 110 may provide it to the user 130. Thereby, video content can be generated with a plurality of video scripts simultaneously for selection, satisfying the user 130 as far as possible by widening the range of choices.
Reference may also be made to FIGS. 4A and 4B by way of example. First, in process 400A, the electronic device 110 may, during user interaction, generate video content 140 representing highlight events of the user 130 during a game. At block 410, after the user 130 begins to interact, the electronic device 110 may collect "highlight events" of the user 130 in the game. For example, the electronic device 110 may collect a "highlight event" after the user 130 generates an interaction event of consecutive defeats. At block 420, after the user 130 completes the game, highlight editing may be entered; for example, the acquired "highlight events" are video-edited by the editing party. At block 430, the electronic device 110 may select a video script based on the "highlight events" of the user 130 (e.g., based on the character featured in the "highlight events"). Further, at block 450, the electronic device 110 may generate video content (e.g., the video content 140) using the video script. In some embodiments, block 440 may also be included before block 450: the electronic device 110 may provide the video script to be applied, determined at block 430, to an editor so that the editor can adjust it (e.g., adjust the insertion position, duration, etc. of the "highlight events") according to actual needs.
In process 400B, the electronic device 110 may first execute block 460 to obtain the video script to be applied; for example, the video script to be applied is selected based on an editor's indication, historical video content popularity information, and so on. Further, at block 470, the electronic device 110 may instruct the user 130 to interact according to the content included in the video script (e.g., inform the user 130 of the "highlight events" that can be made). Further, at block 480, after the user 130 completes the game, highlight editing may also be entered. Further, at block 490, the electronic device 110 may generate video content (e.g., the video content 140) using the video script.
It should be appreciated that in process 400B, the editing party may also be allowed to adjust the video script (e.g., after block 480 is performed and before block 490 is performed) to obtain better-quality video content 140.
Thus, the degree of linkage with the application 120 may be increased, so that the embedding and editing capabilities can be coordinated during the interaction of the user 130. Taking the application 120 providing virtual scene interaction as an example, the application 120 can link the provided content with video production, enriching the interaction experience of the user 130. In addition, the output efficiency of the editor can be improved: once a plurality of story lines are provided, the editor's emphasis shifts from "creative ideation" to "creative review", increasing story line output. This may also allow, for example, the work focus on the interactive platform side to shift toward quality monitoring and algorithm tuning.
In this manner, embodiments of the present disclosure can collect interaction events from the user's interaction content and generate video content using the video script, so that the user's interaction events are recorded conveniently and accurately and content generation efficiency is improved.
Example procedure
FIG. 5 illustrates a flowchart of an example process 500 for content generation, according to some embodiments of the present disclosure. The process 500 may be implemented at the electronic device 110 and is described below with reference to FIG. 1.
As shown in FIG. 5, at block 510, the electronic device 110 determines a video script to be applied. In an embodiment of the present disclosure, the video script is generated based on an analysis of a set of published video content and indicates at least an event type matching a first time period of the video to be generated.
At block 520, the electronic device 110 determines an interaction event matching the event type from the interaction content associated with the user.
At block 530, the electronic device 110 generates video content corresponding to the video script based on the interaction event.
In some embodiments, determining the video script to apply comprises: a selection of a video script to apply is received from a set of preset scripts.
In some embodiments, determining the video script to apply comprises: and determining the video script to be applied based on the virtual object corresponding to the user in the interactive content.
In some embodiments, the published video content is associated with a virtual object.
In some embodiments, the process 500 further comprises: presenting the video content to an editor; and providing the confirmed video content or the edited video content to the user based on the editor's confirmation operation or editing operation on the video content.
In some embodiments, the video content is first video content corresponding to a first video script, the process 500 further comprising: presenting second video content corresponding to the second video script to the editor; receiving a selection of the editor for the first video content or the second video content; and providing the user with the target video content generated based on the selected video content.
In some embodiments, the video script further indicates at least one of: an audio effect corresponding to a second time period of the video to be generated; a video effect corresponding to a third time period of the video to be generated; a transition effect corresponding to a fourth time period of the video to be generated.
In some embodiments, the interactive content comprises a pair of content, and the video content comprises highlight content associated with the pair of content.
In some embodiments, the video script is generated based on the following process: acquiring published video content via an analysis module; determining a set of target events from the published video content; and constructing a video script corresponding to the published video content based on time information and type information of the set of target events, the time information indicating a time period of the corresponding event in the published video content, the type information indicating an event type of the corresponding event.
In some embodiments, determining a set of target events from published video content includes: a set of target events is determined based on at least one of picture information, audio information, and text information of the published video content.
In some embodiments, determining a set of target events from published video content includes: a set of target events is determined from the published video content based on the popularity information of the published video content, wherein the popularity of the portion of the video content corresponding to the set of target events is greater than a threshold.
In some embodiments, generating a video script corresponding to published video content based on a set of target events includes: a video script is constructed based on time information and type information for a set of target events, the time information indicating a time period of a corresponding event in the published video content and the type information indicating an event type of the corresponding event.
In some embodiments, generating a video script corresponding to published video content based on a set of target events includes: generating input information to a target model based on a set of target events; and generating a video script corresponding to the published video content based on the output information of the object model.
In some embodiments, generating input information to a target model based on a set of target events includes: input information to the target model is generated based on a set of target events and descriptive information associated with the target virtual object.
In some embodiments, the descriptive information includes information indicative of at least one of: scene description information of a virtual scene associated with the target virtual object; role description information about the target virtual object.
In some embodiments, determining the video script to apply comprises: acquiring an initial video script; generating guide information for updating the initial video script based on a preset operation of a user; and acquiring the video script to be applied generated based on the guiding information.
Example apparatus and apparatus
Embodiments of the present disclosure also provide corresponding apparatus for implementing the above-described methods or processes. FIG. 6 illustrates a schematic block diagram of an apparatus 600 for content generation according to some embodiments of the present disclosure. The apparatus 600 may be implemented as or included in the electronic device 110. The various modules/components in the apparatus 600 may be implemented in hardware, software, firmware, or any combination thereof.
The apparatus 600 includes a script determination module 610 configured to determine a video script to be applied, the video script being generated for analysis of a set of published video content, the video script indicating at least a type of event matching a first time period of the video to be generated. The apparatus 600 further includes an event determination module 620 configured to determine an interaction event matching the event type from the interaction content associated with the user. The apparatus 600 further comprises a content generation module 630 configured to generate video content corresponding to the video script based on the interaction event.
In some embodiments, determining the video script to apply comprises: a selection of a video script to apply is received from a set of preset scripts.
In some embodiments, determining the video script to apply comprises: and determining the video script to be applied based on the virtual object corresponding to the user in the interactive content.
In some embodiments, the published video content is associated with a virtual object.
In some embodiments, the apparatus 600 further comprises: a first providing module configured to present the video content to an editor, and to provide the confirmed video content or the edited video content to the user based on the editor's confirmation operation or editing operation on the video content.
In some embodiments, the video content is first video content corresponding to a first video script, the apparatus 600 further comprising: a second providing module configured to present second video content corresponding to a second video script to the editor; receiving a selection of the editor for the first video content or the second video content; and providing the user with the target video content generated based on the selected video content.
In some embodiments, the video script further indicates at least one of: an audio effect corresponding to a second time period of the video to be generated; a video effect corresponding to a third time period of the video to be generated; a transition effect corresponding to a fourth time period of the video to be generated.
In some embodiments, the interactive content comprises match content, and the video content comprises highlight content associated with the match content.
In some embodiments, the video script is generated based on the following process: acquiring, by an analysis module, published video content; determining a set of target events from the published video content; and constructing a video script corresponding to the published video content based on time information and type information of the set of target events, the time information indicating a time period of the corresponding event in the published video content, and the type information indicating an event type of the corresponding event.
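The last step of that process, constructing the script from the detected events, reduces to carrying each event's time period and event type over into a script entry; a minimal sketch, with the event field names assumed for illustration:

```python
def build_script(target_events):
    # Each target event is assumed to carry "start"/"end" (seconds in the
    # published video) and "type" (its event type label).  The script
    # entry preserves exactly that time information and type information.
    return [
        {"period": (ev["start"], ev["end"]), "event_type": ev["type"]}
        for ev in sorted(target_events, key=lambda ev: ev["start"])
    ]

events = [
    {"start": 45.0, "end": 60.0, "type": "team_fight"},
    {"start": 0.0, "end": 12.0, "type": "opening"},
]
print(build_script(events))
```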
In some embodiments, determining a set of target events from published video content includes: a set of target events is determined based on at least one of picture information, audio information, and text information of the published video content.
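As a toy illustration of combining the three modalities (a real system would use scene classifiers, audio analysis, speech recognition or OCR; the per-second alignment and thresholds below are assumptions):

```python
def detect_events(frame_tags, audio_levels, subtitles):
    # frame_tags, audio_levels and subtitles are assumed to be aligned
    # per one-second segment of the published video.  A segment becomes
    # a target event when any modality signals one: a tagged visual
    # event, a loud audio peak, or a keyword in the on-screen text.
    events = []
    for i, (tag, level, text) in enumerate(zip(frame_tags, audio_levels, subtitles)):
        if tag or level > 0.8 or "victory" in text.lower():
            events.append({"start": float(i), "end": float(i + 1),
                           "type": tag or "audio_or_text_peak"})
    return events

print(detect_events(
    frame_tags=["", "team_fight", ""],
    audio_levels=[0.2, 0.9, 0.3],
    subtitles=["", "", "VICTORY"],
))
```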
In some embodiments, determining a set of target events from published video content includes: a set of target events is determined from the published video content based on the popularity information of the published video content, wherein the popularity of the portion of the video content corresponding to the set of target events is greater than a threshold.
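A sketch of the popularity filter, assuming per-segment popularity scores are already available (e.g., normalized replay or comment density; the scoring itself is outside this sketch):

```python
def select_popular_segments(segment_popularity, threshold):
    # segment_popularity maps (start_s, end_s) time periods of the
    # published video to a popularity score; only segments whose score
    # exceeds the threshold yield target events.
    return [seg for seg, score in segment_popularity.items() if score > threshold]

popularity = {(0, 10): 0.20, (45, 60): 0.90, (60, 75): 0.75}
print(select_popular_segments(popularity, threshold=0.7))
```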
In some embodiments, generating a video script corresponding to published video content based on a set of target events includes: a video script is constructed based on time information and type information for a set of target events, the time information indicating a time period of a corresponding event in the published video content and the type information indicating an event type of the corresponding event.
In some embodiments, generating a video script corresponding to published video content based on a set of target events includes: generating input information to a target model based on the set of target events; and generating the video script corresponding to the published video content based on output information of the target model.
In some embodiments, generating input information to a target model based on a set of target events includes: input information to the target model is generated based on a set of target events and descriptive information associated with the target virtual object.
In some embodiments, the descriptive information includes information indicative of at least one of: scene description information of a virtual scene associated with the target virtual object; role description information about the target virtual object.
In some embodiments, the script determination module 610 is further configured to: acquire an initial video script; generate guidance information for updating the initial video script based on a preset operation by the user; and acquire the video script to be applied generated based on the guidance information.
Fig. 7 illustrates a block diagram of an electronic device 700 in which one or more embodiments of the disclosure may be implemented. It should be understood that the electronic device 700 illustrated in fig. 7 is merely exemplary and should not be construed as limiting the functionality and scope of the embodiments described herein. The electronic device 700 shown in fig. 7 may be used to implement the electronic device 110 of fig. 1.
As shown in fig. 7, the electronic device 700 is in the form of a general-purpose electronic device. Components of electronic device 700 may include, but are not limited to, one or more processors or processing units 710, memory 720, storage 730, one or more communication units 740, one or more input devices 750, and one or more output devices 760. The processing unit 710 may be an actual or virtual processor and is capable of performing various processes according to programs stored in the memory 720. In a multiprocessor system, multiple processing units execute computer-executable instructions in parallel to improve the parallel processing capabilities of electronic device 700.
Electronic device 700 typically includes a number of computer storage media. Such media may be any available media accessible by electronic device 700, including, but not limited to, volatile and non-volatile media, removable and non-removable media. The memory 720 may be volatile memory (e.g., registers, cache, Random Access Memory (RAM)), non-volatile memory (e.g., Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory), or some combination thereof. Storage device 730 may be removable or non-removable media and may include machine-readable media such as flash drives, magnetic disks, or any other media capable of storing information and/or data (e.g., training data) and accessible within electronic device 700.
The electronic device 700 may further include additional removable/non-removable, volatile/nonvolatile storage media. Although not shown in fig. 7, a magnetic disk drive for reading from or writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk may be provided. In these cases, each drive may be connected to a bus (not shown) by one or more data medium interfaces. Memory 720 may include a computer program product 727 having one or more program modules configured to perform the various methods or acts of the various embodiments of the present disclosure.
The communication unit 740 enables communication with other electronic devices through a communication medium. Additionally, the functionality of the components of the electronic device 700 may be implemented in a single computing cluster or in multiple computing machines capable of communicating over a communication connection. Thus, the electronic device 700 may operate in a networked environment using logical connections to one or more other servers, a network Personal Computer (PC), or another network node.
The input device 750 may be one or more input devices such as a mouse, keyboard, trackball, etc. The output device 760 may be one or more output devices such as a display, speakers, printer, etc. The electronic device 700 may also communicate with one or more external devices (not shown), such as storage devices, display devices, etc., through the communication unit 740, with one or more devices that enable a user to interact with the electronic device 700, or with any device (e.g., network card, modem, etc.) that enables the electronic device 700 to communicate with one or more other electronic devices, as desired. Such communication may be performed via an input/output (I/O) interface (not shown).
According to an exemplary implementation of the present disclosure, a computer-readable storage medium is provided, having stored thereon computer-executable instructions that are executed by a processor to implement the method described above. According to an exemplary implementation of the present disclosure, there is also provided a computer program product tangibly stored on a non-transitory computer-readable medium and comprising computer-executable instructions that are executed by a processor to implement the method described above.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus, devices, and computer program products implemented according to the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various implementations of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of implementations of the present disclosure has been provided for illustrative purposes, is not exhaustive, and is not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various implementations described. The terminology used herein was chosen in order to best explain the principles of each implementation, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand each implementation disclosed herein.

Claims (19)

1. A method of content generation, comprising:
determining a video script to be applied, the video script being generated based on an analysis of a set of published video content, the video script indicating at least an event type matching a first time period of the video to be generated;
determining an interaction event matching the event type from the interaction content associated with the user; and
generating video content corresponding to the video script based on the interaction event.
2. The method of claim 1, wherein determining a video script to apply comprises:
receiving a selection of the video script to be applied from a set of preset scripts.
3. The method of claim 1, wherein determining a video script to apply comprises:
determining the video script to be applied based on a virtual object corresponding to the user in the interactive content.
4. The method of claim 3, wherein the published video content is associated with the virtual object.
5. The method of claim 1, further comprising:
presenting the video content to an editor; and
providing the confirmed video content or the edited video content to the user based on a confirmation operation or an editing operation of the editor on the video content.
6. The method of claim 5, wherein the video content is first video content corresponding to a first video script, the method further comprising:
presenting second video content corresponding to a second video script to the editor;
receiving a selection of the first video content or the second video content by the editor; and
providing the user with target video content generated based on the selected video content.
7. The method of claim 1, wherein the video script further indicates at least one of:
an audio effect corresponding to a second time period of the video to be generated;
a video effect corresponding to a third time period of the video to be generated;
and a transition effect corresponding to a fourth time period of the video to be generated.
8. The method of claim 1, wherein the interactive content comprises match content, and the video content comprises highlight content associated with the match content.
9. The method of claim 1, wherein the video script is generated based on the following process:
acquiring, by an analysis module, published video content;
determining a set of target events from the published video content; and
generating, based on the set of target events, a video script corresponding to the published video content.
10. The method of claim 9, wherein determining a set of target events from the published video content comprises:
determining the set of target events based on at least one of picture information, audio information, and text information of the published video content.
11. The method of claim 9, wherein determining a set of target events from the published video content comprises:
determining the set of target events from the published video content based on popularity information of the published video content, wherein a popularity of the video content portion corresponding to the set of target events is greater than a threshold.
12. The method of claim 9, wherein generating a video script corresponding to the published video content based on the set of target events comprises:
constructing the video script based on time information and type information of the set of target events, the time information indicating a time period of a corresponding event in the published video content, and the type information indicating an event type of the corresponding event.
13. The method of claim 9, wherein generating a video script corresponding to the published video content based on the set of target events comprises:
generating input information to a target model based on the set of target events; and
generating the video script corresponding to the published video content based on output information of the target model.
14. The method of claim 13, wherein generating input information to a target model based on the set of target events comprises:
generating the input information to the target model based on the set of target events and descriptive information associated with a target virtual object.
15. The method of claim 14, wherein the descriptive information includes information indicative of at least one of:
scene description information of a virtual scene associated with the target virtual object;
role description information about the target virtual object.
16. The method of claim 1, wherein determining a video script to apply comprises:
acquiring an initial video script;
generating guidance information for updating the initial video script based on a preset operation by the user; and
acquiring the video script to be applied, the video script being generated based on the guidance information.
17. An apparatus for content generation, comprising:
a script determination module configured to determine a video script to be applied, the video script being generated based on an analysis of a set of published video content, the video script indicating at least an event type matching a first time period of the video to be generated;
an event determination module configured to determine an interaction event matching the event type from the interaction content associated with the user; and
a content generation module configured to generate video content corresponding to the video script based on the interaction event.
18. An electronic device, comprising:
at least one processing unit; and
at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, the instructions, when executed by the at least one processing unit, causing the electronic device to perform the method of any one of claims 1 to 16.
19. A computer readable storage medium having stored thereon a computer program executable by a processor to implement the method of any of claims 1 to 16.
Priority Applications (1)

Application Number: CN202311651164.6A
Priority Date / Filing Date: 2023-12-04 / 2023-12-04
Title: Method, device, equipment and storage medium for generating content
Publication: CN117641072A (Pending)


Publications (1)

Publication Number: CN117641072A (en)
Publication Date: 2024-03-01

Family

ID=90028502

Family Applications (1)

Application Number: CN202311651164.6A
Title: Method, device, equipment and storage medium for generating content
Status: Pending

Country Status (1)

Country: CN
Publication: CN117641072A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination