CN117041646A - Method, apparatus, device and storage medium for generating media content - Google Patents


Info

Publication number
CN117041646A
Authority
CN
China
Prior art keywords
target
media content
description
item
description item
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310981654.6A
Other languages
Chinese (zh)
Inventor
韩天磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202310981654.6A priority Critical patent/CN117041646A/en
Publication of CN117041646A publication Critical patent/CN117041646A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/47 End-user applications
    • H04N21/485 End-user interface for client configuration

Abstract

Embodiments of the present disclosure provide methods, apparatuses, devices, and storage media for generating media content. The method comprises the following steps: presenting a configuration interface comprising at least one candidate description item for generating media content, the at least one candidate description item being related to a current user's historical behavior within a target platform over a predetermined period of time; determining at least one target description item based on the interaction in the configuration interface; and presenting the target media content generated based on the at least one target description item. In this way, embodiments of the present disclosure can enhance the personalization of the target media content generated for different users and present different media content to different users, thereby helping to improve users' browsing experience of the media content.

Description

Method, apparatus, device and storage medium for generating media content
Technical Field
Example embodiments of the present disclosure relate generally to the field of computers and, more particularly, relate to a method, apparatus, device, and computer-readable storage medium for generating media content.
Background
With advances in computing, electronic devices in various forms can greatly enrich people's daily lives. For example, people may use electronic devices to perform various interactions.
In some interaction scenarios, the electronic device may present the user with media content associated with historical behavior within a target platform, which may be, for example, a weekly report, a monthly report, an annual summary, and so forth. The form of the media content includes, for example, but is not limited to, pictures, audio, video, text, and the like. A good browsing experience for such media content is desired.
Disclosure of Invention
In a first aspect of the present disclosure, a method of generating media content is provided. The method comprises the following steps: presenting a configuration interface comprising at least one candidate description for generating media content, the at least one candidate description being related to a current user's historical behavior within a target platform over a predetermined period of time; determining at least one target description item based on the interaction in the configuration interface; and rendering the target media content generated based on the at least one target description item.
In a second aspect of the present disclosure, an apparatus for generating media content is provided. The device comprises: an interface presentation module configured to present a configuration interface comprising at least one candidate description for generating media content, the at least one candidate description being related to a current user's historical behavior within a target platform over a predetermined period of time; a description item determination module configured to determine at least one target description item based on the interaction in the configuration interface; and a content presentation module configured to present the target media content generated based on the at least one target description item.
In a third aspect of the present disclosure, an electronic device is provided. The electronic device comprises at least one processing unit; and at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit. The instructions, when executed by the at least one processing unit, cause the electronic device to perform the method of the first aspect.
In a fourth aspect of the present disclosure, a computer-readable storage medium is provided. The computer readable storage medium has stored thereon a computer program executable by a processor to implement the method of the first aspect.
It should be understood that what is described in this section of the disclosure is not intended to limit key features or essential features of the embodiments of the disclosure, nor is it intended to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. In the drawings, like or similar reference numerals denote like or similar elements:
FIG. 1 illustrates a schematic diagram of an example environment in which embodiments of the present disclosure may be implemented;
FIGS. 2A-2F illustrate schematic diagrams of examples of configuration interfaces according to some embodiments of the present disclosure;
FIGS. 3A-3B illustrate schematic diagrams of examples of pictures included in target media content according to some embodiments of the present disclosure;
FIG. 4 illustrates a schematic diagram of a process of generating target media content according to some embodiments of the present disclosure;
FIG. 5 illustrates a schematic diagram of a process of generating a guide according to some embodiments of the present disclosure;
FIG. 6 illustrates a flowchart of a process for generating media content, according to some embodiments of the present disclosure;
FIG. 7 illustrates a schematic block diagram of an apparatus for generating media content in accordance with certain embodiments of the present disclosure; and
FIG. 8 illustrates a block diagram of an apparatus capable of implementing various embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure have been illustrated in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather, these embodiments are provided so that this disclosure will be more thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that any section/subsection headings provided herein are not limiting. Various embodiments are described throughout this document, and any type of embodiment may be included under any section/subsection. Furthermore, the embodiments described in any section/subsection may be combined in any manner with any other embodiment described in the same section/subsection and/or in a different section/subsection.
In describing embodiments of the present disclosure, the term "comprising" and the like should be read as open-ended, i.e., "including, but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The term "some embodiments" should be understood as "at least some embodiments". Other explicit and implicit definitions may also be included below.
In this context, unless explicitly stated otherwise, performing a step "in response to a" does not mean that the step is performed immediately after "a", but may include one or more intermediate steps.
It will be appreciated that the data (including but not limited to the data itself, the acquisition, use, storage or deletion of the data) involved in the present technical solution should comply with the corresponding legal regulations and the requirements of the relevant regulations.
It will be appreciated that prior to using the technical solutions disclosed in the embodiments of the present disclosure, the relevant users, which may include any type of rights subjects, such as individuals, enterprises, groups, etc., should be informed and authorized by appropriate means of the types of information, usage ranges, usage scenarios, etc. involved in the present disclosure according to relevant legal regulations.
For example, in response to receiving an active request from a user, prompt information is sent to the relevant user to explicitly prompt the relevant user that the operation requested to be performed will need to obtain and use information to the relevant user, so that the relevant user may autonomously select whether to provide information to software or hardware such as an electronic device, an application program, a server, or a storage medium that performs the operation of the technical solution of the present disclosure according to the prompt information.
As an alternative but non-limiting implementation, in response to receiving an active request from a relevant user, the prompt information may be sent to the relevant user, for example, in a pop-up window, where the prompt information may be presented as text. In addition, the pop-up window may carry a selection control allowing the user to choose "agree" or "disagree" to providing information to the electronic device.
It will be appreciated that the above-described notification and user authorization process is merely illustrative and not limiting of the implementations of the present disclosure, and that other ways of satisfying relevant legal regulations may be applied to the implementations of the present disclosure.
As mentioned briefly above, the electronic device may present media content associated with historical behavior within the target platform to the user. Traditionally, such media content may be compiled, for example, by staff of a target platform (e.g., an application) in the electronic device based on the user's historical behavior within the target platform. In this case, the content structure of the staff-compiled media content is relatively fixed, making the media content essentially the same for different users apart from specific figures and profile details. Because different users behave differently within the target platform, different users may expect media content with different emphases. Conventional staff-compiled media content may therefore fail to meet different users' personalized needs, which may affect their browsing experience of the media content.
Embodiments of the present disclosure propose an improved scheme for generating media content. According to various embodiments of the present disclosure, a configuration interface is presented that includes at least one candidate description item for generating media content. At least one target description item is determined based on the interaction in the configuration interface. Target media content generated based on the at least one target description item is presented. In this way, embodiments of the present disclosure can enhance the personalization of the target media content generated for different users and present different media content to different users, thereby helping to improve users' browsing experience of the media content.
Various example implementations of the scheme are described in further detail below in conjunction with the accompanying drawings. To illustrate the principles and concepts of the embodiments of the disclosure, some of the following description refers to the field of gaming. It will nevertheless be understood that this is merely exemplary and is not intended to limit the scope of the disclosure in any way. Embodiments of the present disclosure can also be applied to various other fields such as simulation, virtual reality, and augmented reality.
Example Environment
FIG. 1 illustrates a schematic diagram of an example environment 100 in which embodiments of the present disclosure may be implemented. As shown in fig. 1, an example environment 100 may include a terminal device 110.
In this example environment 100, a terminal device 110 may run an application 120 that supports virtual scenes. The application 120 may be any suitable type of application for presenting a virtual scene, examples of which may include, but are not limited to: simulation applications, gaming applications, virtual reality applications, augmented reality applications, and the like; embodiments of the disclosure are not limited in this respect. Where the application 120 is a gaming application, it includes, but is not limited to, a first-person shooter (FPS) game, a multiplayer online battle arena (MOBA) game, a strategy simulation game (SLG), a simulated business game, and so forth. The user 140 may interact with the application 120 via the terminal device 110 and/or its attached devices.
In the environment 100 of fig. 1, if the application 120 is in an active state, the terminal device 110 may present an interface 150 associated with the virtual scene through the application 120. At least one screen associated with the virtual scene may be presented in the interface 150. The at least one screen may include a screen associated with a virtual object corresponding to the current user, a screen associated with a virtual object corresponding to other users, a screen corresponding to a non-player character, a screen associated with a place in a virtual scene, and the like. Illustratively, the interface 150 may be a game application interface to present a corresponding game scene. Alternatively, the interface 150 may be another suitable type of interactive interface, which may support the user to control the virtual objects in the interface to perform corresponding actions in the virtual scene.
In some embodiments, terminal device 110 communicates with server 130 to enable provisioning of services for application 120. The terminal device 110 may be any type of mobile terminal, fixed terminal, or portable terminal, including a mobile handset, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, media computer, multimedia tablet, palmtop computer, portable gaming terminal, VR/AR device, personal communication system (Personal Communication System, PCS) device, personal navigation device, personal digital assistant (Personal Digital Assistant, PDA), audio/video player, digital camera/video camera, positioning device, television receiver, radio broadcast receiver, electronic book device, gaming device, or any combination of the preceding, including accessories and peripherals for these devices, or any combination thereof. In some embodiments, terminal device 110 is also capable of supporting any type of interface to the user (such as "wearable" circuitry, etc.).
The server 130 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks, big data, and artificial intelligence platforms. Server 130 may include, for example, a computing system/server, such as a mainframe, an edge computing node, or a computing device in a cloud environment. The server 130 may provide background services for the application 120 supporting the virtual scene in the terminal device 110.
A communication connection may be established between the server 130 and the terminal device 110. The communication connection may be established by wired or wireless means. The communication connection may include, but is not limited to, a Bluetooth connection, a mobile network connection, a universal serial bus (USB) connection, a wireless fidelity (WiFi) connection, etc.; embodiments of the disclosure are not limited in this respect. In an embodiment of the present disclosure, the server 130 and the terminal device 110 may implement signaling interaction through the communication connection between them.
It should be understood that the structure and function of the various elements in environment 100 are described for illustrative purposes only and are not meant to suggest any limitation as to the scope of the disclosure.
Some example embodiments of the present disclosure will be described below with continued reference to the accompanying drawings.
Example interface
To illustrate the interaction mechanisms in the interface more intuitively, a game scenario is used as an example in the description below. By presenting example interfaces, embodiments of the present disclosure help the user understand the corresponding interaction principles.
Fig. 2A-2F illustrate schematic diagrams of examples of configuration interfaces according to some embodiments of the present disclosure. In some embodiments, terminal device 110 may present the configuration interface shown in fig. 2A-2F in response to receiving a view request for media content. For example, in a game scenario, terminal device 110 may determine to receive a view request for media content in response to receiving an operation associated with viewing a war report, thereby presenting a configuration interface as shown in fig. 2A-2F.
It should be understood that the configuration interfaces shown in fig. 2A-2F are merely example interfaces, and that various interface designs may exist in practice. The individual graphical elements in the interface may have different arrangements and different visual representations, one or more of which may be omitted or replaced, and one or more other elements may also be present. Embodiments of the disclosure are not limited in this respect.
In an embodiment of the present disclosure, terminal device 110 may present a configuration interface that includes at least one candidate description item for generating media content. As shown in fig. 2A, terminal device 110 may present a configuration interface 200A. The configuration interface 200A includes a plurality of candidate description items, namely a description item 201, a description item 202, a description item 203, and a description item 204. The plurality of candidate description items relate to the historical behavior of the current user within the target platform over a predetermined period of time. The target platform may be, for example, an application (e.g., application 120), a web page, an applet, etc.; an application is used as an example below. The current user is the user 140 associated with the terminal device 110. The predetermined period of time may be set by the current user or preset by staff of the target platform. The predetermined period of time may be, for example, one day, one week, one month, one quarter, or one in-game season, etc.; this is not limited by the present disclosure, and one week is used as an example below.
In some embodiments, at least one candidate description in the configuration interface may be preset by the relevant staff of the target platform and/or preset by the current user. In some embodiments, at least one candidate description term herein may also be generated based on an analysis of historical behavior. In particular, at least one candidate description herein may be automatically generated by analyzing historical behavior using an analysis system installed on terminal device 110 and/or on server 130 that provides background services to the target platform.
The analysis system may generate a plurality of media content describing aspects of the historical behavior based on the current user's historical behavior within the target platform over the predetermined period of time. The analysis system may in turn score the plurality of media content (e.g., score based on how exciting each of the plurality of media content is) and sort the plurality of media content in descending order based on the scoring results. The analysis system may then determine, based on the ranking results, a set of media content ranked at the front (i.e., a higher-scored set of media content). The analysis system finally generates, based on the set of media content, at least one candidate description item matching the set of media content. For example, the analysis system may generate a set of media content based on the current user's historical behavior within the gaming application over a week. The set of media content may be associated with, for example, a win/loss ratio, highlight moments, interactions with game friends, and the like, respectively. The analysis system may generate the at least one candidate description item in response to determining, for example, that the current user has a high win rate and good interactions for the week.
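The score-sort-select step described above can be sketched as follows. This is a minimal illustrative sketch, not the disclosed implementation; the names (`MediaContent`, `highlight_level`, `select_candidate_descriptions`, `top_k`) and the phrasing of the generated description items are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class MediaContent:
    # Hypothetical model of one generated piece of media content.
    topic: str               # e.g. "win/loss ratio", "highlight moments"
    highlight_level: float   # assumed scoring signal (how exciting the content is)

def select_candidate_descriptions(contents, top_k=4):
    """Score, sort in descending order, keep the top set, and emit one
    candidate description item per kept piece of media content."""
    ranked = sorted(contents, key=lambda c: c.highlight_level, reverse=True)
    return [f"Your {c.topic} this week" for c in ranked[:top_k]]

contents = [
    MediaContent("win/loss ratio", 0.9),
    MediaContent("highlight moments", 0.8),
    MediaContent("friend interactions", 0.7),
    MediaContent("login streak", 0.2),
]
print(select_candidate_descriptions(contents, top_k=3))
```

Any real scoring rule could stand in for `highlight_level`; the essential shape is score, rank, truncate, describe.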
Terminal device 110 may determine at least one target description item based on the interaction in the configuration interface. The at least one target description item is the at least one candidate description item after adjustment. In the case where the at least one candidate description item is a plurality of candidate description items, the interaction herein may include, for example, adjusting the order of the plurality of candidate description items. In some embodiments, the terminal device 110 may, in response to detecting a drag operation on a particular candidate description item among the plurality of candidate description items, adjust the ordering of that candidate description item so as to adjust the order of the plurality of candidate description items. As shown in fig. 2B, the terminal device 110 may adjust the ordering of the description item 203 among the plurality of candidate description items in response to detecting the current user's upward drag operation on the description item 203 in the configuration interface 200B. The terminal device 110 may, for example, in response to detecting that the drag has stopped, determine the ordering of the description item 203 among the plurality of candidate description items, thereby determine the ordering of the plurality of candidate description items, and present the configuration interface 200C as shown in fig. 2C. It is to be understood that the upward drag operation here is merely an example; a downward drag operation is also possible. The adjusted order of the plurality of candidate description items shown in configuration interface 200C is also merely an example, and the plurality of candidate description items may be in other orders.
It is understood that in addition to drag operations, the terminal device 110 may also adjust the order of the plurality of candidate descriptions in response to any other suitable operation, which is not limited by the present disclosure.
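The drag-to-reorder interaction of FIGS. 2B-2C reduces to moving one list element to a new index. A minimal sketch; the function and variable names are illustrative assumptions, not identifiers from the disclosure:

```python
def reorder(items, src, dst):
    """Move items[src] to position dst, shifting the others (drag-and-drop reorder)."""
    items = list(items)   # work on a copy; the caller's list is untouched
    moved = items.pop(src)
    items.insert(dst, moved)
    return items

candidates = ["item 201", "item 202", "item 203", "item 204"]
# Dragging "item 203" upward to the top, as in FIG. 2B:
print(reorder(candidates, src=2, dst=0))
```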
In some embodiments, the interaction herein may further comprise, for example, modifying a text representation of a particular candidate description item of the at least one candidate description item. In some embodiments, terminal device 110 may determine that an adjustment request for a particular candidate description item is received, for example, in response to detecting a click operation on the particular candidate description item of the at least one candidate description item.
In some embodiments, the adjustment request herein may include, for example, a modification request. As shown in fig. 2D, terminal device 110 may present at least one adjustment control in response to detecting a click operation on description item 203 in configuration interface 200D. The at least one adjustment control herein may include, for example, a modification control 230. Terminal device 110 can in turn allow the current user to modify description item 203 in response to detecting a trigger operation on modification control 230. In some embodiments, terminal device 110 may determine the modified description item 203' and present the configuration interface 200E as shown in fig. 2E, for example, based on user input received in description item 203. Description item 203 in configuration interface 200E is updated to description item 203'.
In some embodiments, the adjustment request may also include, for example, a deletion request. As shown in fig. 2D, a delete control 240 may also be included, for example, in the configuration interface 200D presented by the terminal device 110 in response to a click operation on the description item 203. In the event that the at least one candidate description item includes a plurality of candidate description items, terminal device 110 can, in response to detecting a trigger operation on delete control 240, delete description item 203 from the plurality of candidate description items and present the configuration interface 200F as shown in fig. 2F. The plurality of candidate description items presented by configuration interface 200F includes only description item 201, description item 202, and description item 204. In some embodiments, the terminal device 110 may determine that a delete operation for the description item 203 is received and delete the description item 203, for example, in response to detecting a leftward or rightward swipe operation on the description item 203. It will be appreciated that terminal device 110 may also determine that a modification request or a deletion request for a particular candidate description item is received in response to any other suitable operation.
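Both kinds of adjustment request described above are edits to the candidate list. A hedged sketch, under the assumption that the list is the only state involved (the disclosure does not specify an implementation, and all names here are illustrative):

```python
def apply_adjustment(candidates, action, index, new_text=None):
    """Apply a modification or deletion request to the candidate description items."""
    candidates = list(candidates)
    if action == "modify":
        candidates[index] = new_text   # e.g. description item 203 -> 203'
    elif action == "delete":
        del candidates[index]          # e.g. removing description item 203
    else:
        raise ValueError(f"unsupported adjustment request: {action}")
    return candidates

items = ["item 201", "item 202", "item 203", "item 204"]
print(apply_adjustment(items, "modify", 2, "item 203'"))
print(apply_adjustment(items, "delete", 2))
```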
In some embodiments, the interactive operation herein may further include, for example, refreshing the at least one candidate description item. As shown in fig. 2A, configuration interface 200A may also include a refresh control 210. The terminal device 110 may refresh the at least one candidate description item in response to detecting a trigger operation on the refresh control 210. The plurality of candidate description items presented by configuration interface 200A after a refresh is at least partially different from the plurality of candidate description items presented before the refresh. For example, if the plurality of candidate description items presented before refreshing are description item A, description item B, description item C, and description item D, the plurality of candidate description items presented after refreshing may be description item E, description item F, description item C, and description item G.
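The refresh behavior, where the refreshed set is at least partially different from the previous one, might be sketched as below. This is one deterministic possibility among many; the pool of unused description items and the replacement order are assumptions, not details from the disclosure.

```python
def refresh(current, pool):
    """Swap unused descriptions from the pool into the current set so the
    refreshed set differs from the previous one in at least one item
    (whenever the pool has anything new to offer)."""
    unused = [d for d in pool if d not in current]
    refreshed = list(current)
    for i in range(len(refreshed)):
        if not unused:
            break              # pool exhausted; keep the remaining items
        refreshed[i] = unused.pop(0)
    return refreshed

before = ["item A", "item B", "item C", "item D"]
pool = ["item A", "item B", "item C", "item D", "item E", "item F", "item G"]
print(refresh(before, pool))
```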
It will be appreciated that, in addition to the interactive operations described above, the terminal device 110 may also adjust the at least one candidate description item to determine a target description item in other suitable ways. Illustratively, in configuration interface 200A, terminal device 110 may, in response to detecting the voice input "delete description item 203", delete description item 203 and present configuration interface 200F.
In some embodiments, the configuration interface may also include an input control. Terminal device 110 may also determine at least one target description item based on input information obtained via such an input control. Such an input control may be, for example, an input box. The terminal device 110 may determine at least one target description item in response to receiving text information entered by the user in the input box. For example, in response to receiving the text "friend interactions this week" and "newly added heroes this week" entered by the user in the input box, the terminal device 110 may determine these two pieces of text as the at least one target description item. Such an input control may also be, for example, a microphone control. The terminal device 110 may acquire the user's voice information in response to receiving the user's click operation on the microphone control. The terminal device 110 may then determine at least one target description item based on the acquired voice information. It will be appreciated that the input control may have any other suitable style, and that the input information obtained via the input control may be information in any form; this is not limited by the present disclosure.
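Turning free-form text input into target description items could be as simple as splitting it into entries. A sketch under the assumption that each non-empty line in the input box is one item; the disclosure does not prescribe this parsing, and the function name is illustrative:

```python
def targets_from_input(text):
    """Treat each non-empty input line as one target description item."""
    return [line.strip() for line in text.splitlines() if line.strip()]

user_input = "friend interactions this week\nnewly added heroes this week\n"
print(targets_from_input(user_input))
```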
Further, the terminal device 110 may present the target media content generated based on the at least one target description item. In some embodiments, the terminal device 110 may present the target media content in response to detecting a media content presentation request. As shown in fig. 2A, configuration interface 200A may also include a continue control 220. The terminal device 110 may, in response to detecting a trigger operation on the continue control 220, determine that the current user has completed adjusting the at least one candidate description item and determine the adjusted at least one target description item. The terminal device 110 may in turn present the target media content generated based on the at least one target description item. It will be appreciated that, similar to the interactive operations, the terminal device 110 may also determine that a media content presentation request is received in other ways. For example, terminal device 110 may present the target media content in response to detecting the voice input "continue".
In some embodiments, the at least one target description item herein includes a plurality of target description items, and the target media content herein includes a plurality of content portions corresponding to the plurality of target description items, each describing a respective aspect of the historical behavior. In some embodiments, since the plurality of target description items are associated with a plurality of aspects of the historical behavior, the plurality of content portions herein may be in one-to-one correspondence with the plurality of target description items. For example, if 4 target description items are included, the target media content may include 4 content portions, each describing the corresponding target description item.
In some embodiments, the target media content presented by terminal device 110 includes a plurality of pictures corresponding to a plurality of target description items, wherein each picture is generated based on the corresponding target description item. Fig. 3A-3B illustrate schematic diagrams of examples of pictures included in target media content according to some embodiments of the present disclosure. As shown in fig. 3A and 3B, a picture 300A may be generated based on the description item 201, for example, and a picture 300B may be generated based on the description item 202, for example.
In some embodiments, the target media content may present picture-switching prompt information. The picture-switching prompt information is used to prompt the user to switch the picture being browsed so as to view other content of the target media content. As shown in fig. 3A, the picture 300A may include, for example, prompt information 330. The terminal device 110 may switch to presenting the picture 300B, for example, in response to receiving a slide-up operation by the user on the picture 300A.
In some embodiments, terminal device 110 may switch to presenting a configuration interface containing at least one target description item based on a current user's return request for target media content. In some embodiments, terminal device 110 may determine that a return request was received in response to detecting a trigger operation to a return control included with the target media content, and switch to presenting the configuration interface. As shown in fig. 3A, the picture 300A may also include a return control 310, for example. The terminal device 110 may, for example, switch to presenting a configuration interface including at least one target description item in response to detecting a trigger operation for the return control 310.
In some embodiments, the terminal device 110 may also cause the target media content to be shared to a particular user and/or a particular platform based on a sharing request of the current user for the target media content. In some embodiments, the terminal device 110 may determine that the sharing request is received in response to detecting a trigger operation on a sharing control included in the target media content, so that the target media content is shared to the particular user and/or the particular platform. As shown in fig. 3A and 3B, the pictures 300A and 300B may also include, for example, a sharing control 340. The terminal device 110 may cause the target media content to be shared to a particular user and/or a particular platform, for example, in response to detecting a trigger operation on the sharing control 340.
In some embodiments, a picture included in the target media content may include multiple sharing controls, and the roles of the multiple sharing controls may be the same or different. As shown in fig. 3A, the picture 300A may also include a sharing control 320. In response to detecting a trigger operation on the sharing control 320, the terminal device 110 may, for example, share only the picture 300A currently being presented, i.e., share only the picture 300A to a particular user and/or a particular platform.
Thus, the terminal device 110 is capable of automatically generating at least one candidate description item for generating media content based on the current user's historical behavior within the target platform. The terminal device 110 may also determine at least one target description item from the at least one candidate description item based on the current user's interaction with the at least one candidate description item, and present target media content corresponding to the at least one target description item. In this way, the personalization of the target media content generated for different users can be improved, different media content can be presented for different users, and the user's experience of browsing the media content can be improved.
Generation of example target media content
The user interaction and the different content presented by the terminal device 110 are described above in connection with figs. 2A to 3B; the generation of the target media content described above is described below in connection with figs. 4 to 5. The target media content may be generated locally by the terminal device 110 or may be generated by the server 130. The terminal device 110 may send the at least one target description item to the server 130 and obtain from the server 130 the target media content generated for the at least one target description item. For convenience of description, the electronic device that generates the target media content is referred to below as the target device. In some embodiments, the target device may utilize any suitable means to generate the target media content. For example, the target device may utilize at least one machine learning model to generate the target media content. Further, the following description, with reference to the accompanying drawings, takes as an example the case where the at least one target description item includes a plurality of target description items. It will be appreciated that in the case where the at least one target description item includes only one target description item, the target device may still generate the target media content in any suitable manner.
Fig. 4 illustrates a schematic diagram of an example architecture 400 for generating target media content, according to some embodiments of the present disclosure. As shown in fig. 4, the target device may obtain a plurality of target description items 401 (including target description items 401-1, 401-2, … …, 401-N, where N is a positive integer). The target device may generate a set of guidance items 402 (including guidance items 402-1, 402-2, … …, 402-N) based on the plurality of target description items 401. The guidance items in the set 402 are in one-to-one correspondence with the plurality of target description items 401, i.e., the number of guidance items in the set 402 is the same as the number of target description items 401.
Regarding the particular manner in which the set of guidance items 402 is generated, in some embodiments, for a first description item of the plurality of target description items 401, the target device may determine a plurality of preset guidance items corresponding to the first description item. The plurality of preset guidance items may be stored in the target device in advance. For example, if the first description item is "overall performance this week", the target device may store in advance a plurality of preset guidance items such as "Excellent overall performance this week, win rate exceeding most players, a hero in a standing pose" and "Poor overall performance this week, win rate below most players, a hero in a kneeling pose". The target device then determines, based on the historical behavior of the current user, a first guidance item corresponding to the first description item from the plurality of preset guidance items. This first guidance item may be the guidance item in the set of guidance items 402 that corresponds to the first description item. For example, if the historical behavior in the target platform indicates that the current user has a higher win rate this week, the target device may determine, from the plurality of preset guidance items, that the guidance item matching the target description item "overall performance this week" is "Excellent overall performance this week, win rate exceeding most players, a hero in a standing pose".
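As an illustrative sketch of this preset-matching branch, the selection can be modeled as choosing the first stored guidance item whose condition matches a statistic derived from the user's historical behavior. The threshold, data layout, and guidance wording below are assumptions for the example, not specifics of the disclosure.

```python
# Illustrative only: thresholds, dictionary keys and guidance wording
# are assumptions for this sketch.
PRESET_GUIDANCE = {
    "overall performance this week": [
        (lambda h: h["win_rate"] >= 0.5,
         "Excellent overall performance this week; win rate above most players"),
        (lambda h: True,  # fallback preset guidance item
         "Overall performance this week was below most players"),
    ],
}

def select_guidance_item(description_item, history):
    """Return the first preset guidance item whose predicate matches
    the user's historical behavior."""
    for matches, guidance in PRESET_GUIDANCE[description_item]:
        if matches(history):
            return guidance
    raise LookupError("no preset guidance item matched")

print(select_guidance_item("overall performance this week",
                           {"win_rate": 0.62}))
```

Because the presets are ordered from most to least specific, the final catch-all predicate guarantees that every description item resolves to some guidance item.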
In some embodiments, the set of guidance items 402 may also be generated by the target device using a machine learning model. For a second description item of the plurality of target description items, the target device may generate input information to a second model based on the second description item and the historical behavior. The second description item may be the same description item as the first description item, or may be a different description item, which is not limited by the present disclosure. The second model here may be, for example, a language model. Language models include, but are not limited to, Feed-Forward Neural Networks (FNNs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and the like. The second model may output a second guidance item corresponding to the input information. The target device may obtain the second guidance item generated based on the input information from the second model. This second guidance item may be the guidance item in the set of guidance items 402 that corresponds to the second description item. Fig. 5 illustrates a schematic diagram of an example architecture 500 for generating a guidance item according to some embodiments of the present disclosure. As shown in fig. 5, the target description item 501 and the historical behavior 502 may be provided together as input information to the second model 510. The second model 510 may output the guidance item 503 corresponding to the target description item 501. By thus generating guidance items using a machine learning model, the target device can sequentially generate, using the second model 510, the set of guidance items 402 respectively corresponding to the plurality of target description items 401.
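A minimal sketch of this model-based branch (architecture 500) follows. The language model is stubbed out, since the disclosure does not fix a concrete model, and the input-serialization format and all names are assumptions for illustration.

```python
# Illustrative sketch: the second model is stubbed; the serialization
# format is an assumption, not part of the disclosure.

def build_model_input(description_item, history):
    """Combine one description item with behavior statistics into a
    single input string for the second model (a language model)."""
    stats = "; ".join(f"{k}={v}" for k, v in sorted(history.items()))
    return f"description: {description_item} | history: {stats}"

def generate_guidance_items(description_items, history, second_model):
    """Produce one guidance item per description item, in order,
    preserving the one-to-one correspondence of architecture 400."""
    return [second_model(build_model_input(d, history))
            for d in description_items]

# Stub standing in for the trained language model.
def stub_second_model(text):
    return f"guidance derived from [{text}]"

guidance = generate_guidance_items(
    ["overall performance this week", "newly added heroes this week"],
    {"win_rate": 0.62, "games": 42},
    stub_second_model,
)
print(len(guidance))  # one guidance item per description item
```

Any sequence-to-sequence language model could replace the stub; the sketch only fixes the shape of the data flow, not the model family.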
Referring back to fig. 4, the set of guidance items 402 is provided by the target device to the first model 410. The first model 410 may generate target media content 404 that matches the plurality of target description items 401 based on the input set of guidance items 402. The first model 410 here may be, for example, an image generation model. Image generation models include, but are not limited to, Feed-Forward Neural Networks (FNNs), Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), and the like.
In some embodiments, the target device may also provide image resources 403 associated with the current user to the first model 410 for the first model 410 to generate the target media content. The image resources 403 may include, for example, at least one of an image identifier of the current user and an image of at least one virtual character within the target platform. The image identifier here includes, but is not limited to, a user avatar of the current user, a picture taken by the user (e.g., a user selfie), and so on. The at least one virtual character is determined based on the historical behavior of the current user. For example, the at least one virtual character herein may be a virtual character used by the current user more than a preset threshold number of times within the predetermined period of time. The target device may determine such at least one virtual character as a frequently used virtual character of the current user. For example, if the image resources 403 include a virtual character A, and the guidance item 402 input to the first model 410 is "Poor overall performance this week, win rate below most players, a hero in a kneeling pose", the first model 410 may output target media content 404 including the virtual character A in a kneeling pose. If the image resources 403 include a selfie of the current user and a virtual character B, and the guidance item 402 input to the first model 410 is "win rate exceeding most players, a hero in a standing pose", the first model 410 may output target media content 404 including the virtual character B in a standing pose together with a face image from the current user's selfie.
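The role of the image resources alongside the guidance items can be sketched as follows. The image generation model is stubbed and the usage-count threshold is an assumption; the disclosure only requires that frequently used virtual characters and the user's image identifier be made available to the first model.

```python
# Illustrative sketch: the first model is stubbed; the threshold and
# all names are assumptions for this example.

def frequent_characters(usage_counts, threshold=3):
    """Virtual characters used more than `threshold` times in the
    period are treated as image resources for the current user."""
    return [c for c, n in usage_counts.items() if n > threshold]

def generate_target_media_content(guidance_items, image_resources,
                                  first_model):
    """One generated picture (content part) per guidance item, each
    conditioned on the shared image resources."""
    return [first_model(g, image_resources) for g in guidance_items]

# Stub standing in for the trained image generation model.
def stub_first_model(guidance, resources):
    return {"guidance": guidance, "resources": list(resources)}

resources = ["user_avatar.png"] + frequent_characters({"A": 5, "B": 1})
pictures = generate_target_media_content(
    ["a hero in a standing pose"], resources, stub_first_model)
print(pictures[0]["resources"])
```

The same resource list is deliberately reused across all guidance items, so every content part of the target media content can feature the user's avatar and frequently used characters.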
Thus, the target device may automatically determine a corresponding set of guide terms based on the plurality of target description terms obtained. The target device may in turn generate target media content corresponding to the plurality of target description items based on the set of guide items. In this way, the target media content can be conveniently and rapidly generated, and the efficiency of generating the target media content can be improved.
Example procedure
Fig. 6 illustrates a flow chart of a process 600 for generating media content according to some embodiments of the present disclosure. Process 600 may be implemented at terminal device 110. The process 600 is described below with reference to fig. 1.
At block 610, the terminal device 110 presents a configuration interface including at least one candidate description item for generating media content, the at least one candidate description item being related to a current user's historical behavior within a target platform over a predetermined period of time.

At block 620, the terminal device 110 determines at least one target description item based on an interaction operation in the configuration interface.

At block 630, the terminal device 110 presents target media content generated based on the at least one target description item.
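Blocks 610 to 630 can be summarized as the following hypothetical pipeline; the user interface and generation steps are abstracted into callables, and all names are assumptions for illustration only.

```python
# Illustrative sketch of process 600; not part of the disclosure.

def run_process_600(candidate_items, interact, generate, present):
    """Sketch of process 600: candidate presentation (block 610) is
    folded into `interact`, which returns the user-adjusted target
    description items (block 620); generated content is then
    presented (block 630)."""
    target_items = interact(candidate_items)   # blocks 610-620
    content = generate(target_items)           # content generation
    present(content)                           # block 630
    return target_items, content

shown = []
targets, content = run_process_600(
    ["overall performance this week"],
    interact=lambda items: items,              # user keeps candidates as-is
    generate=lambda items: [f"picture for {i}" for i in items],
    present=shown.extend,
)
print(shown)
```

The sketch makes explicit that the generation step consumes exactly the description items produced by the interaction step, matching the one-to-one correspondence described for architecture 400.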
In some embodiments, the at least one target description item includes a plurality of target description items, and the target media content includes a plurality of content portions corresponding to the plurality of target description items, each describing a respective aspect of the historical behavior.

In some embodiments, the target media content includes a plurality of pictures corresponding to the plurality of target description items, wherein each picture is generated based on the corresponding target description item.

In some embodiments, the at least one candidate description item is generated based on an analysis of the historical behavior.

In some embodiments, the interaction operation includes at least one of: adjusting an order of the at least one candidate description item; refreshing the at least one candidate description item; modifying a textual representation of a particular candidate description item of the at least one candidate description item.
In some embodiments, the target media content is generated based on the following process: determining, by a target device, a set of guidance items, the set of guidance items being generated based at least on the plurality of target description items; and providing the set of guidance items to a first model to obtain the target media content generated by the first model based on the set of guidance items.

In some embodiments, determining, by the target device, the set of guidance items comprises: for a first description item of the plurality of target description items: determining a plurality of preset guidance items corresponding to the first description item; and determining, based on the historical behavior of the current user, a first guidance item from the plurality of preset guidance items as the guidance item in the set of guidance items corresponding to the first description item.

In some embodiments, determining, by the target device, the set of guidance items comprises: for a second description item of the plurality of target description items: generating input information to a second model based on the second description item and the historical behavior; and obtaining, from the second model, a second guidance item generated based on the input information as the guidance item in the set of guidance items corresponding to the second description item.
In some embodiments, the first model is an image generation model and the second model is a language model.
In some embodiments, the process 600 further comprises: image resources associated with the current user are provided to the first model for use in generating the target media content by the first model.
In some embodiments, the image resources include at least one of: an image identifier of a current user; an image of at least one virtual character within the target platform, the at least one virtual character being determined based on historical behavior of the current user.
In some embodiments, the process 600 further comprises: the target media content is shared to a particular user and/or a particular platform based on a sharing request of the current user for the target media content.
In some embodiments, the configuration interface further includes an input control, and determining the at least one target description item based on the interaction operation in the configuration interface includes: determining the at least one target description item based on input information obtained via the input control.
Example apparatus and apparatus
Embodiments of the present disclosure also provide corresponding apparatus for implementing the above-described methods or processes. Fig. 7 illustrates a schematic block diagram of an apparatus 700 for generating media content according to some embodiments of the present disclosure. The apparatus 700 may be implemented as or included in the terminal device 110 and/or the server 130. The various modules/components in apparatus 700 may be implemented in hardware, software, firmware, or any combination thereof.
As shown in fig. 7, the apparatus 700 includes an interface presentation module 710 configured to present a configuration interface including at least one candidate description item for generating media content, the at least one candidate description item being related to a current user's historical behavior within a target platform over a predetermined period of time. The apparatus 700 further comprises a description item determination module 720 configured to determine at least one target description item based on an interaction operation in the configuration interface. The apparatus 700 further includes a content presentation module 730 configured to present target media content generated based on the at least one target description item.
In some embodiments, the at least one target description item includes a plurality of target description items, and the target media content includes a plurality of content portions corresponding to the plurality of target description items, each describing a respective aspect of the historical behavior.

In some embodiments, the target media content includes a plurality of pictures corresponding to the plurality of target description items, wherein each picture is generated based on the corresponding target description item.

In some embodiments, the at least one candidate description item is generated based on an analysis of the historical behavior.

In some embodiments, the interaction operation includes at least one of: adjusting an order of the at least one candidate description item; refreshing the at least one candidate description item; modifying a textual representation of a particular candidate description item of the at least one candidate description item.
In some embodiments, the apparatus 700 further comprises a content generation module configured to: determine a set of guidance items, the set of guidance items being generated based at least on the plurality of target description items; and provide the set of guidance items to a first model to obtain the target media content generated by the first model based on the set of guidance items.

In some embodiments, the content generation module is further configured to: for a first description item of the plurality of target description items: determine a plurality of preset guidance items corresponding to the first description item; and determine, based on the historical behavior of the current user, a first guidance item from the plurality of preset guidance items as the guidance item in the set of guidance items corresponding to the first description item.

In some embodiments, the content generation module is further configured to: for a second description item of the plurality of target description items: generate input information to a second model based on the second description item and the historical behavior; and obtain, from the second model, a second guidance item generated based on the input information as the guidance item in the set of guidance items corresponding to the second description item.
In some embodiments, the first model is an image generation model and the second model is a language model.
In some embodiments, the apparatus 700 further comprises: an image resource providing module is configured to provide image resources associated with a current user to the first model for the first model to generate target media content.
In some embodiments, the image resources include at least one of: an image identifier of a current user; an image of at least one virtual character within the target platform, the at least one virtual character being determined based on historical behavior of the current user.
In some embodiments, the apparatus 700 further comprises: and the sharing module is configured to enable the target media content to be shared to a specific user and/or a specific platform based on the sharing request of the current user for the target media content.
In some embodiments, the configuration interface further includes an input control, and the description item determination module 720 includes a target description item determination module configured to determine the at least one target description item based on input information acquired via the input control.
The modules included in apparatus 700 may be implemented in a variety of ways, including software, hardware, firmware, or any combination thereof. In some embodiments, one or more modules may be implemented using software and/or firmware, such as machine-executable instructions stored on a storage medium. In addition to or in lieu of machine-executable instructions, some or all of the modules in apparatus 700 may be implemented at least in part by one or more hardware logic components. By way of example and not limitation, exemplary types of hardware logic components that can be used include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
Fig. 8 illustrates a block diagram of an electronic device 800 in which one or more embodiments of the disclosure may be implemented. It should be understood that the electronic device 800 illustrated in fig. 8 is merely exemplary and should not be construed as limiting the functionality and scope of the embodiments described herein. The electronic device 800 illustrated in fig. 8 may be used to implement the terminal device 110 and/or the server 130 of fig. 1.
As shown in fig. 8, the electronic device 800 is in the form of a general-purpose electronic device. Components of electronic device 800 may include, but are not limited to, one or more processors or processing units 810, memory 820, storage device 830, one or more communication units 840, one or more input devices 850, and one or more output devices 860. The processing unit 810 may be a real or virtual processor and is capable of performing various processes according to programs stored in the memory 820. In a multiprocessor system, multiple processing units execute computer-executable instructions in parallel to increase the parallel processing capabilities of electronic device 800.
Electronic device 800 typically includes multiple computer storage media. Such media may be any available media accessible by electronic device 800, including, but not limited to, volatile and non-volatile media, removable and non-removable media. The memory 820 may be volatile memory (e.g., registers, cache, Random Access Memory (RAM)), non-volatile memory (e.g., Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory), or some combination thereof. Storage device 830 may be a removable or non-removable medium and may include a machine-readable medium such as a flash drive, a magnetic disk, or any other medium that can store information and/or data (e.g., training data for training) and that can be accessed within electronic device 800.
The electronic device 800 may further include additional removable/non-removable, volatile/nonvolatile storage media. Although not shown in fig. 8, a magnetic disk drive for reading from or writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk may be provided. In these cases, each drive may be connected to a bus (not shown) by one or more data medium interfaces. Memory 820 may include a computer program product 825 having one or more program modules configured to perform the various methods or acts of the various embodiments of the present disclosure.
The communication unit 840 enables communication with other electronic devices through a communication medium. Additionally, the functionality of the components of the electronic device 800 may be implemented in a single computing cluster or in multiple computing machines capable of communicating over a communications connection. Thus, the electronic device 800 may operate in a networked environment using logical connections to one or more other servers, a network Personal Computer (PC), or another network node.
The input device 850 may be one or more input devices such as a mouse, keyboard, trackball, etc. The output device 860 may be one or more output devices such as a display, speakers, printer, etc. The electronic device 800 may also communicate with one or more external devices (not shown), such as storage devices, display devices, etc., with one or more devices that enable a user to interact with the electronic device 800, or with any device (e.g., network card, modem, etc.) that enables the electronic device 800 to communicate with one or more other electronic devices, as desired, via the communication unit 840. Such communication may be performed via an input/output (I/O) interface (not shown).
According to an exemplary implementation of the present disclosure, a computer-readable storage medium having stored thereon computer-executable instructions, wherein the computer-executable instructions are executed by a processor to implement the method described above is provided. According to an exemplary implementation of the present disclosure, there is also provided a computer program product tangibly stored on a non-transitory computer-readable medium and comprising computer-executable instructions that are executed by a processor to implement the method described above.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus, devices, and computer program products implemented according to the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various implementations of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of implementations of the present disclosure has been provided for illustrative purposes, is not exhaustive, and is not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various implementations described. The terminology used herein was chosen in order to best explain the principles of each implementation, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand each implementation disclosed herein.

Claims (16)

1. A method of generating media content, comprising:
presenting a configuration interface comprising at least one candidate description item for generating media content, the at least one candidate description item being related to a current user's historical behavior within a target platform over a predetermined period of time;

determining at least one target description item based on an interaction operation in the configuration interface; and

presenting target media content generated based on the at least one target description item.
2. The method of claim 1, wherein the at least one target description item comprises a plurality of target description items, and the target media content comprises a plurality of content portions corresponding to the plurality of target description items for describing aspects of the historical behavior.
3. The method of claim 2, wherein the target media content comprises a plurality of pictures corresponding to the plurality of target description items, wherein each picture is generated based on the corresponding target description item.
4. The method of claim 1, wherein the at least one candidate description item is generated based on an analysis of the historical behavior.
5. The method of claim 1, wherein the interaction comprises at least one of:
adjusting an order of the at least one candidate description item;
refreshing the at least one candidate description item;
modifying a text representation of a particular candidate description item of the at least one candidate description item.
6. The method of claim 1, wherein the target media content is generated based on:
determining, by a target device, a set of guidance items, the set of guidance items being generated based at least on the at least one target description item; and
providing the set of guidance items to a first model to obtain the target media content generated by the first model based on the set of guidance items.
7. The method of claim 6, wherein determining, by the target device, the set of guidance items comprises:
for a first description item of the at least one target description item:
determining a plurality of preset guidance items corresponding to the first description item; and
determining, based on the historical behavior of the current user, a first guidance item from the plurality of preset guidance items to serve as the guidance item corresponding to the first description item in the set of guidance items.
8. The method of claim 6, wherein determining, by the target device, the set of guidance items comprises:
for a second description item of the at least one target description item:
generating input information for a second model based on the second description item and the historical behavior; and
obtaining, from the second model, a second guidance item generated based on the input information, to serve as the guidance item corresponding to the second description item in the set of guidance items.
9. The method of claim 8, wherein the first model is an image generation model and the second model is a language model.
10. The method of claim 6, further comprising:
providing image resources associated with the current user to the first model for use by the first model in generating the target media content.
11. The method of claim 10, wherein the image resources comprise at least one of:
an image identifier of the current user;
an image of at least one virtual character within the target platform, the at least one virtual character being determined based on the historical behavior of the current user.
12. The method of claim 1, further comprising:
enabling, based on a sharing request of the current user for the target media content, the target media content to be shared with a specific user and/or a specific platform.
13. The method of claim 1, wherein the configuration interface further comprises an input control, and determining at least one target description item based on an interaction in the configuration interface comprises:
determining the at least one target description item based on input information obtained via the input control.
14. An apparatus for generating media content, comprising:
an interface presentation module configured to present a configuration interface comprising at least one candidate description item for generating media content, the at least one candidate description item being related to a current user's historical behavior within a target platform over a predetermined period of time;
a description item determination module configured to determine at least one target description item based on an interaction in the configuration interface; and
a content presentation module configured to present target media content generated based on the at least one target description item.
15. An electronic device, comprising:
at least one processing unit; and
at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit which, when executed by the at least one processing unit, cause the electronic device to perform the method of any one of claims 1 to 13.
16. A computer readable storage medium having stored thereon a computer program executable by a processor to implement the method of any of claims 1 to 13.
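Claims 6 to 9 describe a pipeline in which a target device assembles one guidance item per target description item, drawing either on preset guidance items matched against the user's historical behavior (claim 7) or on a second, language model (claim 8), and then provides the resulting set to a first, image generation model. The following is a minimal Python sketch of that flow; every function name, the word-overlap relevance score, and the toy models are illustrative assumptions, not anything specified by the claims:

```python
# Hypothetical sketch of the guidance-item flow in claims 6-8. All names
# (select_preset_guidance, build_guidance_set, generate_media) and the
# scoring heuristic are illustrative; the claims do not define an API.

def select_preset_guidance(description_item, history, presets):
    """Claim 7 path: among the preset guidance items for a description item,
    pick the one whose words overlap most with the user's history entries."""
    def score(guidance):
        words = set(guidance.split())
        return sum(len(words & set(entry.split())) for entry in history)
    return max(presets[description_item], key=score)

def build_guidance_set(target_items, history, presets, language_model):
    """Claims 6-8: assemble one guidance item per target description item."""
    guidance = []
    for item in target_items:
        if item in presets:
            # Claim 7: choose among preset guidance items using the history.
            guidance.append(select_preset_guidance(item, history, presets))
        else:
            # Claim 8: ask a second (language) model to generate a guidance item.
            guidance.append(language_model(f"{item}: {history}"))
    return guidance

def generate_media(target_items, history, presets, language_model, image_model):
    """Claim 6: provide the guidance set to a first (image generation) model."""
    return image_model(build_guidance_set(target_items, history, presets, language_model))
```

In this reading, the "second model" only runs for description items lacking preset guidance, which is one plausible way the two dependent claims could coexist in a single embodiment.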
CN202310981654.6A 2023-08-04 2023-08-04 Method, apparatus, device and storage medium for generating media content Pending CN117041646A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310981654.6A CN117041646A (en) 2023-08-04 2023-08-04 Method, apparatus, device and storage medium for generating media content

Publications (1)

Publication Number Publication Date
CN117041646A true CN117041646A (en) 2023-11-10

Family

ID=88632957


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination