CN117785002A - Method, apparatus, device and storage medium for image generation


Info

Publication number
CN117785002A
Authority
CN
China
Prior art keywords
image
avatar
interface
terminal device
portal
Prior art date
Legal status
Pending
Application number
CN202311685883.XA
Other languages
Chinese (zh)
Inventor
陈嘉俊
迈克尔·布津诺威
刘旭
郦橙
吕国伟
王晓露
赵双琳
赵一新
刘晶
桑燊
黄友才
张陈靓
黎振邦
J·古齐
陈虒鼎
杨天宇
李爽
阿什温·巴德里·斯里曼纳拉亚南
杰弗里·J-J·陈
阿吉特·拉蒂
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Lemon Inc Cayman Island
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Lemon Inc Cayman Island
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd and Lemon Inc Cayman Island
Priority to CN202311685883.XA
Publication of CN117785002A


Landscapes

  • Information Transfer Between Computers (AREA)

Abstract

According to embodiments of the present disclosure, methods, apparatuses, devices, and storage media for image generation are provided. The method includes: presenting a first image of a first avatar having a first element, together with an element usage portal for the first element, the first avatar including media data representing appearance features of a first object; and, based on a trigger of the element usage portal, presenting a second image of a second avatar having the first element, the second avatar including media data representing appearance features of a second object. In this way, interactivity between different users is enhanced, thereby improving the social experience and engagement.

Description

Method, apparatus, device and storage medium for image generation
Technical Field
Example embodiments of the present disclosure relate generally to the field of computers and, more particularly, to methods, apparatuses, devices, and computer-readable storage media for image generation.
Background
With the development of computer technology, content sharing applications are designed to provide various services to users. In such applications, users may interact with one another by means of content. For example, a user may browse, comment on, and forward various types of content published by other users in an application, including media content such as videos, images, image sets, and audio. How to increase interactivity between users is an important concern for such applications.
Disclosure of Invention
In a first aspect of the present disclosure, a method for image generation is provided. The method includes: presenting a first image of a first avatar having a first element, together with an element usage portal for the first element, the first avatar including media data representing appearance features of a first object; and, based on a trigger of the element usage portal, presenting a second image of a second avatar having the first element, the second avatar including media data representing appearance features of a second object.
In a second aspect of the present disclosure, an apparatus for image generation is provided. The apparatus comprises: a first rendering module configured to render a first image of a first avatar having a first element and an element usage portal for the first element, the first avatar including media data representing appearance features of a first object; and a second rendering module configured to render, based on a trigger of the element usage portal, a second image of a second avatar having the first element, the second avatar including media data representing appearance features of a second object.
In a third aspect of the present disclosure, an electronic device is provided. The device comprises at least one processing unit, and at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit. The instructions, when executed by the at least one processing unit, cause the device to perform the method of the first aspect.
In a fourth aspect of the present disclosure, a computer-readable storage medium is provided. The computer readable storage medium has stored thereon a computer program executable by a processor to implement the method of the first aspect.
It should be understood that the content described in this section of the disclosure is not intended to identify key or essential features of the embodiments of the disclosure, nor is it intended to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. In the drawings, like or similar reference numerals denote like or similar elements:
FIG. 1 illustrates a schematic diagram of an example environment in which embodiments of the present disclosure may be implemented;
FIGS. 2A-2Z illustrate schematic diagrams of example interfaces for avatar creation according to some embodiments of the present disclosure;
FIGS. 3A-3I illustrate schematic diagrams of example interfaces for reusing styles according to some embodiments of the present disclosure;
FIGS. 4A-4E illustrate schematic diagrams of example interfaces for style adjustment according to some embodiments of the present disclosure;
FIGS. 5A-5J illustrate schematic diagrams of example interfaces for providing style adjustment information according to some embodiments of the present disclosure;
FIG. 6 illustrates a flow chart of a process for image generation according to some embodiments of the present disclosure;
FIG. 7 illustrates a block diagram of an apparatus for image generation according to some embodiments of the present disclosure; and
FIG. 8 illustrates a block diagram of an electronic device in which one or more embodiments of the present disclosure may be implemented.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure have been illustrated in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather, these embodiments are provided so that this disclosure will be more thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
In describing embodiments of the present disclosure, the term "comprising" and its variants should be read as open-ended, i.e., "including, but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The term "some embodiments" should be understood as "at least some embodiments". Other definitions, explicit and implicit, may also be included below.
In this context, unless explicitly stated otherwise, performing a step "in response to A" does not mean that the step is performed immediately after "A"; one or more intermediate steps may be included.
It will be appreciated that the data involved in the present technical solution (including but not limited to the data itself and its acquisition or use) should comply with applicable laws, regulations, and related requirements.
It will be appreciated that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed of, and grant authorization for, the type, scope of use, and usage scenarios of the personal information involved, in an appropriate manner in accordance with relevant laws and regulations.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly indicate that the requested operation will require acquiring and using the user's personal information. The user may thus autonomously choose, according to the prompt information, whether to provide personal information to the software or hardware (such as an electronic device, application, server, or storage medium) that performs the operations of the technical solution of the present disclosure.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user, for example, in the form of a popup window, in which the prompt information may be presented as text. In addition, the popup window may carry a selection control allowing the user to choose "consent" or "decline" to provide personal information to the electronic device.
It will be appreciated that the above notification and user authorization process is merely illustrative and does not limit the implementations of the present disclosure; other ways of satisfying relevant legal regulations may also be applied to the implementations of the present disclosure.
As mentioned briefly above, users may browse, comment on, and forward media content published by other users in an application to interact with those users. As machine learning technology has evolved, digitization tools based on machine learning models have begun to be used to create avatars for users. A user may also generate images using an avatar. However, in such schemes, different users generate images of themselves independently, without interaction.
To this end, according to embodiments of the present disclosure, an improved image generation scheme is proposed. According to an aspect of embodiments of the present disclosure, a first image of a first avatar having a first element, and an element usage portal for the first element, are presented, the first avatar including media data representing appearance features of a first object. Based on a trigger of the element usage portal, a second image of a second avatar having the first element is presented, the second avatar including media data representing appearance features of a second object.
According to embodiments of the present disclosure, if a user views an image with a certain element that another user generated using an avatar, the viewing user may also generate an image having the same or a similar element using an avatar associated with himself or herself. In this way, interactivity between different users is enhanced, thereby improving the social experience and engagement.
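For illustration only, the following minimal Python sketch mirrors this flow; all names (Avatar, generate_image, on_element_usage_portal_triggered) are hypothetical and do not appear in the disclosure:

```python
from dataclasses import dataclass

@dataclass
class Avatar:
    """Media data representing appearance features of an object (e.g., a user)."""
    object_id: str
    media_data: bytes

@dataclass
class Image:
    avatar: Avatar
    element: str  # e.g., a style name

def generate_image(avatar: Avatar, element: str) -> Image:
    # Stand-in for the model-backed generation described later.
    return Image(avatar=avatar, element=element)

def on_element_usage_portal_triggered(first_image: Image, second_avatar: Avatar) -> Image:
    # Reuse the element of the browsed image for the viewer's own avatar.
    return generate_image(second_avatar, first_image.element)

# A first user's image is browsed by a second user, who triggers the portal.
first_image = generate_image(Avatar("first-object", b"..."), "style two")
second_image = on_element_usage_portal_triggered(first_image, Avatar("second-object", b"..."))
```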
Example embodiments of the present disclosure are described below with reference to the accompanying drawings.
Example Environment
FIG. 1 illustrates a schematic diagram of an example environment 100 in which embodiments of the present disclosure may be implemented. In environment 100, one or more users 110-1, 110-2, 110-3, ..., 110-N may publish and/or browse media data (which may also be referred to as multimedia content, multimedia data, media content, etc.) on a target platform through respectively associated terminal devices 120-1, 120-2, 120-3, ..., 120-N.
For ease of discussion, the users 110-1, 110-2, 110-3, ..., 110-N may be referred to collectively or individually as users 110, and the terminal devices 120-1, 120-2, 120-3, ..., 120-N may be referred to collectively or individually as terminal devices 120. In a publishing scenario, user 110 may also be referred to as a publisher of media data. In a browsing scenario, user 110 may also be referred to as a viewer or browser of media data.
The terminal device 120 may have installed therein a platform supporting the publishing and/or playing of media data. Such a platform may be, for example, an application or a website. The user 110 may operate the terminal device 120 to access the corresponding application or website. In some embodiments, the platform for publishing media data and the platform for playing media data may be the same platform or different platforms. Illustratively, the user 110-1 edits and publishes media data on a first platform, the publishing operation indicating that the media data is to be published to a second platform. The user 110-2 may then browse the media data on the second platform.
In some embodiments, an application 125 supporting the publishing of media data may be installed in the terminal device 120 (i.e., an application 125-1 installed in the terminal device 120-1, an application 125-2 installed in the terminal device 120-2, an application 125-3 installed in the terminal device 120-3, ..., and an application 125-N installed in the terminal device 120-N). It should be noted that the applications 125 installed in different terminal devices 120 may be identical applications or different applications (e.g., different versions). The application 125 may be any suitable application having media data publishing functionality, for example, a social application, a content sharing application, an office support application, and so forth.
In the environment 100 of FIG. 1, the terminal device 120 may present a user interface of the application 125 when the application 125 is in an active state. The user interface may include various types of interfaces that the application 125 can provide, such as a user interface supporting message interaction, a user interface supporting content browsing, a messaging interface, and so forth. The application 125 may provide different content to the user 110 via different user interfaces, and may also allow the user 110 to select and switch the manner of presentation of associated content in an appropriate way, such as by clicking or selecting an appropriate interface element in a user interface.
In some embodiments, different terminal devices 120 may also communicate with the server 130 via the network 132 to enable the provision of message interaction services. The server 130 may provide management, configuration, and maintenance functions for the applications 125.
The terminal device 120 may be any type of mobile terminal, fixed terminal, or portable terminal, including a mobile handset, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, media computer, multimedia tablet, Personal Communication System (PCS) device, personal navigation device, Personal Digital Assistant (PDA), audio/video player, digital camera/camcorder, television receiver, radio broadcast receiver, electronic book device, game device, or any combination of the preceding, including accessories and peripherals of these devices, or any combination thereof. In some embodiments, the terminal device 120 is also capable of supporting any type of user interface (such as "wearable" circuitry, etc.). The server 130 may be any type of computing system/server capable of providing computing power, including, but not limited to, mainframes, edge computing nodes, computing devices in cloud environments, and so forth.
It should be understood that the structure and function of the various components in environment 100 are described for illustrative purposes only and are not meant to suggest any limitation as to the scope of the disclosure. Various example implementations of the present disclosure are described in detail below.
Example interface
For ease of understanding, some example embodiments of the present disclosure are described below in connection with the example interfaces of fig. 2A-5J. Fig. 2A-5J illustrate schematic diagrams of example interfaces according to some embodiments of the present disclosure. It should be understood that the interfaces shown in the figures are merely examples, and that various interface designs may actually exist. Individual interface elements in the interface may have different arrangements and different visual representations, one or more of which may be omitted or replaced, and one or more other interface elements may also be present. Embodiments of the disclosure are not limited in this respect.
The example interfaces shown in FIGS. 2A-5J may be presented at the terminal device 120. For ease of discussion, the example interfaces/pages shown in FIGS. 2A-5J will be described with reference to the environment 100 of FIG. 1. It should be noted that, herein, the operations performed by the terminal device 120 may specifically be performed by a locally installed application. In some embodiments, certain operations may need to be accomplished with the support of a server (e.g., the server 130) or other devices. For ease of understanding, the description below takes the terminal device 120 itself performing the operations as an example.
In some embodiments, the terminal device 120 may acquire an avatar corresponding to an object based on an image, input by the user 110, that includes the object. The avatar may include media data representing appearance features of the object. The media data may include any suitable form of data, such as image data, video data, three-dimensional model data, and the like. In some embodiments, the appearance features may include facial features. In this way, avatars corresponding to different objects are distinguishable.
That is, the user 110 may create an avatar associated with himself or herself. The object herein may be any suitable object, such as a human, an animal, a cartoon character, etc.; the present disclosure is not limited to specific objects. In particular, in some embodiments, the object may be the user 110 themself. It will be appreciated that the avatars, images, etc. herein may be generated locally by the terminal device 120 or generated in the cloud by means of the server 130. For convenience of description, the following takes the user 110 inputting images that include faces as an example.
In some embodiments, the terminal device 120 may, according to an operation of the user 110, acquire an image of the created avatar having a specific element, also referred to as an element-specific image. As used herein, the term "element" or "image element" may refer to any suitable aspect or attribute of an image. For example, an element may include an image style (also simply referred to as a style), an image background, an image foreground, and so on.
In some embodiments, the element may be a style. The terminal device 120 may acquire an image of the created avatar in a certain style, also called a stylized image. That is, a stylized image may be generated and presented for the user 110. In embodiments of the present disclosure, a style or image style may include any type of style, such as, but not limited to, cartoon, watercolor, crayon, sketch, comic, and the like.
Some example embodiments of the present disclosure will be described below primarily using style as an example of an element. However, it should be understood that the embodiments described with reference to style may be applied to other types of elements.
In some embodiments, both the creation of the avatar and the generation of the element-specific image may be accomplished by the terminal device 120 and/or the server 130 via a machine learning model. The machine learning model here may be any suitable model, which is not limited by this disclosure. For example, when generating an image, the terminal device 120 may generate a prompt based on configuration information of the style to be used and provide the prompt to a model (i.e., a target model) for generating the image. The terminal device 120 may acquire an output of the target model and acquire the image based on that output.
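As a rough illustration of this prompt-based flow, the sketch below assumes hypothetical names (build_prompt, generate_stylized_image) and a stub in place of the actual target model, whose interface the disclosure does not specify:

```python
def build_prompt(style_config: dict) -> str:
    # Assemble a text prompt from the configuration of the style to be used;
    # the real prompt format is not specified in the disclosure.
    parts = [style_config.get("style_name", "")] + style_config.get("keywords", [])
    return ", ".join(p for p in parts if p)

def generate_stylized_image(target_model, avatar_media: bytes, style_config: dict) -> bytes:
    prompt = build_prompt(style_config)
    # The target model may run locally on the terminal device 120 or in the
    # cloud by means of the server 130.
    return target_model(avatar_media, prompt)

# Usage with a stub standing in for the generative model:
stub_model = lambda media, prompt: media
result = generate_stylized_image(stub_model, b"avatar-media",
                                 {"style_name": "watercolor", "keywords": ["portrait"]})
```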
Example interface to create an avatar
Example interactions for creating an avatar are described below. In some embodiments, the terminal device 120 may provide an entry associated with the avatar in a user information interface of the user 110. In the case where the user 110 has not created an avatar, the entry may be a creation portal for the avatar. FIGS. 2A-2C illustrate schematic diagrams of examples 200A-200C of a user information interface according to some embodiments of the present disclosure. As shown in FIG. 2A, the terminal device 120 may provide a user information interface as shown in example 200A. In the case where the user 110 (e.g., "user 123" in the figure) has not created an avatar, the terminal device 120 may provide the creation portal 201 for the avatar in example 200A. The terminal device 120 may display example 200B as shown in FIG. 2B in response to detecting a trigger operation for the creation portal 201. The trigger operation includes, but is not limited to, any suitable operation, such as a click operation, a double-click operation, a long-press operation, a slide operation, a drag operation, and the like. The present disclosure is not limited to a specific trigger operation.
In example 200B, the terminal device 120 may display a prompt panel 210 for prompting the creation of an avatar. The prompt panel 210 may display prompt information indicating how to create an avatar (e.g., the text "upload 5 pictures, create a digital version of yourself! With an avatar, you can quickly create pictures in various digital styles"). Example effects of avatars or of different image styles may also be displayed in the prompt panel 210 (e.g., example avatar images may be displayed). A create control 212 and a cancel control 211 may also be included in the prompt panel 210. The terminal device 120 may determine to cancel the creation of the avatar in response to receiving a trigger operation for the cancel control 211, returning to displaying example 200A. Alternatively or additionally, in some embodiments, the terminal device 120 may determine to cancel the creation of the avatar in response to receiving a trigger operation for an area outside the prompt panel 210 in example 200B, returning to displaying example 200A.
Note that, regarding the logic for displaying the prompt panel 210, in some embodiments the prompt panel 210 is displayed by default each time the terminal device 120 detects a trigger operation for the creation portal 201. In some embodiments, the terminal device 120 may instead display the prompt panel 210 only when a trigger operation for the creation portal 201 is detected for the first time or the first few times (which may be a predetermined number).
In the case where a trigger operation for the create control 212 is detected, the terminal device 120 may determine that a creation request or indication for the avatar is received and display an image acquisition interface. The image acquisition interface may include an image capture interface and/or an image selection interface. The terminal device 120 may capture an image with a camera in the image capture interface. The terminal device 120 may select images stored locally at the terminal device 120 in the image selection interface, where the locally stored images include images previously captured by the terminal device 120 via a camera, as well as images acquired by the terminal device 120 from the cloud, from other devices, and so forth.
In some embodiments, if a creation request or indication for an avatar is detected, the terminal device 120 may provide an option card including at least a first option corresponding to the image capture interface and a second option corresponding to the image selection interface. The terminal device 120 may determine that access rights to the image capture interface are acquired in response to detecting a selection operation for the first option, and display the image capture interface. If a selection operation for the second option is detected, the terminal device 120 may determine that access rights to the image selection interface are acquired and display the image selection interface. The selection operation here may include, for example, any appropriate operation such as a click operation, a double-click operation, a long-press operation, a slide operation, and the like.
Illustratively, as shown in FIGS. 2B and 2C, the terminal device 120 may display example 200C, for example, in response to detecting a trigger operation for the create control 212. An option card 220 is included in example 200C. The terminal device 120 may display an option 221 corresponding to the image capture interface and an option 222 corresponding to the image selection interface in the option card 220. In response to detecting a selection operation for the option 221, the terminal device 120 may display the image capture interface. In response to detecting a selection operation for the option 222, the terminal device 120 may display the image selection interface. The terminal device 120 may also display an option 223 in the option card 220. The terminal device 120 may determine to cancel the display of the image capture interface or the image selection interface, for example, in response to detecting a selection operation for the option 223, and return to displaying example 200B.
In some embodiments, there may be a predetermined default interface. In response to detecting the creation request or indication of the avatar, the terminal apparatus 120 may display the predetermined default interface. In response to detecting a trigger operation to any appropriate interface element in the default interface, the terminal device 120 may display other interfaces. The default interface may be, for example, an image selection interface. If a trigger operation is detected for any appropriate interface element (e.g., interface switch control) in the image selection interface, the terminal device 120 may switch to displaying the image capture interface.
In some embodiments, if a selection operation for the option 222 in example 200C is detected, the terminal device 120 may display the image selection interface. Alternatively or additionally, in some embodiments, the terminal device 120 may also display a default image selection interface in response to detecting a creation request or indication for the avatar (e.g., a trigger operation for the creation portal 201 in example 200A, a trigger operation for the create control 212 in example 200B, etc.). The terminal device 120 may display a prompt panel for providing prompt information in the image selection interface.
As shown in FIG. 2D, the terminal device 120 may display, for example, example 200D including a prompt panel 230. Example 200D may be, for example, an image selection interface that includes the prompt panel. The prompt panel 230 may display, for example, examples of suitable target images (e.g., images that satisfy the conditions related to the face region) and examples of unsuitable target images (e.g., images that do not satisfy the conditions related to the face region). An operation control 231 is also included in the prompt panel 230. The terminal device 120 may cancel the display of the prompt panel 230 if a trigger operation is detected for the operation control 231 or for any region outside the prompt panel 230 in example 200D. After the display of the prompt panel 230 is canceled, the terminal device 120 may display, for example, example 200E as shown in FIG. 2E. Example 200E may be, for example, an image selection interface that does not include the prompt panel. Multiple images with access rights may be displayed in example 200E.
Similarly, in some embodiments, each time the terminal device 120 displays the image selection interface, the image selection interface including the prompt panel 230 is displayed by default. In some embodiments, the terminal device 120 may display the image selection interface including the prompt panel 230 only when the image selection interface is displayed for the first time or the first few times (which may be a predetermined number), and thereafter display the image selection interface without the prompt panel 230.
As shown in FIG. 2E, example 200E may also include a region 233 displaying the selected images. In the case where the user has not selected an image, the terminal device 120 may not display any image in the region 233. If a selection operation (e.g., a selection operation for a captured image) is received for any one of the images displayed in the image selection interface, the terminal device 120 may determine that the image is selected. In this case, the terminal device 120 may display the selected image in the region 233.
In some embodiments, the terminal device 120 may determine whether the selected image satisfies a condition related to the face region (e.g., whether it is an image including the face of the subject). The terminal device 120 may determine whether the selected image satisfies the condition, for example, through a series of face detection models. In the case where it is determined that the selected image satisfies the condition related to the face region, the terminal device 120 may display the selected image in the region 233. In the case where it is determined that the selected image does not satisfy the condition, the terminal device 120 may display prompt information indicating that the image does not satisfy the condition related to the face region and cannot be selected.
In some embodiments, the terminal device 120 may need to acquire a plurality of images in order to ensure the quality of the created avatar. The number of the plurality of images may be any suitable number that satisfies the avatar creation requirement. The number of images and/or the range of numbers of images satisfying the avatar creation requirement may be preset. For example, if the number of images satisfying the avatar creation requirement is preset to 5, the user needs to select 5 images so that the terminal device 120 can acquire those 5 images. For another example, if the range of the number of images satisfying the avatar creation requirement is preset to 5-20, the user needs to select any number of images (e.g., 10) from 5 to 20 so that the terminal device 120 can acquire those images.
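A minimal sketch of such selection checks, assuming a hypothetical preset range of 5-20 images and a stub in place of the face detection models (all names are illustrative):

```python
MIN_IMAGES, MAX_IMAGES = 5, 20  # an assumed preset range satisfying the creation requirement

def satisfies_face_condition(image: bytes) -> bool:
    # Stand-in for the series of face detection models mentioned above.
    return bool(image)

def validate_selection(images: list) -> list:
    """Return prompt messages for violated conditions; an empty list means OK."""
    problems = []
    if not MIN_IMAGES <= len(images) <= MAX_IMAGES:
        problems.append(f"Please select between {MIN_IMAGES} and {MAX_IMAGES} images.")
    for i, img in enumerate(images):
        if not satisfies_face_condition(img):
            problems.append(f"Image {i + 1} does not satisfy the face region condition.")
    return problems

print(validate_selection([b"img"] * 3))  # too few images -> one prompt message
```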
Region 233 may also include operation controls 234. The terminal device 120 can obtain the selected set of images in response to detecting a trigger operation for the operation control 234. The terminal device 120 may, for example, display an example 200F of the interface shown in fig. 2F during acquisition of the selected set of images. Terminal device 120 may display interface element 240 in example 200F indicating that terminal device 120 is acquiring the selected set of images.
In some embodiments, the terminal device 120 may display an avatar generation interface in response to acquiring the selected set of images. In some embodiments, to make a secondary confirmation of the selected set of images and of whether to generate the avatar, the terminal device 120 may also display example 200G as shown in FIG. 2G in response to acquiring the selected set of images. The terminal device 120 may display a card 250 in example 200G for prompting confirmation of whether to generate the avatar. The card 250 includes a return control and a generation control. The terminal device 120 may cancel the display of the card 250 in response to detecting a trigger operation for the return control in the card 250. The terminal device 120 may start generating the avatar in response to detecting a trigger operation for the generation control in the card 250.
Referring back to FIG. 2E, an operation control 232 may also be included in the image selection interface shown in example 200E. The terminal device 120 may switch to displaying the image capture interface in response to detecting a trigger operation for the operation control 232. In some embodiments, the terminal device 120 may also determine that rights to access the camera are acquired and display the image capture interface in response to detecting a selection operation for the option 221 in example 200C. In some embodiments, the terminal device 120 may also display a default image capture interface in response to detecting a creation request or indication for the avatar (e.g., a trigger operation for the creation portal 201 in example 200A, a trigger operation for the create control 212 in example 200B, etc.).
The terminal device 120 may display, for example, example 200H as shown in FIG. 2H. Example 200H may be, for example, an example of the image capture interface. The terminal device 120 may display the image captured with the camera in a region 202 of example 200H. In some embodiments, the terminal device 120 may also display a frame 204 and a prompt region 203 in the region 202. The terminal device 120 may, for example, determine whether the captured image satisfies the conditions related to the face region. The conditions related to the face region may include, for example, at least a condition that the face region of the target object is located within the frame 204 in the captured image. As mentioned previously, the terminal device 120 may, for example, utilize a series of face detection models to determine whether the captured image satisfies the conditions related to the face region.
In some embodiments, if it is determined that the image captured in example 200H does not satisfy the conditions related to the face region, the terminal device 120 may display corresponding prompt information in the region 203. The prompt information may be fixed prompt information that is set in advance, for example, a message such as "please adjust the face position". The prompt information may also be determined based on which of the at least one condition is not satisfied. For example, if it is determined that the captured image does not satisfy the condition that the face region of the target object is located within the frame 204, the terminal device 120 may display a prompt such as "put your face in a predetermined area" in the region 203. As shown in FIG. 2H, in response to the face region being located outside the frame 204 (whether entirely or partially outside the frame 204), the terminal device 120 may display the prompt message "put your face in a predetermined area" in the region 203. The region 203 may be located at any suitable location of example 200H and may be displayed at the uppermost of all layers; for example, the region 203 may be displayed superimposed over the region 202.
In some embodiments, if it is determined that the captured image satisfies the conditions related to the face region, the terminal device 120 may display prompt information in the region 203 indicating that the image may be captured. As shown in FIG. 2I, if it is determined that the captured image satisfies the conditions, the terminal device 120 may display the prompt message "no problem" in the region 203 of example 200I to indicate that the current image satisfies the conditions related to the face region and that the image may be captured. The terminal device 120 may determine that an image capture operation is received in response to detecting a trigger operation for the capture control 206, and capture the image currently displayed in the region 202.
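A minimal sketch of choosing the prompt shown in the region 203, assuming bounding boxes as (x1, y1, x2, y2) tuples; the condition and messages follow the examples above, while the function names are hypothetical:

```python
def face_fully_inside_frame(face_box, frame_box) -> bool:
    fx1, fy1, fx2, fy2 = face_box
    bx1, by1, bx2, by2 = frame_box
    return fx1 >= bx1 and fy1 >= by1 and fx2 <= bx2 and fy2 <= by2

def capture_hint(face_box, frame_box) -> str:
    # Map the first unsatisfied condition to the corresponding prompt message.
    if face_box is None:
        return "please adjust the face position"
    if not face_fully_inside_frame(face_box, frame_box):
        return "put your face in a predetermined area"
    return "no problem"  # the image may now be captured

frame = (100, 100, 400, 500)
print(capture_hint((120, 150, 380, 480), frame))  # -> "no problem"
```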
After the terminal device 120 acquires the captured image, example 200J as shown in FIG. 2J may be displayed. Example 200J may display, for example, the captured image and an indicator 207. The indicator 207 is used to indicate that the currently captured image satisfies the conditions related to the face region and that the image was successfully acquired by the terminal device 120.
As shown in FIGS. 2H to 2J, the image capture interfaces shown in examples 200H, 200I, and 200J may also display an operation control 205. The terminal device 120 may switch to displaying the image selection interface in response to receiving a trigger operation for the operation control 205. The terminal device 120 may display, for example, example 200K shown in FIG. 2K, which shows one example of the image selection interface. The terminal device 120 may display the image captured through the image capture interface in the region 233 of example 200K.
It is noted that the terminal device 120 may acquire the avatar based only on a plurality of images captured via the image capture interface, based only on a plurality of images selected via the image selection interface, or based on a combination of at least one image captured via the image capture interface and at least one image selected via the image selection interface. Exactly which images the avatar is acquired from may depend on the operations of the user 110. Thus, the user 110 can flexibly select already captured images or capture new ones.
In some embodiments, if the terminal device 120 acquires the avatar based only on a plurality of images selected via the image selection interface, the terminal device 120 may determine the selected plurality of images and acquire the avatar based on those images in response to receiving a trigger operation for the operation control 234 shown in examples 200E to 200G.
In some embodiments, if the terminal device 120 acquires the avatar based on a combination of at least one image captured via the image capture interface and at least one image selected via the image selection interface, the terminal device 120 may acquire the selected plurality of images in response to receiving a selection operation for the plurality of images in example 200K. The terminal device 120 may acquire the avatar based on the image captured through the image capture interface displayed in the region 233 and the selected plurality of images.
In some embodiments, if the terminal device 120 acquires the avatar based only on a plurality of images captured via the image capture interface, the terminal device 120 may display examples 200L to 200O as shown in FIGS. 2L to 2O. Examples 200L to 200O illustrate several examples of the image capture interface. It can be appreciated that, in order to ensure the quality of an avatar generated only from images captured via the image capture interface, the terminal device 120 may need to acquire images including faces at different angles. Examples 200L through 200O may display a region 208. The region 208 may display a prompt indicating that a plurality of images including faces at different angles are to be acquired (it will be appreciated that although 3 are shown, the plurality of images here may be any number). The terminal device 120 may also display prompt information in the region 203 indicating that faces at different angles are to be acquired. For example, if an image including the right side of the face is to be acquired, the terminal device 120 may display the prompt message "turn your face to the right" in the region 203. The terminal device 120 may also display the acquired images at corresponding locations in the region 208. The terminal device 120 may then acquire the avatar based on the plurality of images displayed in the region 208.
If a trigger operation for the operation control 234 is received, the terminal device 120 may determine that an operation indicating creation of an avatar is received and start acquisition of the avatar. The terminal device 120 may create an avatar by means of a machine learning model, which may be deployed locally at the terminal device 120 or at the server 130. For example, a plurality of images specified by the user 110 may be provided to a machine learning model for generating an avatar.
As described above, in some embodiments of the present disclosure, the user 110 may not be required to distinguish between the selected images, e.g., the user need not specify which of the images are frontal images of the object. This makes it more convenient for the user to specify images.
Note that, in the case where the acquired plurality of images each include the face of the same object (e.g., a first object) and each include only the face of that object, the terminal device 120 may generate an avatar (e.g., a first avatar) corresponding to that object based on the acquired images. In the case where the acquired plurality of images include faces of a plurality of objects, the terminal device 120 may determine priorities corresponding respectively to the plurality of objects. The terminal device 120 may determine the priority corresponding to each object based on, for example, the number of images including that object's face. For example, if 10 images are acquired in total, 6 of which include the face of a first object, 5 include the face of a second object, and 3 include the face of a third object (it will be understood that faces of multiple objects may appear in the same image, and that the first, second, and third objects are three different objects), the terminal device 120 may determine that the priority of the first object > the priority of the second object > the priority of the third object. The terminal device 120 may then generate an avatar corresponding to the first object.
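A minimal sketch of this priority rule, assuming face detection has already mapped each image to the object IDs whose faces it contains (the helper name is hypothetical):

```python
from collections import Counter

def pick_primary_object(images_faces):
    """images_faces: one list of object IDs per acquired image.

    The object whose face appears in the most images gets the highest
    priority and becomes the subject of the generated avatar.
    """
    counts = Counter()
    for faces in images_faces:
        counts.update(set(faces))  # count each object at most once per image
    return counts.most_common(1)[0][0]

# The example above: 10 images; obj1's face in 6, obj2's in 5, obj3's in 3.
images_faces = ([["obj1"]] * 4 + [["obj1", "obj2"]] * 2 + [["obj2"]]
                + [["obj2", "obj3"]] * 2 + [["obj3"]])
assert pick_primary_object(images_faces) == "obj1"
```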
While generating the avatar, the terminal device 120 may display example 200P shown in FIG. 2P. Example 200P illustrates an example of a creation progress interface for the avatar. Example 200P includes prompt information indicating the progress of creation of the avatar, for example, the prompt information "estimated time: XX hours XX minutes". An operation control 251 and an operation control 252 may also be included in example 200P. The terminal device 120 may determine to cancel the generation of the avatar if a trigger operation for the operation control 252 is received. The terminal device 120 may, for example, switch back to displaying example 200A as shown in FIG. 2A. In order to avoid accidental touches, the terminal device 120 may also display a card prompting confirmation of whether to cancel the creation of the avatar. As shown in FIG. 2Q, in example 200Q, the terminal device 120 may display a card 260. The terminal device 120 may determine to continue generating the avatar upon detecting a trigger operation for a return control in the card 260. The terminal device 120 may cancel the generation of the avatar upon detecting a trigger operation for a cancel control in the card 260. The terminal device 120 may then, for example, switch back to displaying example 200A as shown in FIG. 2A, or display example 200R as shown in FIG. 2R. An operation control 261 is included in example 200R. The terminal device 120 may repeat the above-described operations of acquiring images and generating an avatar, for example, in response to detecting a trigger operation for the operation control 261.
Referring back to FIG. 2P, in the event that a trigger operation for the operation control 251 is detected, the terminal device 120 may switch to displaying other interfaces that the application 125 can provide. Illustratively, in response to detecting a trigger operation for the operation control 251, the terminal device 120 may switch to displaying the content recommendation interface (e.g., example 200S) shown in FIG. 2S. It will be appreciated that the terminal device 120 may also display any suitable interface in response to detecting a trigger operation for the operation control 251, which is not limited by this disclosure.
A portal 271 may be included in example 200S, for example. The terminal device 120 may display a prompt message (e.g., the text "generating your avatar ...") at the portal 271 indicating that an avatar is being generated. In some embodiments, the terminal device 120 may also display creation progress information of the avatar (e.g., the first avatar) at the portal 271. For example, the terminal device 120 may display the text "created: XX%" at the portal 271 to show the creation progress of the first avatar. If a trigger operation for the portal 271 is detected, the terminal device 120 can switch back to displaying example 200P.
After the creation of the avatar is completed, the terminal device 120 may also provide a push notification to indicate that the creation of the avatar is complete. The push notification may include a first viewing portal for viewing the created avatar. In some embodiments, the terminal device 120 may provide first viewing portals of different styles. Illustratively, if the user interface currently displayed by the terminal device 120 is an interface provided by the application 125, the terminal device 120 may provide a first viewing portal of a first style; if the user interface currently displayed by the terminal device 120 is not an interface that the application 125 can provide, the terminal device 120 may provide a first viewing portal of a second style.
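A minimal sketch of this style choice, with hypothetical identifiers standing in for the notification payload and the check of the currently displayed interface; FIGS. 2T and 2U below illustrate the two cases:

```python
from dataclasses import dataclass

@dataclass
class PushNotification:
    text: str
    portal_style: str  # "first" within the application's own UI, "second" otherwise

def avatar_ready_notification(foreground_interface_owner: str, app_id: str) -> PushNotification:
    style = "first" if foreground_interface_owner == app_id else "second"
    return PushNotification("Your avatar has been created", portal_style=style)

print(avatar_ready_notification("app-125", "app-125").portal_style)  # -> "first"
```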
As shown in FIGS. 2T and 2U, example 200T may be, for example, a content recommendation interface provided by the application 125, and example 200U may be, for example, an interface not provided by the application 125 (i.e., other than the interfaces the application 125 can provide). The terminal device 120 may provide, for example, a first viewing portal 272 in example 200T and a first viewing portal 273 in example 200U. The terminal device 120 may then display the created avatar upon detecting a trigger operation for the first viewing portal 272 or the first viewing portal 273.
In some embodiments, after the avatar creation is completed, the terminal device 120 may further provide a second viewing portal in a user interface for viewing the created avatar. The user interface including the second viewing portal may be, for example, a message interface of the user. As shown in FIGS. 2V and 2W, examples 200V and 200W may be, for example, message interfaces of the user. The terminal device 120 may, for example, display a notification message view entry 274 in the message interface shown in example 200V in response to the avatar having been generated. In some embodiments, the notification message view entry 274 may also be fixedly displayed in example 200V, with the terminal device 120 displaying a message prompt (e.g., the number 1) at the notification message view entry 274 in response to the avatar having been generated. The terminal device 120 displays the message interface shown in example 200W in response to detecting a trigger operation for the notification message view entry 274. A second viewing portal 275 is included in example 200W. In response to detecting a trigger operation for the second viewing portal 275, the terminal device 120 displays the created avatar.
In some embodiments, the terminal device 120 may also display the avatar information interfaces shown in FIGS. 2X to 2Z (i.e., examples 200X to 200Z). The terminal device 120 may display the avatar information interface, for example, in response to the avatar having been generated or in response to a trigger operation for viewing information about the avatar. The avatar information interface may include at least one of: viewing portals corresponding to at least one style (e.g., portal 276-1, portal 276-2, portal 276-3, portal 276-4, etc.; the present disclosure does not limit the number of styles), and a startup interface portal (e.g., portal 277) for stylized image generation. If the terminal device 120 detects a trigger operation for a viewing portal corresponding to a certain style (e.g., portal 276-1), the terminal device 120 may display a style information interface for that style (as will be described below). If the terminal device 120 detects a trigger operation for the startup interface portal (e.g., portal 277), the terminal device 120 may switch to displaying a startup interface for stylized image generation, as will be described below.
The terminal device 120 may also display a portal 278 in example 200X to provide further operations for the avatar. The terminal device 120 may display example 200Y of the interface shown in FIG. 2Y in response to detecting a trigger operation for the portal 278. Example 200Y includes a panel 280. A control 281, a control 282, and a control 283 may be included in the panel 280, for example. The terminal device 120 may display a user interface including more operation controls for the avatar in response to receiving a selection operation for the control 281. The terminal device 120 may cancel the display of the panel 280 in response to receiving a selection operation for the control 283, and may switch back to, for example, the interface shown in example 200X. The terminal device 120 may determine that a delete operation for the avatar is received in response to receiving a selection operation for the control 282. To avoid accidental touches, the terminal device 120 may switch to the interface shown in example 200Z, in which a card 290 is shown. The card 290 includes a prompt for determining whether to delete the avatar. The terminal device 120 may determine to delete the avatar in response to receiving a trigger operation for a delete control in the card 290. The terminal device 120 may determine to cancel the deletion of the avatar in response to receiving a trigger operation for a return control in the card 290, and may return to displaying, for example, example 200X or example 200Y.
After creating the avatar, an image having a certain element, also referred to as an element-specific image, may be generated using the avatar associated with the user. For example, an avatar associated with a user may be used to generate an image having a certain style, also referred to as a stylized image. In some embodiments, an image of the avatar having a predetermined element may be generated according to an element selected by the user from a plurality of predetermined elements. In some embodiments, an image of the avatar having a custom element may be generated according to customization information from the user. In some embodiments, user adjustment of predetermined elements or custom elements may also be supported. A predetermined element or a custom element may be regarded as a parent element, and an element obtained by adjusting a parent element may be regarded as a child element of that parent element.
Example interface to reuse elements
In some embodiments, if a user publishes an image (which may be in the form of a picture or a video) having an element, other users may be supported in using this element to generate images of avatars associated with themselves. For example, a first user may publish a fourth image of a first avatar having a third element.
Example embodiments of reusing elements will be described below with style as an example, but it should be understood that the embodiments described with reference to style apply to other elements as well. For descriptive purposes only, it is assumed that the first user is associated with the first avatar of a first object (e.g., the first object may be the first user), and that a second user, different from the first user, is associated with a second avatar of a second object (e.g., the second object may be the second user). The first user publishes a first image of the first avatar having a first style.
The terminal device 120 of the second user may present the first image of the first avatar having the first style, together with a style usage portal for the first style. In some embodiments, the first image and the style usage portal may be presented in a content recommendation interface. For example, the first image and the style usage portal may be presented to the second user as the second user browses an information stream or browses the published content of a followed user. Then, based on a trigger of the style usage portal, the terminal device 120 may present a second image of the second avatar having the first style. That is, if a user browses to an image whose style they like, a new image can be generated in that style.
In some embodiments, the first style may be a parent style, or a child style obtained by partially adjusting a parent style. For example, the parent style may be a predetermined style in the application 125. As another example, the parent style may be a custom style, as described above. Child styles may be generated by the style adjustment procedure described above.
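A minimal sketch of one possible parent/child style structure and the naming convention illustrated in FIG. 3C ("style two-user 345"); the field names are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Style:
    name: str
    parent: Optional["Style"] = None   # None for a parent (predetermined or custom) style
    adjusted_by: Optional[str] = None  # user who adjusted the parent style

    def display_name(self) -> str:
        # A child style's name carries a suffix identifying who adjusted it,
        # e.g., "style two-user 345" as shown in FIG. 3C.
        if self.parent is not None and self.adjusted_by:
            return f"{self.parent.display_name()}-{self.adjusted_by}"
        return self.name

style_two = Style(name="style two")
child = Style(name="adjusted style", parent=style_two, adjusted_by="user 345")
assert child.display_name() == "style two-user 345"
```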
In some embodiments, if the style usage portal is triggered, the terminal device 120 may present a style information interface for the first style, such as a style details page. The style information interface may include a first image generation portal for the first style, or respective viewing portals for at least one image having the first style. Based on an operation associated with the style information interface, the terminal device 120 may acquire the second image of the second avatar having the first style. That is, based on an operation associated with the style information interface, the second user may trigger the creation of an image having the same or a similar style with the avatar associated with himself or herself.
Some example interfaces are described with reference to FIGS. 3A-3I. An example 300A of a content recommendation interface is shown in FIG. 3A. The interface includes an image 301 of the first avatar having style two (an example of the first style) and a style usage entry 302 for style two. If the style usage entry 302 is triggered, the terminal device 120 may present example 300B of FIG. 3B, which is an example of the style information interface. Example 300B illustrates a style information interface for style two. The style information interface includes basic information about the style, such as the name displayed in region 306 and the number of works published.
The style information interface may include a first image generation portal for the first style, such as the first image generation portal 305. In some embodiments, if the first image generation portal 305 is triggered, the terminal device 120 may acquire the second image. That is, the terminal device 120 may generate an image having the first style using the second avatar.
The style information interface may include viewing portals, such as the viewing portal 303 and the viewing portal 304 shown in FIG. 3B, for respective images having the first style. If a viewing portal is triggered, a viewing interface for the corresponding image may be presented, which may include another image generation portal for the first style, also referred to as a second image generation portal. For example, if the viewing portal 303 is triggered, the terminal device 120 may present the image viewing interface shown in example 300C of FIG. 3C. The image viewing interface includes the corresponding image, a style usage portal 307, and a second image generation portal 308. If the second image generation portal 308 is triggered, the terminal device 120 may acquire the second image.
It should be noted that, in some embodiments, if the first style is a parent style, the images corresponding to the viewing portals included in the style information interface may be images having the parent style and/or images having a child style under the parent style. For example, style two is a parent style. The images in the style information interface shown in example 300B may have style two or a sub-style of style two. For example, the image corresponding to the viewing portal 303 has the sub-style "style two-user 345". The suffix "-user 345" in the style name indicates that "user 345" adjusted style two to obtain the sub-style. In this case, the style name displayed in the style usage portal 307 is the name of the sub-style, as shown in fig. 3C.
In some embodiments, if the first style is a child style, the images corresponding to the viewing portals included in the style information interface may be images having the child style and/or images having the parent style of the child style. In this case, the name displayed in region 306 is the name of the sub-style.
The generation of the stylized image is now described further. If either the first image generation portal 305 or the second image generation portal 308 is triggered, the terminal device 120 may trigger the generation of the second image. If the second user does not yet have an associated avatar, i.e., if the second avatar has not been created, the terminal device 120 may present a portal for creating an avatar or an introduction interface about avatars. For example, the terminal device 120 may present the interface shown in fig. 2A or fig. 2B, thereby guiding the second user to create an associated second avatar. In this case, even if creating the second avatar takes a long time, the second image of the second avatar having the first style may be generated automatically after the creation of the second avatar is completed, without requiring further action from the second user. After the second image is generated, a notification or prompt may be sent to the second user in any suitable manner to let the second user view the second image. Such notifications or prompts may include, but are not limited to, a notification in the content recommendation interface, a message in an inbox, a system message on the terminal device 120, and so forth.
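For purposes of illustration only, the following Python sketch shows one possible realization of the deferred-generation behavior described above, where the stylized image is produced automatically once avatar creation completes and the user is then notified. The asyncio-based structure and the names create_avatar, queue_notification, and create_then_generate are assumptions made for this example, not the disclosed implementation.

import asyncio

async def create_avatar(user: str) -> str:
    await asyncio.sleep(2)  # stands in for a long-running avatar creation task
    return f"<avatar of {user}>"

def queue_notification(user: str, message: str) -> None:
    # Could be a notification in the content recommendation interface,
    # a message in an inbox, or a system message on the terminal device.
    print(f"notify {user}: {message}")

async def create_then_generate(user: str, style_name: str) -> None:
    avatar = await create_avatar(user)               # user guided through creation
    image = f"<image of {avatar} in {style_name}>"   # generated without further action
    queue_notification(user, f"Your image is ready: {image}")

asyncio.run(create_then_generate("user 567", "style two"))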
In some embodiments, if the second user already has a created avatar, the terminal device 120 may allow the second user to select whether to use the created avatar or to create a new avatar and use the newly created one. In some embodiments, if the second user has a plurality of created avatars, the terminal device 120 may allow the second user to select the avatar to be used from among the created avatars.
In some embodiments, if the second user already has a created second avatar, the terminal device 120 may directly trigger generation of the second image using the second avatar. For example, if the first image generation portal 305 is triggered, the terminal device 120 may present the interface shown in example 300D of fig. 3D, indicating that generation of the second image has been triggered.
In some embodiments, while generating the second image, the terminal device 120 may present a generation progress interface indicating the generation progress of the image, such as example 300E shown in fig. 3E and example 300F shown in fig. 3F. The generation progress interface may also include a confirmation control, such as the control 309 in example 300E and the control 310 in example 300F. If a trigger for the confirmation control in the generation progress interface is received, the terminal device 120 can switch to presenting other interfaces provided by the application 125. Illustratively, the terminal device 120 can switch to presenting the content recommendation interface of the application 125 in response to detecting a trigger operation for the control 309 or 310.
After the second image is generated, or in response to a viewing operation by the second user, the terminal device 120 may present a content preview interface for the second image; example 300G of fig. 3G is one example of a content preview interface. The content preview interface can include a plurality of second images. Via the content preview interface, the second user may further edit the images or delete one or more of them.
In some embodiments, the terminal device 120 may publish the second image based on an operation of the second user. The content preview interface shown in example 300G also includes an operation control 313 and an operation control 312. The terminal device 120 may trigger regeneration of the second image, for example, in response to receiving a trigger operation for the operation control 313. The terminal device 120 may, for example, switch to presenting the example 300H shown in fig. 3H in response to receiving a trigger operation for the operation control 312. Example 300H illustrates an example of a content editing interface for the second image.
The content editing interface shown in example 300H includes options for the publication format of the second image. For example, an operation control 314 may be included in example 300H. The operation control 314 includes an option "picture" and an option "video". The terminal device 120 may determine to publish the second image in a picture format in response to receiving a trigger operation for the option "picture", and in a video format in response to receiving a trigger operation for the option "video". The content editing interface shown in example 300H may also include an operation control 315. The terminal device 120 may determine that an edit confirmation is received (i.e., editing of the second image is completed) in response to receiving a trigger operation for the operation control 315, and switch to presenting a content publishing interface for the second image. Fig. 3I shows an example of the content publishing interface for the second image (i.e., example 300I). Example 300I includes a publish control 317, and the terminal device 120 may determine that a publication confirmation is received and publish the second image in response to receiving a trigger operation for the publish control 317. In some embodiments, the content publishing interface shown in example 300I may also include a style usage portal 316 for the first style (style two in this example). It should be appreciated that if a sub-style is used, the style usage portal 316 presents the name of the sub-style, such as "style two-user 345".
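For purposes of illustration only, the following Python sketch shows one possible shape of the publish step described above, including the "picture"/"video" format option and the style usage portal attached to the published content. The function publish_content and the returned dictionary layout are assumptions made for this example, not an actual API.

def publish_content(image: str, fmt: str, style_name: str) -> dict:
    """Publish the second image in the chosen format, attaching a style
    usage portal named after the style (or sub-style) that was used."""
    if fmt not in ("picture", "video"):  # mirrors operation control 314
        raise ValueError("format must be 'picture' or 'video'")
    return {
        "content": image,
        "format": fmt,
        "style_usage_portal": style_name,  # e.g. "style two-user 345"
    }

post = publish_content("<second image>", "picture", "style two-user 345")
print(post)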
In some embodiments, the second user may be supported in adjusting the style and generating an image of the second avatar having the adjusted first style. For example, the terminal device 120 may present a style adjustment interface in response to a style adjustment indication for the second image of the second avatar having the first style. The terminal device 120 may receive adjustment information for the first style via the style adjustment interface and present a fourth image of the second avatar having the adjusted first style based on the received adjustment information. That is, if the second user is not satisfied with the generated second image or has particular preferences, the first style may be adjusted, and an image of the second avatar having the adjusted first style may be generated based thereon.
In some embodiments, the content preview interface described above may include a style adjustment portal. For example, the content preview interface shown in example 300G may include the portal 311. The terminal device 120 may determine that a style adjustment indication for the second image of the second avatar having the first style is received in response to detecting a trigger operation for the style adjustment portal. The terminal device 120 may present the style adjustment interface in response to the style adjustment indication for the second image.
Note that if the second user makes an adjustment to the first style, i.e., a new sub-style is generated, the style name displayed in the style usage portal after the image is published may include information about the second user. For example, if the second user is "user 567" and adjusts style two, the style name displayed in the style usage portal may be "style two-user 567".
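For purposes of illustration only, the naming convention just described can be sketched in Python as follows; derive_substyle_name is a hypothetical helper introduced for this example.

def derive_substyle_name(parent_name: str, adjusting_user: str) -> str:
    """A sub-style produced by a user's adjustment carries the parent
    style's name suffixed with the adjusting user's identifier."""
    return f"{parent_name}-{adjusting_user}"

assert derive_substyle_name("style two", "user 345") == "style two-user 345"
assert derive_substyle_name("style two", "user 567") == "style two-user 567"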
In such embodiments, elements of content published by other users can be reused. This helps enhance interactivity between different users.
Example interface to adjust image elements
As mentioned above, in some embodiments, a user may be supported in adjusting image elements and generating an image of the avatar having the adjusted elements. The terminal device 120 may present an element adjustment interface in response to an element adjustment indication for the second image of the second avatar having the first element. The terminal device 120 may receive adjustment information for the first element via the element adjustment interface and present a fourth image of the second avatar having the adjusted first element based on the received adjustment information. That is, if the user is not satisfied with the generated second image having the first element or has particular preferences, the first element may be adjusted, and an image of the second avatar having the adjusted first element may be generated based thereon. In some embodiments, if the first element is a predetermined element, the first element may be regarded as a parent element and the adjusted first element may be regarded as a child element of the parent element.
Example embodiments of adjusting image elements will be described using style as an example, but it should be understood that the embodiments described with reference to style apply to other elements as well. In some embodiments, a user may be supported in adjusting the style and generating an image of the avatar having the adjusted style. For example, the terminal device 120 may present a style adjustment interface in response to a style adjustment indication for the second image of the second avatar having the first style. The terminal device 120 may receive adjustment information for the first style via the style adjustment interface and present a fourth image of the second avatar having the adjusted first style based on the received adjustment information. That is, if the user is not satisfied with the generated second image having the first style or has particular preferences, the first style may be adjusted, and an image of the second avatar having the adjusted first style may be generated based on the adjusted first style. In some embodiments, if the first style is a predetermined style, the first style may be regarded as a parent style and the adjusted first style may be regarded as a child style of the parent style.
In some embodiments, the content preview interface described previously (e.g., example 300G shown in fig. 3G) may include a style adjustment portal (e.g., the portal 311). The terminal device 120 may determine that a style adjustment indication for the second image of the second avatar having the first style is received in response to detecting a trigger operation for the style adjustment portal. The terminal device 120 may present a style adjustment interface in response to the style adjustment indication for the second image. The style adjustment interface may include, for example, at least one of: a first input control (e.g., a text input box, a voice input control, etc.) for receiving adjustment information, first prompt information for the adjustment information, or the second image. The first prompt information indicates an adjustable portion of the first style.
Fig. 4A shows an example of a style adjustment interface (i.e., example 400A). As shown in fig. 4A, example 400A may include a second image 410. By presenting the second image 410, the terminal device 120 allows the user to adjust the configuration information of the first style with the second image 410 as a reference. In this way, the user can adjust the style intuitively. Example 400A may also include a first input control 420, a region 430, and an input panel 440. The user may enter adjustment information for the first style through the first input control 420. Alternatively, at least a portion of the current configuration information of the first style may be presented in the first input control 420. The user may modify the configuration information displayed in the first input control 420.
In some embodiments, if a preset operation for the style adjustment interface is received, the terminal device 120 may stop displaying the second image in the style adjustment interface; in other words, the second image is no longer displayed in the style adjustment interface. In addition, the terminal device 120 may increase the presentation area of the first input control. As shown in fig. 4A and 4B, the terminal device 120 can present the interface shown in example 400B, for example, in response to receiving an upward drag operation for the operation control 401 in example 400A. The second image 410 is no longer presented in example 400B, and the presentation area of the first input control 420 in example 400B is greater than that in example 400A.
The region 430 may include a cancel control 402 and a regenerate control 403. The terminal device 120 may acquire the adjustment information input by the user via the input panel 440. If a trigger operation is received for the cancel control 402, the terminal device 120 may determine to cancel the adjustment of the configuration information of the first style. The terminal device 120 may, for example, return to presenting the content preview interface. The terminal device 120 may also determine the adjusted first style based on the adjusted configuration information of the first style, and acquire a fourth image of the second avatar having the adjusted first style, in response to detecting a trigger operation for the regenerate control 403. In some embodiments, the terminal device 120 may utilize a target model to generate the fourth image. For example, a prompt may be generated based on the adjusted configuration information of the first style and provided to the target model. As another example, the adjusted configuration information of the first style and the second avatar may be provided to the target model as image generation conditions. Embodiments of the disclosure are not limited in this respect.
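For purposes of illustration only, the following Python sketch shows one way a prompt could be assembled from the adjusted style configuration and provided to the target model, under the assumption of a text-conditioned image model. TargetModel, its generate method, and build_prompt are hypothetical placeholders introduced for this example.

from dataclasses import dataclass

@dataclass
class TargetModel:
    name: str = "target-model"

    def generate(self, prompt: str, avatar_media: bytes) -> str:
        # Placeholder: a real model would condition on both the prompt
        # and the avatar's media data and return image data.
        return f"<image conditioned on '{prompt}'>"

def build_prompt(fixed_config: str, adjustment: str) -> str:
    """Combine the style's fixed configuration with the user's
    adjustment information into a single prompt."""
    return f"{fixed_config}, {adjustment}"

model = TargetModel()
prompt = build_prompt("watercolor, soft light", "add a blue background")
fourth_image = model.generate(prompt, avatar_media=b"")
print(fourth_image)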
The terminal device 120 may begin acquiring the fourth image, for example, in response to receiving a trigger operation for the regenerate control 403. The terminal device 120 may, for example, present the example 400C shown in fig. 4C. Example 400C is an example of a creation progress interface for the fourth image. Before the fourth image is acquired, the terminal device 120 may present indication information about the generation progress based on the generation progress of the fourth image. For example, the terminal device 120 may present the text "XX%" and the text "estimated time: XX hours XX minutes" in the interface shown in example 400C. The terminal device 120 can switch to presenting other interfaces provided by the application 125, for example, in response to receiving a trigger operation for a confirmation control (e.g., the control 404) in example 400C. Illustratively, the terminal device 120 can switch to presenting the content recommendation interface of the application 125 in response to detecting a trigger operation for the control 404.
In response to the fourth image being generated, the terminal device 120 may present the example 400D shown in fig. 4D. Example 400D may be an example of a content preview interface for the fourth image. As shown in fig. 4D, the terminal device 120 may present the fourth image in example 400D; in this example, a plurality of fourth images are presented. The content preview interface may include a style adjustment portal (e.g., the portal 405). The terminal device 120 may determine that a style adjustment indication for the fourth image of the second avatar having the adjusted first style is received in response to detecting a trigger operation for the style adjustment portal. That is, the user may readjust the adjusted first style. Example 400D also includes an operation control 406 and an operation control 407. The terminal device 120 may, for example, regenerate all of the fourth images in example 400D in response to receiving a trigger operation for the operation control 406. The terminal device 120 may, for example, switch to presenting the example 400E shown in fig. 4E in response to receiving a trigger operation for the operation control 407.
Example 400E is an example of a content editing interface for the fourth image. The interface shown in example 400E includes options for the publication format of the fourth image. For example, an operation control 404 is shown in example 400E. The operation control 404 includes an option "picture" and an option "video". The terminal device 120 may determine to publish the fourth image in a picture format in response to receiving a trigger operation for the option "picture", and in a video format in response to receiving a trigger operation for the option "video". An operation control 409 may also be included in example 400E. The terminal device 120 may determine that an edit confirmation is received (i.e., editing of the fourth image is completed) in response to a trigger operation for the operation control 409, and switch to presenting a content publishing interface for the fourth image. The terminal device 120 may receive a publication confirmation via the content publishing interface for the fourth image and, in response, publish the fourth image. The content publishing interface for the fourth image is similar to that for the second image described above, and is thus not described again.
Examples of adjusting the configuration information of the first style and acquiring the fourth image based on the adjusted configuration information are described above. Examples of style adjustment interfaces (i.e., examples 500A through 500J) are described in detail below in conjunction with fig. 5A through 5J.
The terminal device 120 may, for example, present a style adjustment interface as shown in example 500A in response to receiving a trigger operation for a style adjustment portal (e.g., the portal 311) in a content preview interface (e.g., example 300G shown in fig. 3G). As shown in fig. 5A, example 500A may include a region 510. The region 510 may display at least a portion of the fixed configuration information of the first style. Note that in the case where the first style includes fixed configuration information, the terminal device 120 may generate a prompt based on the adjustment information for the first style and the fixed configuration information. The terminal device 120 may provide the prompt to the model used to generate the image (i.e., the target model), acquire an output of the target model, and acquire the fourth image based on the output.
Example 500A may also include a first input control 520, a region 530, and an input panel 540. The terminal device 120 may acquire the adjustment information input by the user via the input panel 540. In some embodiments, first prompt information for the adjustment information may be presented in the first input control 520. The first prompt information indicates an adjustable portion of the first style. The first prompt information may be, for example, the text "Add colors, objects, shapes, and other descriptions to customize your avatar" shown in example 500A.
Example 500A also includes a cancel control 501. The terminal device 120 may determine to cancel adjusting the configuration information of the first style in response to receiving a trigger operation for the cancel control 501. The terminal device 120 may, for example, return to presenting the content preview interface. The region 530 may include a generation control 502. The terminal device 120 may determine the adjusted first style based on the received adjustment information and the fixed configuration information, and acquire a fourth image of the second avatar having the adjusted first style, in response to detecting a trigger operation for the generation control 502. The terminal device 120 may begin acquiring the fourth image, for example, in response to receiving a trigger operation for the generation control 502.
As shown in fig. 5B, the terminal device 120 may present the acquired adjustment information in the first input control 520 in example 500B. The terminal device 120 may also present all of the fixed configuration information of the first style, for example, in response to receiving a trigger operation for the region 510. Illustratively, the terminal device 120 can present the interface shown in example 500C in response to receiving a trigger operation for the region 510 in example 500B. All of the fixed configuration information of the first style is included in the region 510 in example 500C.
In the case where adjustment information is acquired via the first input control 520, the terminal device 120 may also determine the position of the input indicator in the first input control 520. The input indicator may be, for example, the symbol "|" in the first input control 520 of fig. 5B. The terminal device 120 may present the menu 503 shown in example 500D in response to the input indicator being located inside the adjustment information. The menu 503 may include a plurality of operation options for the adjustment information, such as an option "paste", an option "select", an option "select all", and the like. The terminal device 120 may also present the menu 504 shown in example 500E, for example, in response to determining that a portion of the adjustment information is selected. The menu 504 may include a plurality of operation options for the selected portion of the adjustment information, such as an option "cut", an option "copy", an option "find", and the like.
The terminal device 120 may also present the interface shown in example 500F, for example, in response to receiving a preset operation for the style adjustment interface (e.g., a dismiss operation for the input panel 540 in example 500B). In example 500F, the input panel 540 is no longer presented, and the presentation area of the first input control 520 is increased. The terminal device 120 can also present the interface shown in example 500G, for example, in response to receiving a preset operation (e.g., a delete operation) for the first input control 520. The interface of example 500G includes a card 505. The card 505 includes prompt information asking the user to confirm whether to delete the adjustment information. The card 505 also includes a cancel control and a discard control. The terminal device 120 may cancel deleting the adjustment information in response to receiving a trigger operation for the cancel control in the card 505, and may delete the received adjustment information presented in the first input control 520 in response to receiving a trigger operation for the discard control in the card 505.
In some embodiments, if an image generation indication associated with the adjustment information is received, the terminal device 120 may verify the received adjustment information. In other words, the terminal device 120 may determine whether the adjustment information satisfies a preset condition. The preset condition may relate to any suitable attribute of the adjustment information, and may include, for example, whether the grammar of the text is correct, whether the wording is appropriate, whether the number of words is less than a threshold, and so on. The terminal device 120 may determine that an image generation indication is received and verify the received adjustment information, for example, in response to receiving a trigger operation for the generation control 502. While verifying the adjustment information, the terminal device 120 may present, for example, an interface as shown in example 500H. In example 500H, the terminal device 120 can present prompt information in the generation control 502 indicating that verification is in progress. The terminal device 120 may also indicate that verification is in progress, for example, by presenting a particular interface element.
If the adjustment information satisfies the preset condition, that is, passes verification, the terminal device 120 may acquire the fourth image based on the adjustment information and the fixed configuration information. If the adjustment information does not satisfy the preset condition, i.e., fails verification, the terminal device 120 may present corresponding prompt information indicating that the adjustment information failed verification. As shown in fig. 5I, if the adjustment information fails verification, the terminal device 120 may present example 500I. The terminal device 120 may present prompt information such as "Unable to process this prompt, please edit and retry" in the region 530 of example 500I to inform the user that the adjustment information failed verification.
In some embodiments, the terminal device 120 may also verify the adjustment information directly by default in response to receiving the adjustment information. As shown in fig. 5H, the terminal device 120 may also present example 500H if the adjustment information fails verification. The terminal device 120 may present a popup 506 in example 500H and may, for example, present prompt information in the popup, such as "You have reached the upper limit of XX", to inform the user that the adjustment information failed verification.
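For purposes of illustration only, the verification step described above can be sketched in Python as follows. The concrete checks and thresholds (MAX_WORDS, a banned-word list) are assumptions made for this example; the embodiments only require that some preset condition be checked.

MAX_WORDS = 200
BANNED_WORDS = {"<inappropriate>"}  # hypothetical placeholder list

def verify_adjustment(adjustment: str) -> tuple:
    """Return (passed, message) for the user's adjustment information."""
    words = adjustment.split()
    if not words:
        return False, "Unable to process this prompt, please edit and retry"
    if len(words) >= MAX_WORDS:
        return False, f"You have reached the upper limit of {MAX_WORDS} words"
    if any(w in BANNED_WORDS for w in words):
        return False, "Unable to process this prompt, please edit and retry"
    return True, "ok"

passed, message = verify_adjustment("add a blue background")
if passed:
    print("generating fourth image ...")
else:
    print(f"prompt shown to the user: {message}")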
Verifying the adjustment information input by the user ensures the validity of the adjustment. In this way, the user can adjust the style conveniently.
In such embodiments, the user may adjust an existing style to obtain a style that meets their own preferences. In this way, the threshold for style use is lowered, while the diversity of styles is also ensured.
Although in the above description, the request, indication, or selection from the user is received or detected through an interface element such as a control, portal, etc., it should be understood that this is merely exemplary. In some embodiments, there may be no such interface elements, but rather the request, indication, or selection may be given in natural language (including text or speech).
Example procedure
Fig. 6 illustrates a flow chart of a process 600 for image generation according to some embodiments of the present disclosure. Process 600 may be implemented at terminal device 120. The process 600 is described below with reference to fig. 1.
At block 610, the terminal device 120 presents a first image of a first avatar having a first element and an element usage portal of the first element, the first avatar including media data representing an appearance feature of a first object.
At block 620, the terminal device 120 presents, based on a trigger for the element usage portal, a second image of a second avatar having the first element, the second avatar including media data representing an appearance feature of a second object.
In some embodiments, in response to a trigger for the element usage portal, an element information interface of the first element is presented, the element information interface including at least one of: a first image generation portal for the first element, or a respective viewing portal for each of at least one image having the first element; the second image is acquired based on an operation associated with the element information interface; and in response to acquiring the second image, the second image is presented.
In some embodiments, acquiring the second image includes: in response to a trigger for the first image generation portal, acquiring the second image based on the first element and the second avatar.
In some embodiments, acquiring the second image includes: in response to a trigger for a viewing portal of a third image of the at least one image, presenting a viewing interface of the third image, the viewing interface including a second image generation portal for the first element; and in response to a trigger for the second image generation portal, acquiring the second image based on the third element possessed by the third image and the second avatar.
In some embodiments, the process 600 further comprises: in response to an element adjustment indication for the second image, presenting an element adjustment interface; receiving adjustment information for the first element via the element adjustment interface; and based on the received adjustment information, presenting a fourth image of the second avatar having the adjusted first element.
In some embodiments, the element adjustment interface includes at least a portion of the fixed configuration information of the first element, and the process 600 further includes: acquiring the fourth image based on the received adjustment information and the fixed configuration information.
In some embodiments, the element adjustment interface includes at least one of: an input control for receiving the adjustment information, prompt information for the adjustment information, the prompt information indicating an adjustable portion of the first element, or the second image.
In some embodiments, the process 600 further comprises: stopping displaying the second image in the element adjustment interface in response to a preset operation for the element adjustment interface; and increasing the presentation area of the input control.
In some embodiments, the element adjustment indication is received by: presenting a content preview interface comprising the second image and an element adjustment portal; and receiving the element adjustment indication via the element adjustment portal.
In some embodiments, the process 600 further comprises: in response to a publishing indication for the second image, presenting a content editing interface for the second image; in response to receiving an edit confirmation via the content editing interface, presenting a content publication page for the second image; and publishing the second image in response to receiving a publication confirmation via the content publication page.
In some embodiments, the content editing interface includes an option for a post format of the second image.
In some embodiments, the process 600 further comprises: in response to an avatar creation indication, presenting an image acquisition interface; determining a plurality of images of the second object via the image acquisition interface based on at least one of the following operations: a selection operation of a photographed image, or an image photographing operation; and acquiring the second avatar based on the plurality of images in response to an avatar creation confirmation.
In some embodiments, the avatar creation indication is received by: presenting an avatar creation portal for the second avatar based on the trigger for the element usage portal; and receiving the avatar creation indication via the avatar creation portal.
In some embodiments, the first element includes a parent element and a child element obtained by partially adjusting the parent element.
In some embodiments, the parent element includes at least one of: predetermined elements, or user-customized elements.
In some embodiments, the first element comprises at least one of: image style, image background or image foreground.
Example apparatus and apparatus
Embodiments of the present disclosure also provide corresponding apparatus for implementing the above-described methods or processes. Fig. 7 illustrates a schematic block diagram of an apparatus 700 for image generation according to some embodiments of the present disclosure. The apparatus 700 may be implemented as or included in the terminal device 120. The various modules/components in apparatus 700 may be implemented in hardware, software, firmware, or any combination thereof.
As shown in fig. 7, the apparatus 700 includes a first rendering module 710 configured to render a first image of a first avatar having a first element and an element usage portal of the first element, the first avatar including media data representing an appearance feature of a first object. The apparatus 700 further comprises a second rendering module 720 configured to render a second image of a second avatar having the first element based on a trigger for the element usage portal, the second avatar including media data representing an appearance feature of a second object.
In some embodiments, the second presentation module 720 is further configured to: in response to a trigger for the element usage portal, present an element information interface of the first element, the element information interface including at least one of: a first image generation portal for the first element, or a respective viewing portal for each of at least one image having the first element; acquire the second image based on an operation associated with the element information interface; and in response to acquiring the second image, present the second image.
In some embodiments, the second presentation module 720 is further configured to: in response to a trigger for the first image generation portal, acquire the second image based on the first element and the second avatar.
In some embodiments, the second presentation module 720 is further configured to: in response to a trigger for a viewing portal of a third image of the at least one image, present a viewing interface of the third image, the viewing interface including a second image generation portal for the first element; and in response to a trigger for the second image generation portal, acquire the second image based on the third element possessed by the third image and the second avatar.
In some embodiments, the second presentation module 720 is further configured to: in response to an element adjustment indication for the second image, present an element adjustment interface; receive adjustment information for the first element via the element adjustment interface; and based on the received adjustment information, present a fourth image of the second avatar having the adjusted first element.
In some embodiments, the element adjustment interface includes at least a portion of the fixed configuration information of the first element, and the apparatus 700 further includes an image acquisition module configured to acquire the fourth image based on the received adjustment information and the fixed configuration information.
In some embodiments, the element adjustment interface includes at least one of: an input control for receiving the adjustment information, prompt information for the adjustment information, the prompt information indicating an adjustable portion of the first element, or the second image.
In some embodiments, the second presentation module 720 is further configured to: stop displaying the second image in the element adjustment interface in response to a preset operation for the element adjustment interface; and increase the presentation area of the input control.
In some embodiments, the element adjustment indication is received by: presenting a content preview interface comprising the second image and an element adjustment portal; and receiving the element adjustment indication via the element adjustment portal.
In some embodiments, the second presentation module 720 is further configured to: in response to a publishing indication for the second image, present a content editing interface for the second image; and in response to receiving an edit confirmation via the content editing interface, present a content publication page for the second image. The apparatus 700 further includes a publication module configured to publish the second image in response to receiving a publication confirmation via the content publication page.
In some embodiments, the content editing interface includes an option for a post format of the second image.
In some embodiments, the apparatus 700 further comprises: an interface presentation module configured to present an image acquisition interface in response to an avatar creation indication; an image acquisition module configured to determine a plurality of images of the second object via the image acquisition interface based on at least one of the following operations: a selection operation of a photographed image, or an image photographing operation; and an avatar acquisition module configured to acquire the second avatar based on the plurality of images in response to an avatar creation confirmation.
In some embodiments, the avatar creation indication is received by: presenting an avatar creation portal for the second avatar based on the trigger for the element usage portal; and receiving the avatar creation indication via the avatar creation portal.
In some embodiments, the first element includes a parent element and a child element obtained by partially adjusting the parent element.
In some embodiments, the parent element includes at least one of: predetermined elements, or user-customized elements.
In some embodiments, the first element comprises at least one of: image style, image background or image foreground.
The units and/or modules included in the apparatus 700 may be implemented in various manners, including software, hardware, firmware, or any combination thereof. In some embodiments, one or more units and/or modules may be implemented using software and/or firmware, such as machine-executable instructions stored on a storage medium. In addition to or in lieu of machine-executable instructions, some or all of the units and/or modules in the apparatus 700 may be implemented at least in part by one or more hardware logic components. By way of example and not limitation, exemplary types of hardware logic components that can be used include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
It will be appreciated that one or more steps of the above methods may be performed by suitable electronic devices or combinations of electronic devices. Such an electronic device or combination of electronic devices may include, for example, terminal device 120 and server 130 in fig. 1.
Fig. 8 illustrates a block diagram of an electronic device 800 in which one or more embodiments of the disclosure may be implemented. It should be understood that the electronic device 800 illustrated in fig. 8 is merely exemplary and should not be construed as limiting the functionality and scope of the embodiments described herein. The electronic device 800 shown in fig. 8 may be used to implement the terminal device 120 of fig. 1, or the apparatus 700 of fig. 7.
As shown in fig. 8, the electronic device 800 is in the form of a general-purpose electronic device. Components of electronic device 800 may include, but are not limited to, one or more processors or processing units 810, memory 820, storage device 830, one or more communication units 840, one or more input devices 850, and one or more output devices 860. The processing unit 810 may be a real or virtual processor and is capable of performing various processes according to programs stored in the memory 820. In a multiprocessor system, multiple processing units execute computer-executable instructions in parallel to increase the parallel processing capabilities of electronic device 800.
Electronic device 800 typically includes multiple computer storage media. Such a medium may be any available medium that is accessible by electronic device 800 including, but not limited to, volatile and non-volatile media, removable and non-removable media. The memory 820 may be volatile memory (e.g., registers, cache, random Access Memory (RAM)), non-volatile memory (e.g., read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory), or some combination thereof. Storage device 830 may be a removable or non-removable medium and may include machine-readable media such as flash drives, magnetic disks, or any other medium that may be used to store information and/or data and that may be accessed within electronic device 800.
The electronic device 800 may further include additional removable/non-removable, volatile/nonvolatile storage media. Although not shown in fig. 8, a magnetic disk drive for reading from or writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk may be provided. In these cases, each drive may be connected to a bus (not shown) by one or more data medium interfaces. Memory 820 may include a computer program product 825 having one or more program modules configured to perform the various methods or acts of the various embodiments of the present disclosure.
The communication unit 840 enables communication with other electronic devices through a communication medium. Additionally, the functionality of the components of the electronic device 800 may be implemented in a single computing cluster or in multiple computing machines capable of communicating over a communications connection. Thus, the electronic device 800 may operate in a networked environment using logical connections to one or more other servers, a network Personal Computer (PC), or another network node.
The input device 850 may be one or more input devices such as a mouse, keyboard, trackball, etc. The output device 860 may be one or more output devices such as a display, speakers, printer, etc. The electronic device 800 may also communicate with one or more external devices (not shown), such as storage devices, display devices, etc., with one or more devices that enable a user to interact with the electronic device 800, or with any device (e.g., network card, modem, etc.) that enables the electronic device 800 to communicate with one or more other electronic devices, as desired, via the communication unit 840. Such communication may be performed via an input/output (I/O) interface (not shown).
According to an exemplary implementation of the present disclosure, a computer-readable storage medium having stored thereon computer-executable instructions, wherein the computer-executable instructions are executed by a processor to implement the method described above is provided. According to an exemplary implementation of the present disclosure, there is also provided a computer program product tangibly stored on a non-transitory computer-readable medium and comprising computer-executable instructions that are executed by a processor to implement the method described above.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus, devices, and computer program products implemented according to the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various implementations of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of implementations of the present disclosure has been provided for illustrative purposes, is not exhaustive, and is not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various implementations described. The terminology used herein was chosen in order to best explain the principles of each implementation, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand each implementation disclosed herein.

Claims (19)

1. An image generation method, comprising:
presenting a first image of a first avatar having a first element and an element usage portal of the first element, the first avatar including media data representing an appearance feature of a first object; and
based on a trigger for the element usage portal, presenting a second image of a second avatar having the first element, the second avatar including media data representing an appearance feature of a second object.
2. The method of claim 1, wherein presenting the second image comprises:
in response to a trigger for the element usage portal, presenting an element information interface of the first element, the element information interface including at least one of:
a first image generation portal for the first element, or
a viewing portal for each of at least one image having the first element;
acquiring the second image based on an operation associated with the element information interface; and
in response to acquiring the second image, presenting the second image.
3. The method of claim 2, wherein acquiring the second image comprises:
in response to a trigger for the first image generation portal, acquiring the second image based on the first element and the second avatar.
4. The method of claim 2, wherein acquiring the second image comprises:
in response to a trigger for a viewing portal of a third image of the at least one image, presenting a viewing interface of the third image, the viewing interface including a second image generation portal for the first element; and
in response to a trigger for the second image generation portal, acquiring the second image based on the third element possessed by the third image and the second avatar.
5. The method of claim 1, further comprising:
responsive to an element adjustment indication for the second image, presenting an element adjustment interface;
receiving, via the element adjustment interface, adjustment information for the first element; and
based on the received adjustment information, a fourth image of the second avatar with the adjusted first element is presented.
6. The method of claim 5, wherein the element adjustment interface includes at least a portion of fixed configuration information of the first element, and the method further comprises:
acquiring the fourth image based on the received adjustment information and the fixed configuration information.
7. The method of claim 5, wherein the element adjustment interface comprises at least one of:
an input control for receiving the adjustment information,
prompt information for the adjustment information, the prompt information indicating an adjustable portion of the first element, or
the second image.
8. The method of claim 7, further comprising:
stopping displaying the second image in the element adjustment interface in response to a preset operation for the element adjustment interface; and
increasing the presentation area of the input control.
9. The method of claim 5, wherein the element adjustment indication is received by:
presenting a content preview interface comprising the second image and an element adjustment portal; and
receiving the element adjustment indication via the element adjustment portal.
10. The method of claim 1, further comprising:
in response to a publishing indication for the second image, presenting a content editing interface for the second image;
in response to receiving an edit confirmation via the content editing interface, presenting a content publication page for the second image; and
publishing the second image in response to receiving a publication confirmation via the content publication page.
11. The method of claim 10, wherein the content editing interface includes an option for a post format of the second image.
12. The method of claim 1, further comprising:
in response to an avatar creation indication, presenting an image acquisition interface;
determining a plurality of images of the second object based on at least one of the following operations via the image acquisition interface:
a selection operation of a photographed image, or
an image photographing operation; and
acquiring the second avatar based on the plurality of images in response to an avatar creation confirmation.
13. The method of claim 12, wherein the avatar creation indication is received by:
presenting an avatar creation portal for the second avatar based on a trigger for the element usage portal; and
receiving the avatar creation indication via the avatar creation portal.
14. The method of claim 1, wherein the first element comprises a parent element and a child element obtained by partially adjusting the parent element.
15. The method of claim 14, wherein the parent element comprises at least one of:
a predetermined element, or
a user-customized element.
16. The method of claim 1, wherein the first element comprises at least one of: image style, image background or image foreground.
17. An apparatus for image generation, comprising:
a first rendering module configured to render a first image of a first avatar having a first element and an element usage portal of the first element, the first avatar including media data representing an appearance feature of a first object; and
a second rendering module configured to render a second image of a second avatar having the first element based on a trigger for the element usage portal, the second avatar including media data representing an appearance feature of a second object.
18. An electronic device, comprising:
at least one processing unit; and
at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, the instructions, when executed by the at least one processing unit, causing the electronic device to perform the method of any one of claims 1 to 16.
19. A computer readable storage medium having stored thereon a computer program executable by a processor to implement the method of any of claims 1 to 16.