CN113325983A - Virtual image processing method, device, terminal and storage medium

Virtual image processing method, device, terminal and storage medium

Info

Publication number
CN113325983A
Authority
CN
China
Prior art keywords
avatar
editing
virtual image
terminal
target
Prior art date
Legal status
Pending
Application number
CN202110736829.8A
Other languages
Chinese (zh)
Inventor
陈雪丹
赵雪
黄志斌
夏伟涛
Current Assignee
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd
Priority to CN202110736829.8A
Publication of CN113325983A
Current legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0484: for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04847: Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F 3/0487: using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/16: Sound input; sound output
    • G06F 3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces

Abstract

The application provides an avatar processing method, apparatus, terminal and storage medium, belonging to the field of computer technology. The method includes: displaying at least one candidate avatar in a selection interface, the selection interface being used to select a target avatar that is displayed in a floating manner when a communication message is received; obtaining a second avatar based on an editing operation on a displayed first avatar; displaying the second avatar in the selection interface; and setting the second avatar as the target avatar in response to a setting operation on the second avatar. The method prompts the user about incoming messages by displaying an avatar in a floating manner, making message prompting flexible and engaging and enhancing the prompting effect. In addition, the user is allowed to customize the target avatar, fully meeting the user's personalization needs.

Description

Virtual image processing method, device, terminal and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a terminal, and a storage medium for processing an avatar.
Background
With the widespread development of computer technology, a terminal receives various communication messages, such as short messages, instant messaging messages and incoming call requests, and prompts the user when a communication message arrives. The common prompting method today is to play a prompt audio set by the user. However, this prompting method is monotonous and not flexible enough.
Disclosure of Invention
Embodiments of the application provide an avatar processing method, apparatus, terminal and storage medium that prompt the user about messages in a more flexible way. The technical solution is as follows:
In one aspect, an avatar processing method is provided, the method including:
displaying at least one candidate avatar in a selection interface, the selection interface being used to select a target avatar that is displayed in a floating manner when a communication message is received;
obtaining a second avatar based on an editing operation on the displayed first avatar;
displaying the second avatar in the selection interface; and
setting the second avatar as the target avatar in response to a setting operation on the second avatar.
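To make the claimed flow concrete, the following is a minimal Kotlin sketch of these four steps; every type and member name (Avatar, AvatarSelectionScreen, AvatarProcessor and so on) is an illustrative assumption, not an identifier from the application.

```kotlin
// A minimal sketch of the claimed flow; every name here is an assumption.
data class Avatar(val id: String, val isCustom: Boolean = false)

interface AvatarSelectionScreen {
    fun showCandidates(candidates: List<Avatar>) // step 1: display candidate avatars
    fun showEdited(avatar: Avatar)               // step 3: display the second avatar
}

class AvatarProcessor(private val screen: AvatarSelectionScreen) {
    var targetAvatar: Avatar? = null // displayed in a floating manner on message receipt
        private set

    // Step 2: derive the second avatar from an editing operation on the first.
    fun onEditOperation(first: Avatar, edit: (Avatar) -> Avatar) {
        val second = edit(first)
        screen.showEdited(second)
    }

    // Step 4: the user's setting operation on the second avatar.
    fun onSetOperation(second: Avatar) {
        targetAvatar = second
    }
}
```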
In one possible implementation, the first avatar includes a first body element, a first decoration element and a first sound element, and obtaining the second avatar based on the editing operation on the displayed first avatar includes at least one of the following:
editing the first body element in response to a body editing operation on the first avatar to obtain the second avatar;
editing the first decoration element in response to a decoration editing operation on the first avatar to obtain the second avatar; or
editing the first sound element in response to a sound editing operation on the first avatar to obtain the second avatar.
In one possible implementation, the first avatar includes a plurality of elements, and obtaining the second avatar based on the editing operation on the displayed first avatar includes:
displaying an editing interface in response to an editing instruction operation on the first avatar, the editing interface including a plurality of editing controls, each editing control being used to edit one element of the first avatar; and
editing at least one element of the first avatar based on at least one of the plurality of editing controls to obtain the second avatar.
In one possible implementation, the first avatar includes a first body element, and editing at least one element of the first avatar based on at least one of the plurality of editing controls to obtain the second avatar includes:
acquiring a second body element in response to a trigger operation on a body editing control; and
replacing the first body element in the first avatar with the second body element to obtain the second avatar.
In one possible implementation, acquiring the second body element in response to the trigger operation on the body editing control includes:
displaying an image library in response to the trigger operation on the body editing control, the image library including at least one image, and acquiring the second body element based on an image selected from the image library; or
displaying a video library in response to the trigger operation on the body editing control, the video library including at least one video, and acquiring the second body element based on a video selected from the video library.
In one possible implementation, acquiring the second body element based on the image selected from the image library includes:
displaying an image editing interface, the image editing interface including a mask, the mask including a transparent region and an opaque region, the transparent region having the same shape and size as the first body element of the first avatar, and the mask being overlaid on the image;
capturing a target image based on the transparent region, the target image consisting of the image region of the image that lies within the transparent region; and
determining the target image as the second body element.
In one possible implementation, acquiring the second body element based on the video selected from the video library includes:
displaying a video editing interface, the video editing interface including a capture control and a mask, the mask including a transparent region and an opaque region, the transparent region having the same shape and size as the first body element of the first avatar, and the mask being overlaid on the video;
capturing a video clip based on the time period indicated by the capture control and the transparent region, the video clip consisting of the image regions of the video that lie within the transparent region during that time period; and
determining the video clip as the second body element.
In one possible implementation, before capturing the video clip based on the time period indicated by the capture control and the transparent region, the method further includes:
determining the time period indicated by the capture control based on a touch operation on the capture control.
In one possible implementation, the first avatar includes a first sound element, and editing at least one element of the first avatar based on at least one of the plurality of editing controls to obtain the second avatar includes:
acquiring a second sound element in response to a trigger operation on a sound editing control; and
replacing the first sound element in the first avatar with the second sound element to obtain the second avatar.
In one possible implementation, acquiring the second sound element in response to the trigger operation on the sound editing control includes:
displaying an audio library in response to the trigger operation on the sound editing control, the audio library including at least one audio; and
acquiring the second sound element based on an audio selected from the audio library.
In one possible implementation, acquiring the second sound element based on the audio selected from the audio library includes:
displaying a capture control in the editing interface;
capturing, from the audio, the audio clip indicated by the capture control based on a touch operation on the capture control; and
determining the audio clip as the second sound element.
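As a rough illustration of the audio clipping step, the sketch below cuts a mono PCM buffer to the period indicated by a capture control; the function name and the raw-PCM representation are assumptions, since the application does not specify an audio format.

```kotlin
// Hedged sketch: clip a mono PCM buffer to [startSec, endSec); names assumed.
fun clipPcm(samples: ShortArray, sampleRate: Int, startSec: Double, endSec: Double): ShortArray {
    require(endSec > startSec) { "clip period must be non-empty" }
    val from = (startSec * sampleRate).toInt().coerceIn(0, samples.size)
    val to = (endSec * sampleRate).toInt().coerceIn(from, samples.size)
    return samples.copyOfRange(from, to) // the clip becomes the second sound element
}
```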
In one possible implementation, after acquiring the second body element based on the video selected from the video library, the method further includes:
displaying an audio enable option in the editing interface in a case that the video has corresponding background audio, the audio enable option being used to determine the background audio corresponding to the video as the sound element of the second avatar.
In one possible implementation, after editing at least one element of the first avatar based on at least one of the plurality of editing controls to obtain the second avatar, the method further includes:
displaying, in the editing interface, the display effect of the second avatar upon receipt of a communication message.
In one possible implementation, the selection interface includes a first display area and a second display area, the first display area displaying the at least one candidate avatar and the second display area displaying the effect of the selected avatar floating over the desktop.
In one possible implementation, displaying the second avatar in the selection interface includes:
displaying the second avatar in a selected state in the first display area;
and setting the second avatar as the target avatar in response to the setting operation on the second avatar includes:
setting the second avatar as the target avatar in response to a confirmation operation on the selected second avatar in the first display area.
In one possible implementation, the selected avatar includes a sound element, and the method further includes:
playing the sound element of the selected avatar while the effect of the selected avatar is displayed in the second display area.
In one possible implementation, after setting the second avatar as the target avatar in response to the setting operation on the second avatar, the method further includes:
displaying the second avatar in a floating manner in the current interface in response to receiving a communication message.
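The application does not name a platform, but on Android the floating display could be realized with an overlay window, as in this sketch; it assumes the SYSTEM_ALERT_WINDOW permission has already been granted and that avatarView renders the target avatar.

```kotlin
import android.content.Context
import android.graphics.PixelFormat
import android.view.View
import android.view.WindowManager

// Sketch: float the avatar view over the current interface when a message arrives.
fun hoverDisplay(context: Context, avatarView: View) {
    val wm = context.getSystemService(Context.WINDOW_SERVICE) as WindowManager
    val params = WindowManager.LayoutParams(
        WindowManager.LayoutParams.WRAP_CONTENT,
        WindowManager.LayoutParams.WRAP_CONTENT,
        WindowManager.LayoutParams.TYPE_APPLICATION_OVERLAY, // floats above other apps
        WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE,       // do not steal input focus
        PixelFormat.TRANSLUCENT
    )
    wm.addView(avatarView, params) // later: wm.removeView(avatarView) to dismiss
}
```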
In one possible implementation, the selection interface includes an option corresponding to a first prompt type and an option corresponding to a second prompt type, and the method further includes:
determining, in response to a trigger operation on the option corresponding to the first prompt type, that the message prompt type is the first prompt type, the first prompt type being: displaying the target avatar in a floating manner upon receipt of a communication message without playing the sound element of the target avatar; or
determining, in response to a trigger operation on the option corresponding to the second prompt type, that the message prompt type is the second prompt type, the second prompt type being: displaying the target avatar in a floating manner upon receipt of a communication message and playing the sound element of the target avatar.
In one possible implementation, the selection interface includes a do-not-disturb option, and the method further includes:
enabling a do-not-disturb function in response to a trigger operation on the do-not-disturb option, the do-not-disturb function being: in silent mode, displaying the target avatar in a floating manner upon receipt of a communication message without playing the sound element of the target avatar.
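The two prompt types and the do-not-disturb rule reduce to a small decision, sketched below; the enum and function names are assumptions. The avatar is always displayed, and only the sound element is conditional.

```kotlin
// Sketch of the prompt-type decision; names are assumptions.
enum class PromptType { DISPLAY_ONLY, DISPLAY_AND_PLAY }

fun shouldPlaySound(type: PromptType, doNotDisturb: Boolean, silentMode: Boolean): Boolean {
    if (doNotDisturb && silentMode) return false // do-not-disturb applies in silent mode
    return type == PromptType.DISPLAY_AND_PLAY
}
```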
In one possible implementation, before displaying the at least one candidate avatar in the selection interface, the method further includes:
displaying an avatar setting interface, the avatar setting interface including a plurality of application identifiers; and
displaying, in response to a selection operation on a target application identifier corresponding to a target application, the selection interface corresponding to the target application identifier, the selection interface being used to select the target avatar displayed in a floating manner when a communication message of the target application is received.
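Selecting a target avatar per application identifier amounts to keeping a mapping like the one sketched below, reusing the Avatar type sketched earlier; the store and its members are assumptions.

```kotlin
// Sketch: one target avatar per application identifier; names assumed.
class PerAppAvatarStore {
    private val targets = mutableMapOf<String, Avatar>() // application identifier -> target

    fun setTarget(appId: String, avatar: Avatar) { targets[appId] = avatar }

    // null means no floating prompt has been configured for this application
    fun targetFor(appId: String): Avatar? = targets[appId]
}
```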
In another aspect, an avatar processing apparatus is provided, the apparatus including:
a first display module configured to display at least one candidate avatar in a selection interface, the selection interface being used to select a target avatar displayed in a floating manner when a communication message is received;
an avatar editing module configured to obtain a second avatar based on an editing operation on the displayed first avatar;
a second display module configured to display the second avatar in the selection interface; and
an avatar setting module configured to set the second avatar as the target avatar in response to a setting operation on the second avatar.
In one possible implementation, the first avatar includes a first body element, a first decoration element and a first sound element, and the avatar editing module is configured to perform at least one of the following:
editing the first body element in response to a body editing operation on the first avatar to obtain the second avatar;
editing the first decoration element in response to a decoration editing operation on the first avatar to obtain the second avatar; or
editing the first sound element in response to a sound editing operation on the first avatar to obtain the second avatar.
In one possible implementation, the first avatar includes a plurality of elements, and the avatar editing module includes:
a control display sub-module configured to display an editing interface in response to an editing instruction operation on the first avatar, the editing interface including a plurality of editing controls, each editing control being used to edit one element of the first avatar; and
an avatar editing sub-module configured to edit at least one element of the first avatar based on at least one of the plurality of editing controls to obtain the second avatar.
In one possible implementation, the first avatar includes a first body element, and the avatar editing sub-module includes:
a body element acquisition unit configured to acquire a second body element in response to a trigger operation on a body editing control; and
a body element replacement unit configured to replace the first body element in the first avatar with the second body element to obtain the second avatar.
In one possible implementation, the body element acquisition unit includes:
a first acquisition subunit configured to display an image library in response to the trigger operation on the body editing control, the image library including at least one image, and acquire the second body element based on an image selected from the image library; or
a second acquisition subunit configured to display a video library in response to the trigger operation on the body editing control, the video library including at least one video, and acquire the second body element based on a video selected from the video library.
In one possible implementation, the first acquisition subunit is configured to display an image editing interface, the image editing interface including a mask, the mask including a transparent region and an opaque region, the transparent region having the same shape and size as the first body element of the first avatar, and the mask being overlaid on the image; capture a target image based on the transparent region, the target image consisting of the image region of the image that lies within the transparent region; and determine the target image as the second body element.
In one possible implementation, the second acquisition subunit is configured to display a video editing interface, the video editing interface including a capture control and a mask, the mask including a transparent region and an opaque region, the transparent region having the same shape and size as the first body element of the first avatar, and the mask being overlaid on the video; capture a video clip based on the time period indicated by the capture control and the transparent region, the video clip consisting of the image regions of the video that lie within the transparent region during that time period; and determine the video clip as the second body element.
In one possible implementation, the second acquisition subunit is further configured to determine the time period indicated by the capture control based on a touch operation on the capture control.
In one possible implementation, the first avatar includes a first sound element, and the avatar editing sub-module includes:
a sound element acquisition unit configured to acquire a second sound element in response to a trigger operation on a sound editing control; and
a sound element replacement unit configured to replace the first sound element in the first avatar with the second sound element to obtain the second avatar.
In one possible implementation, the sound element acquisition unit includes:
a display subunit configured to display an audio library in response to the trigger operation on the sound editing control, the audio library including at least one audio; and
a third acquisition subunit configured to acquire the second sound element based on an audio selected from the audio library.
In one possible implementation, the third acquisition subunit is configured to display a capture control in the editing interface; capture, from the audio, the audio clip indicated by the capture control based on a touch operation on the capture control; and determine the audio clip as the second sound element.
In one possible implementation, the control display sub-module is further configured to display an audio enable option in the editing interface in a case that the video has corresponding background audio, the audio enable option being used to determine the background audio corresponding to the video as the sound element of the second avatar.
In one possible implementation, the avatar editing module further includes:
an effect display sub-module configured to display, in the editing interface, the display effect of the second avatar upon receipt of a communication message.
In one possible implementation, the selection interface includes a first display area and a second display area, the first display area displaying the at least one candidate avatar and the second display area displaying the effect of the selected avatar floating over the desktop.
In one possible implementation, the second display module is configured to display the second avatar in a selected state in the first display area; and
the avatar setting module is configured to set the second avatar as the target avatar in response to a confirmation operation on the selected second avatar in the first display area.
In one possible implementation, the selected avatar includes a sound element, and the apparatus further includes:
an audio playing module configured to play the sound element of the selected avatar while the effect of the selected avatar is displayed in the second display area.
In one possible implementation, the apparatus further includes:
a third display module configured to display the second avatar in a floating manner in the current interface in response to receiving a communication message.
In one possible implementation, the selection interface includes an option corresponding to a first prompt type and an option corresponding to a second prompt type, and the apparatus further includes:
a first determination module configured to determine, in response to a trigger operation on the option corresponding to the first prompt type, that the message prompt type is the first prompt type, the first prompt type being: displaying the target avatar in a floating manner upon receipt of a communication message without playing the sound element of the target avatar; or
a second determination module configured to determine, in response to a trigger operation on the option corresponding to the second prompt type, that the message prompt type is the second prompt type, the second prompt type being: displaying the target avatar in a floating manner upon receipt of a communication message and playing the sound element of the target avatar.
In one possible implementation, the selection interface includes a do-not-disturb option, and the apparatus further includes:
a function enabling module configured to enable a do-not-disturb function in response to a trigger operation on the do-not-disturb option, the do-not-disturb function being: in silent mode, displaying the target avatar in a floating manner upon receipt of a communication message without playing the sound element of the target avatar.
In one possible implementation, the first display module is further configured to display an avatar setting interface, the avatar setting interface including a plurality of application identifiers; and display, in response to a selection operation on a target application identifier corresponding to a target application, the selection interface corresponding to the target application identifier, the selection interface being used to select the target avatar displayed in a floating manner when a communication message of the target application is received.
In another aspect, a terminal is provided, the terminal including a processor and a memory, the memory storing at least one instruction that is loaded and executed by the processor to implement the operations performed in the avatar processing method of any of the possible implementations above.
In another aspect, a computer-readable storage medium is provided, the storage medium storing at least one instruction that is loaded and executed by a processor to implement the operations performed in the avatar processing method of any of the possible implementations above.
In yet another aspect, a computer program product or computer program is provided, the computer program product or computer program including computer instructions stored in a computer-readable storage medium, the computer instructions being loaded and executed by a processor to implement the operations performed in the avatar processing method of any of the possible implementations above.
The beneficial effects of the technical solution provided by the embodiments of the application include at least the following:
The embodiments of the application provide a new message prompting approach: messages are prompted by displaying an avatar in a floating manner, so that the user learns from the floating avatar that the terminal has received a communication message. This makes message prompting flexible and engaging and enhances the prompting effect. In addition, the embodiments allow the user to customize the avatar: candidate avatars are provided for the user to choose from, and a chosen avatar can be further edited into a customized one, fully meeting the user's personalization needs.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application;
FIG. 2 is a flowchart of an avatar processing method provided by an embodiment of the present application;
FIG. 3 is a flowchart of an avatar processing method provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a selection interface provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of an editing interface provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a video library display interface provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a video editing interface provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a video editing interface provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of an editing interface provided by an embodiment of the present application;
FIG. 10 is a flowchart of an avatar processing method provided by an embodiment of the present application;
FIG. 11 is a flowchart of an avatar processing method provided by an embodiment of the present application;
FIG. 12 is a block diagram of an avatar processing apparatus provided by an embodiment of the present application;
FIG. 13 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application clearer, embodiments of the present application are described in further detail below with reference to the accompanying drawings.
It will be understood that the terms "each", "plurality" and "any" used herein work as follows: "plurality" means two or more, "each" refers to each of the corresponding plurality, and "any" refers to any one of the corresponding plurality. For example, if the plurality of avatars includes 10 avatars, "each avatar" refers to each of those 10 avatars, and "any avatar" refers to any one of them.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application. Referring to FIG. 1, the implementation environment includes a terminal 101 and a server 102, connected via a wireless or wired network. A target application served by the server 102 is installed on the terminal 101, through which the terminal 101 can implement functions such as data transmission and message interaction. Optionally, the terminal 101 is a computer, a mobile phone, a tablet computer or another terminal. Optionally, the target application is an application in the operating system of the terminal 101 or an application provided by a third party. The target application has a message prompt function; optionally, it also has other functions, for example interface decoration, image and video editing, and audio editing, which is not limited in this application. Optionally, the server 102 is a background server of the target application or a cloud server providing services such as cloud computing and cloud storage.
The server 102 is configured to provide at least one candidate avatar to the terminal 101, and the terminal 101 is configured to display the candidate avatars in the selection interface of the target application so that the user can select a preferred avatar from them. The user can edit the preferred avatar to obtain a customized avatar; correspondingly, the terminal 101 is further configured to edit the displayed first avatar based on the user's editing operation to obtain a second avatar. The user can then set the second avatar as the target avatar, after which the terminal 101 prompts the user about a received communication message by displaying the second avatar in a floating manner. The avatar processing method of the embodiments of the application can be applied to prompting any type of communication message: for example, target avatars corresponding to short messages, instant messaging messages, incoming call requests or other communication messages are set by the method provided herein, and when any such communication message is received, the corresponding target avatar is displayed in a floating manner as the prompt.
FIG. 2 is a flowchart of an avatar processing method provided by an embodiment of the present application. Referring to FIG. 2, the embodiment includes the following steps:
201. The terminal displays at least one candidate avatar in a selection interface, the selection interface being used to select a target avatar that is displayed in a floating manner when a communication message is received.
Avatars can be characterized in several ways. By type, avatars include cartoon characters and real-character images: for example, a cartoon cat, a cartoon person or another cartoon character, and correspondingly an image of a real cat, a real person or another real character. By display effect, avatars include dynamic avatars and static avatars. By source, avatars include those acquired from the server and those acquired locally from the terminal. By whether the user has edited them, avatars include original avatars not edited by the user and avatars edited by the user. By function, avatars are used for message prompting: any avatar can be set as the target avatar, and once it is set, the terminal prompts the user about a received communication message by displaying that avatar in a floating manner. By composition, an avatar includes at least one of a body element, a decoration element and a sound element; the decoration element decorates the body element, and together they constitute the avatar's picture. The body element and the decoration element can each take any of the avatar forms described above. The sound element is played synchronously as background audio while the terminal displays the avatar in a floating manner, reinforcing the avatar's message prompt effect.
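One possible way to model the composition just described is sketched below; the types are assumptions introduced only to make the body/decoration/sound decomposition concrete.

```kotlin
// Sketch of the avatar composition; all types are illustrative assumptions.
sealed interface BodyElement
data class ImageBody(val imagePath: String) : BodyElement                      // static avatar
data class VideoBody(val clipPath: String, val durationMs: Long) : BodyElement // dynamic avatar

data class ComposedAvatar(
    val body: BodyElement,          // the main subject of the avatar's picture
    val decoration: String? = null, // decorates the body element
    val sound: String? = null       // played as background audio during the floating display
)
```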
Optionally, the selection interface also displays introduction information for an avatar, for example the avatar's name, the number of resources required to obtain it, the effect of prompting messages with it, its popularity value and its tags, where the tags include dynamic avatar, static avatar, cartoon and the like; this is not limited in the embodiments of the application.
Optionally, the at least one candidate avatar is displayed by category in the selection interface. For example, the selection interface includes a plurality of categories, and when the terminal detects a trigger operation on any category, it displays in the selection interface the at least one avatar corresponding to that category. For example, the selection interface includes two categories, dynamic avatars and static avatars: when the terminal detects a trigger operation on the first category, at least one dynamic avatar is displayed in the selection interface, and when it detects a trigger operation on the second category, at least one static avatar is displayed. Optionally, the selection interface further includes other categories, for example celebrities, pets and animation characters, which is not limited in the embodiments of the application. Displaying avatars by category in the selection interface helps the user find a preferred avatar quickly.
Optionally, the selection interface is a selection interface of a target application installed on the terminal, whose function is to display the target avatar set by the user in a floating manner, to prompt the reception of a communication message, when it determines that one has been received. The terminal can log in to the target application based on a user identifier and then set the target avatar through the selection interface. The user identifier represents the user's identity; optionally, it is an account registered by the user or the user's mobile phone number.
202. The terminal obtains a second avatar based on an editing operation on the displayed first avatar.
The first avatar is any one of the candidate avatars. Optionally, the editing operation includes cropping the first avatar, adding material to the first avatar, replacing an element of the first avatar, and the like, which is not limited in the embodiments of the application.
In one possible implementation, the first avatar includes a first body element, a first decoration element and a first sound element. Correspondingly, the terminal obtains the second avatar based on the editing operation on the displayed first avatar by at least one of the following: editing the first body element in response to a body editing operation on the first avatar to obtain the second avatar; editing the first decoration element in response to a decoration editing operation on the first avatar to obtain the second avatar; or editing the first sound element in response to a sound editing operation on the first avatar to obtain the second avatar.
In the embodiments of the application, the first avatar includes multiple elements, namely a body element, a decoration element and a sound element, and the user can choose any one or more of them to edit. The editing options are therefore rich and flexible, customizing the avatar is more convenient, and the user's personalization needs are met.
203. The terminal displays the second avatar in the selection interface.
204. The terminal sets the second avatar as the target avatar in response to a setting operation on the second avatar.
After the terminal sets the second avatar as the target avatar, it prompts the user about a received communication message by displaying the second avatar in a floating manner.
Note that the user can also select a preferred avatar from the displayed avatars and set it directly as the target avatar without editing it.
The embodiments of the application provide a new message prompting approach: messages are prompted by displaying an avatar in a floating manner, so that the user learns from the floating avatar that the terminal has received a communication message. This makes message prompting flexible and engaging and enhances the prompting effect. In addition, the embodiments allow the user to customize the avatar: candidate avatars are provided for the user to choose from, and a chosen avatar can be further edited into a customized one, fully meeting the user's personalization needs.
FIG. 3 is a flowchart of an avatar processing method provided by an embodiment of the present application. In this embodiment, the body element of the avatar can be customized. Referring to FIG. 3, the embodiment includes the following steps:
301. The terminal displays at least one candidate avatar in a selection interface, the selection interface being used to select a target avatar that is displayed in a floating manner when a communication message is received.
Optionally, before displaying the at least one candidate avatar, the terminal obtains it from the server. That is, after logging in to the target application based on the user identifier, the terminal sends the user identifier to the server; the server obtains at least one candidate avatar recommended for the user according to the identifier and sends it to the terminal; and the terminal receives it.
In one possible implementation, the selection interface includes a first display area and a second display area; the first display area displays the at least one candidate avatar, and the second display area displays the effect of the selected avatar floating over the desktop. Optionally, the second display area is located above the first display area. Optionally, a virtual desktop is displayed in the second display area, with the currently selected avatar floating over it. When the user switches the selected avatar, the avatar displayed in the second display area switches accordingly, so the user can preview the display effect of any candidate avatar there.
In the embodiments of the application, the second display area simulates the floating display effect the currently selected avatar would have on the desktop, when a communication message is received, once it is set as the target avatar. The user therefore knows the display effect of the currently selected avatar before setting it as the target avatar, can choose a preferred avatar more easily, and the display effect of the target avatar the user sets is guaranteed to match the user's preference.
In one possible implementation, the selected avatar includes a sound element; correspondingly, the terminal plays the sound element of the selected avatar while displaying its effect in the second display area. This lets the user experience in advance the message prompt effect the avatar will have once it is set as the target avatar and a communication message is received, making it easier to choose a preferred avatar.
FIG. 4 is a schematic diagram of a selection interface. Referring to FIG. 4, a plurality of avatars, for example a bell avatar and a deer avatar, are displayed in the first display area 401 of the selection interface. A selection box sits above the second avatar in the custom avatar category, and a "custom" label is displayed in the box to indicate that this avatar is the currently selected one. The second display area 402 shows the effect of the selected avatar 404 floating over the desktop 403.
302. The terminal displays an editing interface in response to an editing instruction operation on the first avatar, the editing interface including a plurality of editing controls, each editing control being used to edit one element of the first avatar.
Optionally, the editing instruction operation on the first avatar is any operation, for example a trigger operation on the editing indication option corresponding to the first avatar in the selection interface, which is not limited in the embodiments of the application.
The first avatar includes a plurality of elements. For example, the first avatar includes a first body element, and correspondingly the editing interface includes a body editing control for editing the first body element. As another example, the first avatar includes a first decoration element, and correspondingly the editing interface includes a decoration editing control for editing the first decoration element. As another example, the first avatar includes a first sound element, and correspondingly the editing interface includes a sound editing control for editing the first sound element. The first decoration element decorates the first body element, and together they constitute the picture of the first avatar. The first sound element is played synchronously as background audio while the terminal displays the first avatar in a floating manner, reinforcing the first avatar's message prompt effect.
Note that the editing controls listed for the editing interface are only examples; in practice the editing interface can display any one or more editing controls, which is not limited in the embodiments of the application.
Optionally, the editing interface displays, above the editing controls, the display effect of the first avatar upon receipt of a communication message. Optionally, in a case that the first avatar is a dynamic avatar, a pause control is also displayed around the first avatar; while the first avatar is being displayed dynamically, the terminal freezes it in its currently displayed state in response to a trigger operation on the pause control.
FIG. 5 is a schematic diagram of an editing interface. Referring to FIG. 5, the editing interface includes a body editing control 501 and a sound editing control 502. The body editing control 501 displays the prompt "select body element" together with the first body element of the first avatar, prompting the user to edit the first body element. The sound editing control 502 displays the prompt "default sound element", indicating that the first sound element of the current first avatar is the default one and has not been edited by the user; it also displays the prompt "select sound element" to prompt the user to edit the first sound element. Continuing with FIG. 5, above the body editing control 501 is the display effect of the first avatar upon receipt of a communication message, and below the first avatar is a pause control displaying the prompt "effect preview".
303. The terminal acquires a second body element in response to a trigger operation on the body editing control.
In one possible implementation, the terminal acquires the second body element in response to the trigger operation on the body editing control as follows: the terminal displays an image library in response to the trigger operation, the image library including at least one image, and acquires the second body element based on an image selected from it; or the terminal displays a video library in response to the trigger operation, the video library including at least one video, and acquires the second body element based on a video selected from it.
The image library includes at least one of images local to the terminal or images the terminal acquires from the server, and the video library includes at least one of videos local to the terminal or videos the terminal acquires from the server. Optionally, an image in the image library is a still image, a GIF (Graphics Interchange Format) image or another type of image, which is not limited in the embodiments of the application.
In one possible implementation, the terminal acquires the second body element based on the image selected from the image library as follows: the terminal displays an image editing interface, the image editing interface including a mask, the mask including a transparent region and an opaque region, the transparent region having the same shape and size as the first body element of the first avatar, and the mask being overlaid on the image; the terminal captures a target image based on the transparent region, the target image consisting of the image region of the image that lies within the transparent region; and the terminal determines the target image as the second body element.
In the embodiments of the application, the mask is displayed over the image selected by the user, and the portion of the image inside the mask's transparent region is captured. On the one hand, the mask guarantees that the captured target image has the same shape and size as the first body element of the first avatar; on the other hand, the user can readily see the shape, size and content of the target image that will serve as the second body element.
Optionally, before the terminal captures the target image based on the transparent region, the image region exposed through the mask's transparent region is adjusted based on a touch operation on the transparent region, and this region forms the target image. The user can thus capture a preferred region of the selected image as the second body element, which improves the flexibility of customizing the avatar.
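On Android, capturing the image region inside a circular transparent region (as in FIG. 7) could look like the following sketch; the function is an assumption and uses a standard PorterDuff masking recipe rather than anything specified by the application.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Paint
import android.graphics.PorterDuff
import android.graphics.PorterDuffXfermode
import android.graphics.RectF

// Sketch: keep only the source pixels under a circular transparent region.
fun cropThroughMask(source: Bitmap, region: RectF): Bitmap {
    val out = Bitmap.createBitmap(region.width().toInt(), region.height().toInt(),
                                  Bitmap.Config.ARGB_8888)
    val canvas = Canvas(out)
    val paint = Paint(Paint.ANTI_ALIAS_FLAG)
    // Draw the region's shape, then keep only the source pixels inside it.
    canvas.drawOval(RectF(0f, 0f, region.width(), region.height()), paint)
    paint.xfermode = PorterDuffXfermode(PorterDuff.Mode.SRC_IN)
    canvas.drawBitmap(source, -region.left, -region.top, paint)
    return out // the target image, matching the first body element's shape and size
}
```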
In one possible implementation, the terminal acquires the second body element based on the video selected from the video library as follows: the terminal displays a video editing interface, the video editing interface including a capture control and a mask, the mask including a transparent region and an opaque region, the transparent region having the same shape and size as the first body element of the first avatar, and the mask being overlaid on the video; the terminal captures a video clip based on the time period indicated by the capture control and the transparent region, the video clip consisting of the image regions of the video that lie within the transparent region during that time period; and the terminal determines the video clip as the second body element. Optionally, the duration of the time period indicated by the capture control is not greater than a target duration; optionally, the target duration is the duration of the first avatar's dynamic effect.
Optionally, the terminal captures the video clip based on the time period indicated by the capture control and the transparent region as follows: from each frame of the video within the indicated time period, the terminal crops the image region that lies within the transparent region, and the regions cropped from the frames together form the video clip.
In the embodiments of the application, the mask is displayed over the video selected by the user, and the image regions inside the mask's transparent region during the time period indicated by the capture control are captured. On the one hand, the capture control and the mask guarantee that each frame of the clip captured from the video has the same shape and size as the first body element; on the other hand, the user can readily see the shape, size and content of the frames of the video clip that will serve as the second body element.
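Assembling the video clip then amounts to applying the same mask crop to every frame inside the indicated period, as in this sketch; the FrameSource decoder abstraction is hypothetical, and cropThroughMask is the sketch above.

```kotlin
import android.graphics.Bitmap
import android.graphics.RectF

// Hypothetical decoder abstraction; a real implementation might wrap a platform decoder.
interface FrameSource {
    val frameTimesMs: List<Long>
    fun frameAt(timeMs: Long): Bitmap
}

// Sketch: the clip is the masked region of each frame within [startMs, endMs).
fun captureClip(video: FrameSource, startMs: Long, endMs: Long, region: RectF): List<Bitmap> =
    video.frameTimesMs
        .filter { it in startMs until endMs }
        .map { cropThroughMask(video.frameAt(it), region) }
```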
In one possible implementation, before the terminal captures the video clip based on the time period indicated by the capture control and the transparent region, the time period indicated by the capture control is determined based on a touch operation on the capture control.
Optionally, the capture control sits on the video track corresponding to the video, and the terminal determines the indicated time period from the track. That is, based on the touch operation on the capture control, the terminal identifies the segment of the video track between the control's two ends and takes the time period corresponding to that segment as the time period indicated by the control. The user can freely adjust the track segment between the two ends and thereby select a preferred clip of the video to capture, which improves the flexibility of customizing the avatar. Optionally, the touch operation includes sliding operations on the two ends of the capture control.
Optionally, the time period indicated by the capture control is a default time period, for example the first 2 seconds of the video. The user then does not need to clip the video manually, which improves the efficiency of clipping the video and, in turn, of customizing the avatar.
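Mapping the capture control's two ends on the video track to a time period, clamped to the target duration, could be done as sketched below; all names are assumptions.

```kotlin
// Sketch: convert handle positions on the track into the indicated time period.
fun indicatedPeriod(leftPx: Float, rightPx: Float, trackWidthPx: Float,
                    videoMs: Long, targetMs: Long): LongRange {
    val start = (leftPx / trackWidthPx * videoMs).toLong().coerceIn(0, videoMs)
    val rawEnd = (rightPx / trackWidthPx * videoMs).toLong().coerceIn(start, videoMs)
    val end = minOf(rawEnd, start + targetMs) // may not exceed the target duration
    return start until end
}
```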
Optionally, in the case that the playing time length of the video selected from the video library is shorter than the target time length, the capture control is not displayed in the video editing interface. The length of the video selected by the user meets the requirement, so that the video length does not need to be intercepted, and in this case, the interface can be simplified by not displaying the intercepting control in the video editing interface.
Optionally, the video editing interface includes a confirmation interception control, and the terminal intercepts the video segment based on the time period and the transparent region indicated by the interception control in response to a trigger operation on the confirmation interception control.
Optionally, the terminal displays the interception progress information in a suspension manner in the video editing interface in the process of intercepting the video clip based on the time period and the transparent area indicated by the interception control. Due to the fact that the time spent on intercepting the video clips is possibly long, intercepting progress information is displayed in a suspension mode in the video editing interface, the time required by waiting for video intercepting can be conveniently estimated by a user, anxiety of the user caused by waiting is relieved, and therefore user experience is improved. Optionally, in the process of intercepting the video segment based on the time period and the transparent area indicated by the interception control, an interception cancellation control is also displayed in a suspended manner in the video editing interface, so that the user can cancel the video interception operation at any time.
FIG. 6 is a schematic diagram of a display interface of a video library. Referring to fig. 6, a plurality of videos are displayed in the interface, and the time duration of each video is displayed in the lower right corner of the video. The user can select the video from the videos according to the duration of the videos. With continued reference to fig. 6, the interface further includes an image display control, the image display control displays a prompt message "image", the terminal switches the video displayed in the interface to an image in response to the triggering operation on the image display control, and the user can select an image from the interface.
Fig. 7 is a schematic diagram of a video editing interface. Referring to fig. 7, a mask 701 is displayed in the upper part of the video editing interface and overlaid on the video; the transparent region in the mask 701 is circular. A video track 702 is displayed in the lower part of the interface, with a capture control 703 on it, and the time period indicated by the capture control 703 is the first 0.7 seconds of the video. The user can slide the two ends of the capture control 703 (the shaded areas in the figure) to adjust the indicated time period.
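As a minimal sketch of the circular-mask crop (assuming standard Android graphics APIs; the circle parameters are illustrative, since the embodiment only requires the transparent region to match the first body element's shape and size), each retained frame could be clipped as follows:

```kotlin
import android.graphics.*

// Keeps only the pixels of `frame` that fall inside the mask's circular
// transparent region; everything outside ends up fully transparent.
fun cropToCircle(frame: Bitmap, centerX: Float, centerY: Float, radius: Float): Bitmap {
    val out = Bitmap.createBitmap(frame.width, frame.height, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(out)
    val paint = Paint(Paint.ANTI_ALIAS_FLAG)
    canvas.drawCircle(centerX, centerY, radius, paint)   // the transparent region's shape
    paint.xfermode = PorterDuffXfermode(PorterDuff.Mode.SRC_IN)
    canvas.drawBitmap(frame, 0f, 0f, paint)              // keep frame pixels inside it
    return out
}
```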
Fig. 8 is a schematic diagram of a video editing interface. Referring to fig. 8, the video editing interface includes a confirm-capture control labeled "complete"; in response to a trigger operation on it, the terminal captures a video clip based on the time period indicated by the capture control 802 and the transparent region of the mask 801. While the clip is being captured, the terminal floats the capture progress and a cancel-capture control, labeled "cancel", in the video editing interface, so that the user can cancel the video capture at any time.
304. The terminal replaces the first body element in the first avatar with the second body element to obtain a second avatar.

After acquiring the second body element, the terminal replaces the first body element in the first avatar with the second body element while keeping the other elements of the first avatar unchanged, thereby obtaining the second avatar.
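One way to picture this step (all type and field names are illustrative assumptions) is an avatar modeled as a bundle of elements, where editing swaps exactly one element and copies the rest:

```kotlin
class BodyElement(val source: String)        // e.g. a captured image or video clip
class DecorationElement(val source: String)
class SoundElement(val source: String)

data class Avatar(
    val body: BodyElement,
    val decoration: DecorationElement,
    val sound: SoundElement
)

// Step 304 in miniature: the decoration and sound elements of the first
// avatar are kept unchanged; only the body element is replaced.
fun replaceBody(first: Avatar, secondBody: BodyElement): Avatar =
    first.copy(body = secondBody)
```

Steps 1004 and 1104 below apply the same pattern to the sound and decoration elements, respectively.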
In a possible implementation manner, after the terminal acquires the second body element in step 303 based on a video selected from the video library, and in the case that the video has corresponding background audio, the terminal displays an audio enabling option in the editing interface. The audio enabling option is used for determining the background audio of the video as the sound element of the second avatar; correspondingly, in response to a trigger operation on the audio enabling option, the terminal determines the background audio of the video as the sound element of the second avatar. Optionally, this includes: the terminal determines the background audio as a second sound element and replaces the first sound element in the second avatar with it. In this way, the user can customize the body element of the avatar with the selected video and also customize its sound element with the video's background audio, fully meeting the user's personalized needs.
In a possible implementation manner, after obtaining the second avatar, the terminal displays in the editing interface the display effect of the second avatar upon receipt of a communication message, so that the user can conveniently preview the customized effect. Optionally, the terminal plays the sound element of the second avatar while displaying this effect, letting the user experience the message prompt of the second avatar both visually and aurally. Optionally, the terminal displays the effect and plays the sound element only under a target condition; in other cases, only the last frame of the second avatar's dynamic effect is displayed and the sound element is not played. Optionally, the target condition includes the terminal detecting a preview operation, the terminal having just jumped to the editing interface from another interface, the second avatar having been edited, and the like, which is not limited in this embodiment of the application.
Fig. 9 is a schematic diagram of an editing interface. Referring to fig. 9, the upper part of the editing interface displays the display effect of the second avatar upon receipt of a communication message. The editing interface further includes an audio enabling option 901, with the prompt "use video sound" displayed on its left, prompting the user to trigger the option 901 so that the background audio of the video underlying the body element of the second avatar is determined as the sound element of the second avatar.
In this embodiment of the application, the user can customize the body element of the avatar from a favorite image or from a favorite video, which enriches the styles of customized avatars and fully meets the user's personalized needs.
305. The terminal displays the second avatar in the selection interface.

In a possible implementation manner, displaying the second avatar in the selection interface includes: the terminal displays the second avatar in a selected state in the first display area of the selection interface, and displays the effect of the second avatar floating on the desktop in the second display area of the selection interface. Since the second avatar has just been created by the user, who most likely wants to set it as the target avatar, showing it already selected in the first display area, with its floating-on-desktop effect in the second display area, saves the user the step of selecting it, simplifies the operation of setting the target avatar, and improves operation efficiency.
In a possible implementation manner, while displaying the effect of the second avatar floating on the desktop in the second display area, the terminal plays the sound element of the second avatar, so that the user can better appreciate the message prompt effect of the second avatar.
Optionally, in the case that the first avatar is a user-defined avatar, the first avatar is no longer displayed in the selection interface once the second avatar is displayed there; in the case that the first avatar is an avatar provided by the target application program, the first avatar remains displayed in the selection interface alongside the second avatar. That is, when the user edits a first avatar provided by the target application program, the resulting second avatar is saved as a new avatar and displayed in the selection interface without affecting the display of the original first avatar, which stays unchanged. When the user edits a user-defined first avatar, the resulting second avatar is saved as the updated first avatar and displayed in the selection interface, which no longer includes the pre-edit first avatar.
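This save rule can be sketched as follows (the list-based repository is an assumption for illustration; the embodiment does not specify how avatars are stored):

```kotlin
// Editing a preset avatar appends a new entry; editing a user-defined avatar
// overwrites it in place, so the pre-edit version disappears from the list
// backing the selection interface.
fun <A> saveEdited(first: A, second: A, saved: MutableList<A>, firstIsUserDefined: Boolean) {
    val index = saved.indexOf(first)
    if (firstIsUserDefined && index >= 0) {
        saved[index] = second   // overwrite the user-defined avatar
    } else {
        saved.add(second)       // keep the preset, append the new avatar
    }
}
```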
Optionally, the avatars provided by the target application program in the selection interface have no corresponding deletion control, while user-defined avatars do; through the deletion control, the user can delete a user-defined avatar.
306. The terminal sets the second avatar as the target avatar in response to a setting operation of the second avatar.
Optionally, the selection interface includes a setting option corresponding to the second avatar; correspondingly, setting the second avatar as the target avatar in response to a setting operation on the second avatar includes: the terminal, in response to a trigger operation on the setting option, sets the second avatar as the target avatar.

In one possible implementation manner, the second avatar is displayed in a selected state in the first display area of the selection interface; correspondingly, setting the second avatar as the target avatar in response to a setting operation on the second avatar includes: the terminal sets the second avatar as the target avatar in response to a confirmation operation on the second avatar selected in the first display area.

Optionally, the selection interface includes a confirmation option; correspondingly, setting the second avatar as the target avatar in response to a confirmation operation on the selected second avatar includes: the terminal, in response to a trigger operation on the confirmation option, sets the second avatar selected in the first display area as the target avatar.
It should be noted that steps 303-304 above describe only one implementation of editing at least one element in the first avatar, based on at least one of the multiple editing controls, to obtain the second avatar; in other embodiments, the second avatar can be obtained in other manners.
307. In response to receiving a communication message, the terminal displays the second avatar as a floating element in the current interface.

Optionally, this includes: when the screen is not locked, the terminal, in response to receiving the communication message, floats the second avatar above the desktop; when the screen is locked, the terminal, in response to receiving the communication message, brightens the screen and floats the second avatar above the lock-screen interface.

Optionally, this further includes: the terminal, in response to receiving the communication message, floats the second avatar in the current interface until the dynamic effect of the second avatar finishes playing. That is, once the dynamic effect has been displayed, the second avatar is no longer shown in the interface, which avoids occupying the screen for a long time and thus avoids interfering with the user's operations.
In a possible implementation manner, the selection interface includes an option corresponding to a first prompt type and an option corresponding to a second prompt type, and the terminal sets the message prompt type based on these options. That is, in response to a trigger operation on the option corresponding to the first prompt type, the terminal sets the message prompt type to the first prompt type, in which the target avatar is floated upon receipt of a communication message but the sound element in the target avatar is not played. Alternatively, in response to a trigger operation on the option corresponding to the second prompt type, the terminal sets the message prompt type to the second prompt type, in which the target avatar is floated upon receipt of a communication message and the sound element in the target avatar is played.

In this embodiment of the application, when the message prompt type is the first prompt type and the second avatar is the target avatar, the terminal, in response to receiving a communication message, floats the second avatar in the current interface but does not play its sound element. When the message prompt type is the second prompt type and the second avatar is the target avatar, the terminal, in response to receiving a communication message, floats the second avatar in the current interface and plays its sound element.

Displaying the two options in the selection interface thus offers the user multiple message prompt types, letting the user freely configure how communication messages are prompted, which improves user stickiness.
In a possible implementation manner, the selection interface includes a do-not-disturb option, based on which the terminal can enable a do-not-disturb function; correspondingly, the terminal enables the do-not-disturb function in response to a trigger operation on the do-not-disturb option. The do-not-disturb function means: in silent mode, the target avatar is floated upon receipt of a communication message, but the sound element in the target avatar is not played. In this embodiment of the application, whether the terminal plays the sound element of the target avatar on receiving a communication message therefore depends not only on the message prompt type set by the user but also on whether the do-not-disturb function is enabled. Specifically, when the message prompt type is the first prompt type, the terminal never plays the sound element on receiving a communication message, regardless of the do-not-disturb setting. When the message prompt type is the second prompt type and the do-not-disturb function is off, the terminal plays the sound element on receiving a communication message; when the do-not-disturb function is on, the terminal first checks whether it is in silent mode, playing the sound element only if it is not.

In this embodiment of the application, when the message prompt type is the second prompt type, the second avatar is the target avatar, and the do-not-disturb function is enabled, the terminal, in response to receiving a communication message while in a non-silent mode, floats the second avatar in the current interface and plays the sound element in the second avatar.

In this embodiment of the application, the do-not-disturb option lets the user enable the do-not-disturb function as needed, so that the terminal does not play the avatar's sound element upon receiving a communication message while in silent mode, which improves the user experience.
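The resulting playback rule reduces to a single predicate; the following sketch restates it in Kotlin (names are illustrative):

```kotlin
enum class PromptType { AVATAR_ONLY, AVATAR_AND_SOUND }

// First prompt type: never play sound. Second prompt type: play sound unless
// do-not-disturb is enabled and the terminal is currently in silent mode.
fun shouldPlaySound(
    promptType: PromptType,
    doNotDisturbOn: Boolean,
    inSilentMode: Boolean
): Boolean = when (promptType) {
    PromptType.AVATAR_ONLY -> false
    PromptType.AVATAR_AND_SOUND -> !(doNotDisturbOn && inSilentMode)
}
```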
Optionally, before floating the second avatar in the current interface in response to receiving a communication message, the terminal enables the display permission of the second avatar in response to a permission-granting operation. Once the user has granted the display permission of the second avatar, the terminal can float the second avatar in the current interface whenever a communication message is received.
With continued reference to fig. 4, the selection interface further includes an option 405 corresponding to the first prompt type, an option 406 corresponding to the second prompt type, and a do-not-disturb option 407. The prompt "only set the avatar" is displayed to the right of option 405, indicating that when this option is selected, the terminal, on receiving a communication message, only displays the avatar and does not play its sound element. The prompt "set avatar and sound" is displayed to the right of option 406, indicating that when this option is selected, the terminal, on receiving a communication message, both displays the avatar and plays its sound element. The prompt "turn off animation sound while silent" is displayed to the left of the do-not-disturb option 407, indicating that when this option is selected and the terminal is in silent mode on receiving a communication message, the terminal does not play the avatar's sound element regardless of the configured message prompt type.
In a possible implementation manner, before displaying the at least one candidate avatar in the selection interface, the terminal displays an avatar setting interface that includes multiple application identifiers; in response to a selection operation on the target application identifier corresponding to a target application program, the terminal displays the selection interface corresponding to that identifier, which is used for selecting the target avatar to be floated when a communication message of the target application program is received.

In this embodiment of the application, multiple application identifiers are displayed in the avatar setting interface; when the user triggers one of them, the user enters the selection interface corresponding to that identifier and sets an avatar there for the corresponding application program. Optionally, the user sets different avatars for different application programs, and the terminal displays the avatar corresponding to an application program when it receives a communication message from that program. In this case, the floating avatar tells the user which application program the currently received communication message belongs to, improving the message prompt effect.
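The per-application binding implied here can be sketched as a simple registry (the map-based storage and the default fallback are assumptions):

```kotlin
class AvatarRegistry<A>(private val defaultAvatar: A) {
    private val byAppId = mutableMapOf<String, A>()

    // Called from the selection interface reached via an application identifier.
    fun setTargetAvatar(appId: String, avatar: A) {
        byAppId[appId] = avatar
    }

    // Called when a communication message from `appId` arrives: the floated
    // avatar identifies the source application to the user.
    fun avatarFor(appId: String): A = byAppId[appId] ?: defaultAvatar
}
```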
The embodiment of the application provides a novel message prompting manner in which messages are prompted by a floating avatar, so that the user learns through the floating avatar that the terminal has received a communication message; this manner is flexible and interesting and enhances the prompt effect. In addition, the embodiment of the application allows the user to customize the avatar: candidate avatars are provided for the user to select and further edit into a customized avatar, fully meeting the user's personalized needs.
Fig. 10 is a flowchart of an avatar processing method according to an embodiment of the present application. In this embodiment, the sound elements in the avatar can be customized. Referring to fig. 10, the embodiment includes:
1001. The terminal displays at least one candidate avatar in a selection interface, the selection interface being used for selecting a target avatar to be floated when a communication message is received.

1002. In response to an editing instruction operation on the first avatar, the terminal displays an editing interface that includes multiple editing controls, each for editing one element of the first avatar.
The steps 1001 and 1002 are the same as the steps 301 and 302, and are not described herein again.
1003. In response to a trigger operation on the sound editing control, the terminal acquires a second sound element.

This includes: the terminal, in response to the trigger operation on the sound editing control, displays an audio library including at least one audio; the terminal then acquires the second sound element based on the audio selected from the audio library. The audio library includes at least one of audio local to the terminal or audio obtained by the terminal from a server, which is not limited in this embodiment of the application.
Acquiring the second sound element based on the audio selected from the audio library includes: the terminal displays a capture control in the editing interface; based on a touch operation on the capture control, the terminal captures from the audio the audio clip indicated by the capture control; and the terminal determines the audio clip as the second sound element. Optionally, the duration of the audio clip is not greater than the target duration. Optionally, the target duration is the duration of the dynamic effect of the first avatar.
Optionally, the capture control is displayed on the audio track corresponding to the audio, and the terminal captures the audio clip using that track. That is, based on a touch operation on the capture control, the terminal determines the portion of the audio track between the two ends of the capture control and captures from the audio the clip corresponding to that portion.

The two ends of the capture control are slidable: the user slides them to adjust the portion of the audio track between them. Correspondingly, determining the audio track between the two ends of the capture control based on the touch operation includes: the terminal determines the portion of the audio track between the two ends of the capture control based on sliding operations on those ends. This lets the user select a favorite audio clip from the audio for capture, improving the flexibility with which the user customizes the avatar.
Optionally, the audio clip indicated by the capture control defaults to a target clip of the audio, for example the first 2 seconds. The user then does not need to capture the audio clip manually, which improves the efficiency of capturing the clip and thus of customizing the avatar.
Optionally, in the case that the playing duration of the audio selected from the audio library is shorter than the target duration, the capture control is not displayed in the editing interface. The length of such an audio already meets the requirement, so no capture of the audio length is needed; omitting the capture control in this case simplifies the editing interface.
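Putting these options together, clip selection for audio mirrors the video case; the following sketch uses illustrative names (the null handles represent an untouched capture control):

```kotlin
data class AudioClip(val startMs: Long, val endMs: Long)

fun selectAudioClip(
    audioDurationMs: Long,
    targetDurationMs: Long,  // e.g. the duration of the first avatar's dynamic effect
    handleStartMs: Long?,    // null when the user never slides the capture control
    handleEndMs: Long?
): AudioClip = when {
    // Audio already short enough: the capture control is hidden, keep it whole.
    audioDurationMs <= targetDurationMs -> AudioClip(0L, audioDurationMs)
    // User-chosen clip via the slidable ends of the capture control.
    handleStartMs != null && handleEndMs != null -> AudioClip(handleStartMs, handleEndMs)
    // Default target clip, e.g. the first 2 seconds.
    else -> AudioClip(0L, minOf(2_000L, targetDurationMs))
}
```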
1004. The terminal replaces the first sound element in the first avatar with the second sound element to obtain a second avatar.

After acquiring the second sound element, the terminal replaces the first sound element in the first avatar with the second sound element while keeping the other elements of the first avatar unchanged, thereby obtaining the second avatar.
It should be noted that steps 1003-1004 above describe only one implementation of editing at least one element in the first avatar, based on at least one of the multiple editing controls, to obtain the second avatar; in other embodiments, the second avatar can be obtained in other manners.
1005. The terminal displays the second avatar in the selection interface.
1006. The terminal sets the second avatar as the target avatar in response to a setting operation of the second avatar.
1007. In response to receiving a communication message, the terminal displays the second avatar as a floating element in the current interface.
The implementation of steps 1004-1007 is the same as that of steps 304-307, and is not described herein again.
In this embodiment of the application, the user can customize the sound element of the avatar from favorite audio, obtain an avatar matching the user's preference, and set it as the target avatar for message prompting, fully meeting the user's personalized needs.
Fig. 11 is a flowchart of an avatar processing method according to an embodiment of the present application. In this embodiment, the decorative elements in the avatar can be customized. Referring to fig. 11, the embodiment includes:
1101. The terminal displays at least one candidate avatar in a selection interface, the selection interface being used for selecting a target avatar to be floated when a communication message is received.

1102. In response to an editing instruction operation on the first avatar, the terminal displays an editing interface that includes multiple editing controls, each for editing one element of the first avatar.
The steps 1101-1102 are the same as the steps 301-302, and are not described herein again.
1103. In response to a trigger operation on the decoration editing control, the terminal acquires a second decoration element.

In a possible implementation manner, this includes: the terminal, in response to the trigger operation on the decoration editing control, displays a decoration library including at least one decoration element; the terminal then determines the decoration element selected from the decoration library as the second decoration element. The decoration library includes at least one of decoration elements local to the terminal or decoration elements obtained by the terminal from a server, which is not limited in this embodiment of the application.
1104. The terminal replaces the first decoration element in the first avatar with the second decoration element to obtain a second avatar.

After acquiring the second decoration element, the terminal replaces the first decoration element in the first avatar with the second decoration element while keeping the other elements of the first avatar unchanged, thereby obtaining the second avatar.
It should be noted that steps 1103-1104 above describe only one implementation of editing at least one element in the first avatar, based on at least one of the multiple editing controls, to obtain the second avatar; in other embodiments, the second avatar can be obtained in other manners.
1105. The terminal displays the second avatar in the selection interface.
1106. The terminal sets the second avatar as the target avatar in response to a setting operation of the second avatar.
1107. In response to receiving a communication message, the terminal displays the second avatar as a floating element in the current interface.
The implementation of steps 1104-1107 is the same as that of steps 304-307, and is not described herein again.
In this embodiment of the application, the user can customize the decoration element of the avatar from favorite decoration elements, obtain an avatar matching the user's preference, and set it as the target avatar for message prompting, fully meeting the user's personalized needs.
It should be noted that the above embodiments can be combined arbitrarily; for example, when editing the first avatar, the terminal may edit not only the first body element but also the first sound element of the first avatar. The above embodiments can of course be combined in other manners as well, which is not limited in this application.
Fig. 12 is a block diagram of an avatar processing apparatus according to an embodiment of the present application. Referring to fig. 12, the apparatus includes:
a first display module 1201 configured to display at least one candidate avatar in a selection interface, where the selection interface is used to select a target avatar to be displayed in a floating manner when a communication message is received;
an avatar editing module 1202 configured to obtain a second avatar based on an editing operation on the displayed first avatar;
a second display module 1203 configured to display a second avatar in the selection interface;
an avatar setting module 1204 configured to set the second avatar as the target avatar in response to a setting operation on the second avatar.
In one possible implementation, the first avatar includes a first body element, a first decorative element, and a first sound element, the avatar editing module 1202 configured to perform at least one of:
responding to the main body editing operation of the first virtual image, editing the first main body element to obtain a second virtual image;
responding to the decoration editing operation of the first virtual image, editing the first decoration element to obtain a second virtual image;
and responding to the sound editing operation of the first virtual image, editing the first sound element to obtain a second virtual image.
In one possible implementation, the first avatar includes a plurality of elements, and the avatar editing module 1202 includes:
the control display sub-module is configured to respond to an editing instruction operation on the first virtual image and display an editing interface, and the editing interface comprises a plurality of editing controls, and each editing control is used for editing one element in the first virtual image;
and the image editing sub-module is configured to edit at least one element in the first avatar based on at least one editing control in the multiple editing controls to obtain a second avatar.
In one possible implementation, the first avatar includes a first body element, and the avatar editing sub-module includes:
a main body element obtaining unit configured to obtain a second main body element in response to a trigger operation on a main body editing control;
and the main body element replacing unit is configured to replace the first main body element in the first avatar with the second main body element to obtain a second avatar.
In one possible implementation manner, the subject element obtaining unit includes:
the first acquiring subunit is configured to, in response to a trigger operation on the main body editing control, display an image library including at least one image, and acquire a second main body element based on an image selected from the image library; or,
and the second acquiring subunit is configured to respond to the triggering operation of the main body editing control, display a video library, wherein the video library comprises at least one video, and acquire a second main body element based on the video selected from the video library.
In one possible implementation, the first obtaining subunit is configured to display an image editing interface, where the image editing interface includes a mask, the mask includes a transparent area and an opaque area, the shape and size of the transparent area are the same as those of the first main body element in the first avatar, and the mask covers the image; capture a target image based on the transparent area, the target image being composed of the image area of the image that lies within the transparent area; and determine the target image as the second main body element.

In one possible implementation manner, the second obtaining subunit is configured to display a video editing interface, where the video editing interface includes a capture control and a mask, the mask includes a transparent area and an opaque area, the shape and size of the transparent area are the same as those of the first main body element in the first avatar, and the mask covers the video; capture a video clip based on the time period indicated by the capture control and the transparent area, the video clip being composed of the image areas of the video that lie within the transparent area during the time period; and determine the video clip as the second main body element.

In a possible implementation manner, the second obtaining subunit is further configured to determine, based on the touch operation on the capture control, the time period indicated by the capture control.
In one possible implementation, the first avatar includes a first sound element, and the avatar editing sub-module includes:
a sound element acquisition unit configured to acquire a second sound element in response to a trigger operation on the sound editing control;
a sound element replacing unit configured to replace a first sound element in the first avatar with a second sound element, resulting in a second avatar.
In one possible implementation, the sound element obtaining unit includes:
the display subunit is configured to respond to the triggering operation of the sound editing control and display an audio library, and the audio library comprises at least one audio;
a third obtaining subunit configured to obtain the second sound element based on the audio selected from the audio library.
In a possible implementation manner, the third obtaining subunit is configured to display a capture control in the editing interface; capture, from the audio, the audio clip indicated by the capture control based on a touch operation on the capture control; and determine the audio clip as the second sound element.
In one possible implementation, the control display sub-module is further configured to display an audio enable option in the editing interface if the video has corresponding background audio, where the audio enable option is used to determine the background audio corresponding to the video as a sound element of the second avatar.
In one possible implementation, the character editing module 1202 further includes:
an effect display sub-module configured to display in the editing interface: a display effect of the second avatar upon receipt of the communication message.
In one possible implementation manner, the selection interface includes a first display area and a second display area, the first display area displaying the at least one candidate avatar, and the second display area displaying the effect of the selected avatar floating on the desktop.
In one possible implementation, the second display module 1203 is configured to display the second avatar in the selected state in the first display area;
an avatar setting module 1204 configured to set the second avatar as the target avatar in response to a confirmation operation of the selected second avatar in the first display region.
In one possible implementation, the selected avatar includes a sound element, the apparatus further comprising:
and the audio playing module is configured to play the sound elements of the selected avatar in the process of displaying the effect of the selected avatar in the second display area.
In one possible implementation, the apparatus further includes:
a third display module configured to hover display the second avatar in the current interface in response to receiving the communication message.
In a possible implementation manner, the selection interface includes an option corresponding to the first prompt type and an option corresponding to the second prompt type, and the apparatus further includes:
the first determining module is configured to determine, in response to a trigger operation on the option corresponding to the first prompt type, that the message prompt type is the first prompt type, where the first prompt type is: the target avatar is floated upon receipt of a communication message, but the sound element in the target avatar is not played; or,
a second determining module, configured to determine, in response to a trigger operation on an option corresponding to a second prompt type, that the message prompt type is the second prompt type, where the second prompt type is: displaying the target avatar in suspension upon receipt of the communication message, and playing the sound elements in the target avatar.
In one possible implementation, the selection interface includes a do-not-disturb option, and the apparatus further includes: a function starting module configured to start a do-not-disturb function in response to a trigger operation on the do-not-disturb option, where the do-not-disturb function is: in the silent mode, the target avatar is displayed in suspension upon receipt of the communication message, but the sound elements in the target avatar are not played.
In one possible implementation, the first display module 1201 is further configured to display an avatar setting interface, where the avatar setting interface includes a plurality of application identifiers; and responding to the selection operation of the target application identifier corresponding to the target application program, and displaying a selection interface corresponding to the target application identifier, wherein the selection interface is used for selecting the target virtual image which is displayed in a floating manner when the communication message of the target application program is received.
The embodiment of the application provides a novel message prompting manner in which messages are prompted by a floating avatar, so that the user learns through the floating avatar that the terminal has received a communication message; this manner is flexible and interesting and enhances the prompt effect. In addition, the embodiment of the application allows the user to customize the avatar: candidate avatars are provided for the user to select and further edit into a customized avatar, fully meeting the user's personalized needs.
It should be noted that: the avatar processing apparatus provided in the above embodiment is only illustrated by the division of the above functional modules when processing an avatar, and in practical applications, the above function allocation may be completed by different functional modules according to needs, that is, the internal structure of the terminal is divided into different functional modules to complete all or part of the above described functions. In addition, the avatar processing apparatus provided in the above embodiments and the avatar processing method embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments, and are not described herein again.
Fig. 13 shows a block diagram of a terminal 1300 according to an exemplary embodiment of the present application. The terminal 1300 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 1300 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, terminal 1300 includes: a processor 1301 and a memory 1302.
Processor 1301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 1301 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1301 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, processor 1301 may further include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
Memory 1302 may include one or more computer-readable storage media, which may be non-transitory. The memory 1302 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1302 is used to store at least one program code for execution by processor 1301 to implement the avatar processing method provided by method embodiments herein.
In some embodiments, terminal 1300 may further optionally include: a peripheral interface 1303 and at least one peripheral. Processor 1301, memory 1302, and peripheral interface 1303 may be connected by a bus or signal line. Each peripheral device may be connected to the peripheral device interface 1303 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1304, display screen 1305, camera assembly 1306, audio circuitry 1307, positioning assembly 1308, and power supply 1309.
Peripheral interface 1303 may be used to connect at least one peripheral associated with I/O (Input/Output) to processor 1301 and memory 1302. In some embodiments, processor 1301, memory 1302, and peripheral interface 1303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1301, the memory 1302, and the peripheral device interface 1303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1304 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1304 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1304 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1304 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 1304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1304 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1305 is a touch display screen, the display screen 1305 also has the ability to capture touch signals on or over the surface of the display screen 1305. The touch signal may be input to the processor 1301 as a control signal for processing. At this point, the display 1305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display 1305 may be one, providing the front panel of terminal 1300; in other embodiments, display 1305 may be at least two, either on different surfaces of terminal 1300 or in a folded design; in other embodiments, display 1305 may be a flexible display disposed on a curved surface or on a folded surface of terminal 1300. Even further, the display 1305 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display 1305 may be made of a material such as an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), and the like.
The camera assembly 1306 is used to capture images or video. Optionally, camera assembly 1306 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1306 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1307 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1301 for processing, or inputting the electric signals to the radio frequency circuit 1304 for realizing voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of terminal 1300. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1301 or the radio frequency circuitry 1304 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 1307 may also include a headphone jack.
The positioning component 1308 is used for positioning the current geographic location of the terminal 1300 to implement navigation or LBS (Location Based Service). The positioning component 1308 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 1309 is used to provide power to various components in terminal 1300. The power source 1309 may be alternating current, direct current, disposable or rechargeable. When the power source 1309 comprises a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1300 also includes one or more sensors 1310. The one or more sensors 1310 include, but are not limited to: acceleration sensor 1311, gyro sensor 1312, pressure sensor 1313, fingerprint sensor 1314, optical sensor 1315, and proximity sensor 1316.
The acceleration sensor 1311 can detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 1300. For example, the acceleration sensor 1311 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1301 may control the display screen 1305 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1311. The acceleration sensor 1311 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1312 may detect the body direction and the rotation angle of the terminal 1300, and the gyro sensor 1312 may cooperate with the acceleration sensor 1311 to acquire a 3D motion of the user with respect to the terminal 1300. Processor 1301, based on the data collected by gyroscope sensor 1312, may perform the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensor 1313 may be disposed on a side bezel of terminal 1300 and/or underlying display 1305. When the pressure sensor 1313 is disposed on the side frame of the terminal 1300, a user's holding signal to the terminal 1300 may be detected, and the processor 1301 performs left-right hand recognition or shortcut operation according to the holding signal acquired by the pressure sensor 1313. When the pressure sensor 1313 is disposed at a lower layer of the display screen 1305, the processor 1301 controls an operability control on the UI interface according to a pressure operation of the user on the display screen 1305. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1314 is used for collecting the fingerprint of the user, and the processor 1301 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 1314, or the fingerprint sensor 1314 identifies the identity of the user according to the collected fingerprint. When the identity of the user is identified as a trusted identity, the processor 1301 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 1314 may be disposed on the front, back, or side of the terminal 1300. When a physical button or vendor Logo is provided on the terminal 1300, the fingerprint sensor 1314 may be integrated with the physical button or vendor Logo.
The optical sensor 1315 is used to collect the ambient light intensity. In one embodiment, the processor 1301 may control the display brightness of the display screen 1305 according to the ambient light intensity collected by the optical sensor 1315. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1305 is increased; when the ambient light intensity is low, the display brightness of the display screen 1305 is reduced. In another embodiment, the processor 1301 can also dynamically adjust the shooting parameters of the camera assembly 1306 according to the ambient light intensity collected by the optical sensor 1315.
Proximity sensor 1316, also known as a distance sensor, is typically disposed on the front panel of terminal 1300 and is used to measure the distance between the user and the front face of the terminal. In one embodiment, when the proximity sensor 1316 detects that this distance is gradually decreasing, the processor 1301 controls the display 1305 to switch from the bright screen state to the dark screen state; when the proximity sensor 1316 detects that the distance is gradually increasing, the processor 1301 controls the display 1305 to switch from the dark screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 13 is not intended to be limiting with respect to terminal 1300 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
The embodiment of the present application further provides a computer-readable storage medium, where at least one instruction is stored in the computer-readable storage medium, and the at least one instruction is loaded and executed by a processor to implement the operations executed in the avatar processing method of the above embodiment.
Embodiments of the present application also provide a computer program product or a computer program comprising computer instructions stored in a computer-readable storage medium. The computer instructions are loaded and executed by the processor to implement the operations performed in the avatar processing method of the above-described embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (16)

1. An avatar processing method, the method comprising:
displaying at least one alternative avatar in a selection interface, wherein the selection interface is used for selecting a target avatar to be displayed in a floating manner when a communication message is received;

obtaining a second avatar based on an editing operation on the displayed first avatar;

displaying the second avatar in the selection interface;
setting the second avatar as the target avatar in response to a setting operation of the second avatar.
2. The method of claim 1, wherein the first avatar includes a first body element, a first decoration element, and a first sound element, and wherein the obtaining of the second avatar based on the editing operation of the displayed first avatar includes at least one of:
responding to the main body editing operation of the first virtual image, editing the first main body element to obtain a second virtual image;
responding to the decoration editing operation of the first virtual image, editing the first decoration element to obtain a second virtual image;
and responding to the sound editing operation of the first virtual image, editing the first sound element to obtain the second virtual image.
3. The method of claim 1, wherein the first avatar includes a plurality of elements, and wherein obtaining the second avatar based on an editing operation on the displayed first avatar comprises:
responding to an editing instruction operation of the first virtual image, and displaying an editing interface, wherein the editing interface comprises a plurality of editing controls, and each editing control is used for editing one element in the first virtual image;
and editing at least one element in the first virtual image based on at least one editing control in the multiple editing controls to obtain the second virtual image.
4. The method of claim 3, wherein the first avatar includes a first body element, and wherein editing at least one element of the first avatar based on at least one of the plurality of editing controls to obtain the second avatar comprises:
responding to the triggering operation of the main body editing control, and acquiring a second main body element;
and replacing the first main body element in the first virtual image with the second main body element to obtain the second virtual image.
5. The method of claim 4, wherein acquiring the second body element in response to the trigger operation on the body editing control comprises:
in response to the trigger operation on the body editing control, displaying an image library comprising at least one image, and acquiring the second body element based on an image selected from the image library; or
in response to the trigger operation on the body editing control, displaying a video library comprising at least one video, and acquiring the second body element based on a video selected from the video library.
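A sketch of claims 4-5, again reusing the hypothetical types above: the body editing control opens either library, and the selected item supplies the second body element. MediaItem, ImageItem, and VideoItem are invented names.

```kotlin
// The body editing control opens either an image library or a video
// library; the selected item supplies the second body element.
sealed interface MediaItem { val path: String }
data class ImageItem(override val path: String) : MediaItem
data class VideoItem(override val path: String, val backgroundAudio: String? = null) : MediaItem

fun pickBodyElement(library: List<MediaItem>, selectedIndex: Int): BodyElement =
    BodyElement(library[selectedIndex].path)   // chosen image or video becomes the body
```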
6. The method of claim 3, wherein the first avatar comprises a first sound element, and wherein editing at least one element of the first avatar based on at least one of the plurality of editing controls to obtain the second avatar comprises:
in response to a trigger operation on a sound editing control, acquiring a second sound element; and
replacing the first sound element in the first avatar with the second sound element to obtain the second avatar.
7. The method of claim 5, wherein after acquiring the second body element based on the video selected from the video library, the method further comprises:
in a case that the video has corresponding background audio, displaying an audio enable option in the editing interface, wherein the audio enable option is used for determining the background audio of the video as the sound element of the second avatar.
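Claims 6-7 together can be pictured as follows, continuing the same hypothetical types: the sound control swaps the sound element, and the audio enable option only exists when the selected video actually carries background audio.

```kotlin
// Replacing the sound element (claim 6), and surfacing the video's
// background audio as an optional sound element (claim 7): the option
// exists only when the selected video actually carries audio.
fun replaceSound(avatar: ElementAvatar, clip: String): ElementAvatar =
    avatar.editSound(SoundElement(clip))

fun audioEnableOption(video: VideoItem): SoundElement? =
    video.backgroundAudio?.let { SoundElement(it) }   // null -> option not displayed
```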
8. The method of claim 3, wherein after editing at least one element of the first avatar based on at least one of the plurality of editing controls to obtain the second avatar, the method further comprises:
displaying, in the editing interface, a display effect of the second avatar when the communication message is received.
9. The method of claim 1, wherein the selection interface comprises a first display area and a second display area, the first display area displaying the at least one candidate avatar, and the second display area displaying an effect of a selected avatar being displayed in a floating manner on the desktop.
10. The method of claim 9, wherein displaying the second avatar in the selection interface comprises:
displaying the second avatar in a selected state in the first display area; and
wherein setting the second avatar as the target avatar in response to the setting operation on the second avatar comprises:
setting the second avatar as the target avatar in response to a confirmation operation on the selected second avatar in the first display area.
11. The method of claim 9, wherein the selected avatar comprises a sound element, and the method further comprises:
playing the sound element of the selected avatar while displaying the effect of the selected avatar in the second display area.
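Claims 8-11 describe a preview: one area lists candidates, the other simulates the floating display and plays the selected avatar's sound. A minimal sketch, assuming the same invented types; PreviewArea and its methods are illustrative only.

```kotlin
// A two-area selection interface: the first area lists candidates in a
// selectable state, the second previews the floating display effect and
// plays the selected avatar's sound element during the preview.
class PreviewArea {
    fun showFloatingEffect(avatar: ElementAvatar) =
        println("floating preview: body=${avatar.body.source}, decoration=${avatar.decoration.style}")

    fun playSound(avatar: ElementAvatar) =
        println("playing sound element: ${avatar.sound.clip}")
}
```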
12. The method according to any one of claims 1-11, wherein after setting the second avatar as the target avatar in response to the setting operation on the second avatar, the method further comprises:
displaying the second avatar in a floating manner in a current interface in response to receiving the communication message.
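The runtime behavior of claim 12 amounts to a message-triggered display, sketched here with an invented FloatingNotifier; the application does not specify how the message listener is wired.

```kotlin
// Once a target avatar is set, an incoming communication message
// triggers its floating display over whatever interface is current.
class FloatingNotifier(private val target: ElementAvatar) {
    fun onCommunicationMessage(sender: String) {
        println("floating ${target.body.source} over the current interface (message from $sender)")
    }
}
```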
13. The method according to any one of claims 1-11, wherein before displaying the at least one candidate avatar in the selection interface, the method further comprises:
displaying an avatar setting interface, wherein the avatar setting interface comprises a plurality of application identifiers; and
in response to a selection operation on a target application identifier corresponding to a target application program, displaying the selection interface corresponding to the target application identifier, wherein the selection interface is used for selecting the target avatar to be displayed in a floating manner when a communication message of the target application program is received.
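Claim 13 implies a per-application binding: each application identifier maps to its own target avatar. A sketch under that assumption, with AvatarSettings and the string appId invented for illustration.

```kotlin
// An avatar setting interface keyed by application identifier: each
// application program can bind its own target avatar, so only messages
// from that application trigger that avatar's floating display.
class AvatarSettings {
    private val targetsByApp = mutableMapOf<String, ElementAvatar>()

    fun setTargetFor(appId: String, avatar: ElementAvatar) {
        targetsByApp[appId] = avatar
    }

    fun targetFor(appId: String): ElementAvatar? = targetsByApp[appId]
}
```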
14. An avatar processing apparatus, the apparatus comprising:
a first display module configured to display at least one candidate avatar in a selection interface, wherein the selection interface is used for selecting a target avatar to be displayed in a floating manner when a communication message is received;
an avatar editing module configured to obtain a second avatar based on an editing operation on a displayed first avatar;
a second display module configured to display the second avatar in the selection interface; and
an avatar setting module configured to set the second avatar as the target avatar in response to a setting operation on the second avatar.
15. A terminal, characterized in that the terminal comprises a processor and a memory, wherein the memory stores at least one instruction that is loaded and executed by the processor to implement the operations performed by the avatar processing method according to any one of claims 1 to 13.
16. A computer-readable storage medium, wherein at least one instruction is stored in the storage medium, and the instruction is loaded and executed by a processor to implement the operations performed by the avatar processing method according to any one of claims 1 to 13.
CN202110736829.8A 2021-06-30 2021-06-30 Virtual image processing method, device, terminal and storage medium Pending CN113325983A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110736829.8A CN113325983A (en) 2021-06-30 2021-06-30 Virtual image processing method, device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110736829.8A CN113325983A (en) 2021-06-30 2021-06-30 Virtual image processing method, device, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN113325983A 2021-08-31

Family

ID=77423651

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110736829.8A Pending CN113325983A (en) 2021-06-30 2021-06-30 Virtual image processing method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN113325983A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060059107A1 (en) * 2000-03-30 2006-03-16 Kevin Elmore System and method for establishing electronic business systems for supporting communications services commerce
CN1344084A (en) * 2000-09-12 2002-04-10 松下电器产业株式会社 Media editing method and device thereof
CN106817349A (en) * 2015-11-30 2017-06-09 厦门幻世网络科技有限公司 A kind of method and device for making communication interface produce animation effect in communication process
CN108008887A (en) * 2017-11-29 2018-05-08 维沃移动通信有限公司 A kind of method for displaying image and mobile terminal
CN111857897A (en) * 2019-04-25 2020-10-30 北京小米移动软件有限公司 Information display method and device and storage medium
CN111884913A (en) * 2020-07-23 2020-11-03 广州酷狗计算机科技有限公司 Message prompting method, device, terminal and storage medium
CN112764869A (en) * 2021-01-29 2021-05-07 北京达佳互联信息技术有限公司 Work editing prompting method and device, electronic equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113805701A (en) * 2021-09-16 2021-12-17 北京百度网讯科技有限公司 Method for determining virtual image display range, electronic device and storage medium
CN113900751A (en) * 2021-09-29 2022-01-07 平安普惠企业管理有限公司 Method, device, server and storage medium for synthesizing virtual image
CN115314728A (en) * 2022-07-29 2022-11-08 北京达佳互联信息技术有限公司 Information display method, system, device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110336960B (en) Video synthesis method, device, terminal and storage medium
CN109982102B (en) Interface display method and system for live broadcast room, live broadcast server and anchor terminal
CN110971930A (en) Live virtual image broadcasting method, device, terminal and storage medium
CN110300274B (en) Video file recording method, device and storage medium
CN108897597B (en) Method and device for guiding configuration of live broadcast template
CN113325983A (en) Virtual image processing method, device, terminal and storage medium
CN110740340B (en) Video live broadcast method and device and storage medium
CN110602321A (en) Application program switching method and device, electronic device and storage medium
CN110533585B (en) Image face changing method, device, system, equipment and storage medium
CN108900925B (en) Method and device for setting live broadcast template
CN112565911B (en) Bullet screen display method, bullet screen generation device, bullet screen equipment and storage medium
CN111741366A (en) Audio playing method, device, terminal and storage medium
CN114116053A (en) Resource display method and device, computer equipment and medium
CN110662105A (en) Animation file generation method and device and storage medium
CN111565338A (en) Method, device, system, equipment and storage medium for playing video
CN109800003B (en) Application downloading method, device, terminal and storage medium
CN110769120A (en) Method, device, equipment and storage medium for message reminding
CN112822544B (en) Video material file generation method, video synthesis method, device and medium
CN111954058B (en) Image processing method, device, electronic equipment and storage medium
CN112118353A (en) Information display method, device, terminal and computer readable storage medium
CN112616082A (en) Video preview method, device, terminal and storage medium
CN112419143A (en) Image processing method, special effect parameter setting method, device, equipment and medium
CN111884913B (en) Message prompting method, device, terminal and storage medium
CN108228052B (en) Method and device for triggering operation of interface component, storage medium and terminal
CN108966026B (en) Method and device for making video file

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination