CN115220613A - Event prompt processing method, device, equipment and medium - Google Patents

Info

Publication number
CN115220613A
CN115220613A (application number CN202110412161.1A)
Authority
CN
China
Prior art keywords
target
event
virtual image
action
prompt
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110412161.1A
Other languages
Chinese (zh)
Inventor
郭畅
杨健婷
张怀
黄自力
龙辉
毛葭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110412161.1A priority Critical patent/CN115220613A/en
Publication of CN115220613A publication Critical patent/CN115220613A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/52Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308Details of the user interface

Abstract

The embodiments of this application disclose an event prompt processing method, apparatus, device, and medium. The method includes: displaying an information service interface; displaying a target avatar in the information service interface; and, in response to acquiring a target prompt event, controlling the target avatar to perform an event reminder action corresponding to the target prompt event. The method and apparatus guide the user's attention to the target prompt event and thereby increase the attention the user pays to it.

Description

Event prompt processing method, device, equipment and medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an event prompt processing method, an event prompt processing apparatus, an event prompt processing device, and a computer-readable storage medium.
Background
During use of a terminal, content often appears that requires the user's attention; for example, while using any application running on the terminal, a new message may arrive that the user should notice. Practice shows that current methods of prompting the user about such a new message are direct and monotonous, for example: the new message is simply displayed in the interface to prompt the user to notice it. Such prompting interacts only weakly with the user, so the user pays little attention to the new message and can easily miss it.
Disclosure of Invention
The embodiments of this application provide an event prompt processing method, apparatus, device, and medium that can guide a user to pay attention to a target prompt event and improve the user's attention to it.
In one aspect, an embodiment of the present application provides an event prompt processing method, where the method includes:
displaying an information service interface;
displaying a target avatar in the information service interface;
and in response to acquiring a target prompt event, controlling the target avatar to perform an event reminder action corresponding to the target prompt event.
In another aspect, an embodiment of the present application provides an event prompt processing apparatus, including:
a display unit, configured to display an information service interface;
the display unit being further configured to display a target avatar in the information service interface;
and a processing unit, configured to, in response to acquiring a target prompt event, control the target avatar to perform an event reminder action corresponding to the target prompt event.
In one implementation, when controlling the target avatar to perform the event reminder action corresponding to the target prompt event, the processing unit is specifically configured to:
control the target avatar to adjust from a current posture to a target posture, so that the target avatar performs the event reminder action corresponding to the target prompt event;
where the current posture is the posture the target avatar is in when the target prompt event is acquired, and the target posture is a posture determined according to the target prompt event.
In one implementation, the target prompt event includes an event in which a new prompt element appears in the information service interface; when controlling the target avatar to adjust from the current posture to the target posture, the processing unit is specifically configured to:
acquire a first position coordinate of the new prompt element in the information service interface and a second position coordinate of the target avatar in the information service interface;
calculate a target orientation relation between the target avatar and the new prompt element according to the first position coordinate and the second position coordinate, the target orientation relation indicating that the new prompt element is located in a target direction relative to the target avatar;
and control the target avatar to adjust from the current posture to the target posture according to the target orientation relation, the target posture including a posture in which the orientation of one or more body parts of the target avatar matches the target direction.
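The orientation calculation described above can be sketched as follows; this is an illustrative reconstruction, not the patent's actual implementation, and the function and coordinate conventions (screen space with y growing downward) are assumptions.

```python
def target_orientation(avatar_xy, element_xy):
    """Classify the direction of a new prompt element relative to the avatar.

    avatar_xy, element_xy: (x, y) screen coordinates of the avatar and the
    new prompt element; the y axis is assumed to grow downward, as is
    conventional for screen space.
    """
    dx = element_xy[0] - avatar_xy[0]
    dy = element_xy[1] - avatar_xy[1]
    horizontal = "right" if dx > 0 else "left" if dx < 0 else ""
    vertical = "below" if dy > 0 else "above" if dy < 0 else ""
    return " ".join(p for p in (vertical, horizontal) if p) or "same position"
```

The returned direction would then drive the posture adjustment, e.g. turning the avatar's head and arm toward `"above right"`.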
In one implementation, the target prompt event includes an event in which a new prompt element appears in the information service interface; when controlling the target avatar to perform the event reminder action corresponding to the target prompt event, the processing unit is specifically configured to:
move the target avatar from a current position to a target position, so that the target avatar performs the event reminder action corresponding to the target prompt event;
where the current position is the position the target avatar is at when the target prompt event is acquired, and the target position is the position of the new prompt element in the information service interface.
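The movement from the current position to the new prompt element's position could be animated by interpolating intermediate positions; the sketch below is a hypothetical linear interpolation, not taken from the patent.

```python
def move_path(current, target, steps=10):
    """Return intermediate (x, y) positions for moving the avatar from its
    current position to the position of the new prompt element."""
    (x0, y0), (x1, y1) = current, target
    return [
        (x0 + (x1 - x0) * i / steps, y0 + (y1 - y0) * i / steps)
        for i in range(1, steps + 1)
    ]
```

Rendering the avatar at each returned point in turn produces the reminder movement, with the final point coinciding with the prompt element.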
In one implementation, the processing unit is further configured to:
output an event detail interface of the target prompt event if the target avatar is triggered while it is performing the event reminder action, or within a preset time period after it has performed the event reminder action.
In one implementation, the processing unit is further configured to:
output an event identification list if the target avatar is detected to be triggered while it is not performing an event reminder action; the event identification list includes event identifications of one or more historical prompt events, a historical prompt event being a prompt event acquired before the target avatar was triggered;
and when any event identification in the event identification list is selected, display an event detail interface of the historical prompt event indicated by the selected event identification.
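The history-list behaviour above can be sketched with a minimal container; the class and method names are illustrative assumptions, not the patent's API.

```python
class PromptHistory:
    """Accumulates prompt events; when the avatar is triggered while idle,
    the list of their identifications can be shown for the user to pick
    from, and the chosen event's details are then displayed."""

    def __init__(self):
        self._events = {}  # event identification -> event details

    def record(self, event_id, details):
        self._events[event_id] = details

    def event_identifications(self):
        # dicts preserve insertion order, so this is acquisition order
        return list(self._events)

    def details(self, event_id):
        return self._events[event_id]
```

Selecting an identification from `event_identifications()` corresponds to opening the event detail interface via `details()`.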
In one implementation, the processing unit is further configured to:
delete the target avatar from the information service interface after the target avatar has finished performing the event reminder action;
and display the target avatar in the information service interface again when a new prompt event is acquired.
In one implementation, the processing unit is further configured to:
acquire the current mood state of a target user when the target prompt event is acquired;
and control the expression of the target avatar to change from a current expression to a target expression associated with the current mood state;
where the current expression is the expression presented by the target avatar when the target prompt event is acquired.
In one implementation, when acquiring the current mood state of the target user, the processing unit is specifically configured to:
call a camera component to capture a face image of the target user, perform expression recognition on the face image to obtain the facial expression of the target user, and predict the current mood state of the target user based on the facial expression;
or acquire historical behavior data of the target user, the historical behavior data including any one or more of audio and video playback data, text editing data, and social data, and perform emotion recognition on the target user according to the historical behavior data to obtain the current mood state of the target user.
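The final mapping step of the first route, from a recognized facial expression to a mood state, might look like the sketch below. The mapping table is invented for illustration; a real system would feed the camera image through an expression-recognition model first.

```python
# Illustrative expression-to-mood table; the categories are assumptions,
# not values specified by the patent.
EXPRESSION_TO_MOOD = {
    "smile": "happy",
    "laugh": "happy",
    "frown": "unhappy",
    "neutral": "calm",
}

def predict_mood(facial_expression):
    """Map a recognized facial expression to a coarse mood state,
    falling back to 'calm' for unrecognized expressions."""
    return EXPRESSION_TO_MOOD.get(facial_expression, "calm")
```

The resulting mood state would then select the target expression animation played on the avatar's face.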
In one implementation, the processing unit is further configured to:
acquire display history information of the information service interface, the display history information including the historical trigger times at which the information service interface was triggered for display within a preset time period;
acquire the target trigger time at which the information service interface was last triggered for display before the target avatar is displayed;
and if the display history information contains only the target trigger time, determine that the information service interface is being displayed on the terminal screen for the first time within the preset time period, and control the target avatar to perform an interactive action.
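The first-display check above reduces to comparing the in-period trigger history against the latest trigger time; the sketch below is an assumed formulation with illustrative names.

```python
def is_first_display(history_times, target_time, period_start):
    """True if, within the preset period starting at period_start, the only
    recorded trigger of the information service interface is the latest one
    (target_time) before the avatar is shown."""
    in_period = [t for t in history_times if t >= period_start]
    return in_period == [target_time]
```

When the function returns `True`, the avatar would be controlled to perform its greeting-style interactive action.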
In one implementation, the processing unit is further configured to:
acquire, if any interface element in the information service interface is triggered, the orientation relation between that interface element and the target avatar;
and control the target avatar to perform a response action associated with that interface element according to the acquired orientation relation.
In one implementation, the processing unit is further configured to:
display, while the target avatar performs any action, action description information about that action in the display area where the target avatar is located, the action description information describing the purpose of performing the action;
where the action includes an event reminder action, an interactive action, or a response action.
In one implementation, the processing unit is further configured to:
output target speech audio while the target avatar performs any action, the target speech audio being generated from the action description information of that action;
where the action includes an event reminder action, an interactive action, or a response action.
In one implementation, the processing unit is further configured to:
display an avatar setting interface that includes a reference avatar and one or more candidate skin resources;
when a target skin resource is selected from the one or more candidate skin resources, update the displayed reference avatar with the target skin resource in the avatar setting interface;
and if a confirmation operation for the updated reference avatar is detected, take the updated reference avatar as the target avatar.
In one implementation, the target avatar is in a reference pose when it appears in the information service interface, and is in a target pose after it has performed the event reminder action; the processing unit is further configured to:
count the duration for which the target avatar remains in the target pose;
and if the duration is greater than a duration threshold, control the target avatar to return from the target pose to the reference pose.
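The pose-reset rule above is a simple threshold comparison; the sketch below states it directly, with hypothetical names.

```python
def pose_after(duration_in_target_pose, duration_threshold):
    """Once the avatar has held the target pose longer than the threshold,
    it is restored to the reference pose it had when first displayed;
    otherwise it remains in the target pose."""
    if duration_in_target_pose > duration_threshold:
        return "reference"
    return "target"
```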
In one implementation, the target prompt event includes an event in which a new prompt element appears in the information service interface; the processing unit is further configured to:
play a prompt animation of the new prompt element while the target avatar performs the event reminder action;
where the prompt animation includes at least one of: an animation that moves the new prompt element from a first position to a second position; and an animation that performs a target operation on the new prompt element, the target operation including any one or more of a vibration operation, a scaling operation, and a rotation operation.
In one implementation, the information service interface is an interface in a target application for a target game, and the processing unit is further configured to:
output, while the target avatar is displayed, a game invitation notification if a game invitation trigger event exists, the notification indicating that the target avatar invites the target user to participate in the target game;
and output a game screen of the target game if the game invitation notification is triggered.
In another aspect, an embodiment of the present application provides an event prompt processing device, the device including:
a processor adapted to execute a computer program;
and a computer-readable storage medium storing a computer program that, when executed by the processor, implements the event prompt processing method described above.
In another aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program adapted to be loaded by a processor to execute the event prompt processing method described above.
In another aspect, an embodiment of the present application provides a computer program product or computer program that includes computer instructions stored in a computer-readable storage medium. A processor of a terminal reads the computer instructions from the computer-readable storage medium and executes them, causing the terminal to execute the event prompt processing method described above.
In the embodiments of this application, when the information service interface is displayed on the terminal screen, a target avatar can be displayed in it, enriching the element types in the interface and making interface browsing more interesting. When a target prompt event is acquired, the target avatar can further be controlled to perform an event reminder action corresponding to that event. The interaction between the target avatar and the target user strengthens the user's sense of identification with the avatar, and the avatar guides the user's attention to the target prompt event, so that the user notices it in time. This effectively improves the timeliness of the prompt, prevents the target prompt event from being missed, and increases the attention it receives.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1a is a schematic diagram of the architecture of an event prompt processing system provided by an exemplary embodiment of the present application;
FIG. 1b is a schematic diagram of a 3D character model provided by an exemplary embodiment of the present application;
FIG. 2 is a schematic flowchart of an event prompt processing method provided by an exemplary embodiment of the present application;
FIG. 3a is a schematic diagram of displaying a target avatar in an information service interface provided by an exemplary embodiment of the present application;
FIG. 3b is a schematic diagram of setting the display position of a target avatar in an information service interface provided by an exemplary embodiment of the present application;
FIG. 3c is a schematic diagram of another way of displaying a target avatar in an information service interface provided by an exemplary embodiment of the present application;
FIG. 3d is a schematic diagram of controlling a target avatar to perform an event reminder action provided by an exemplary embodiment of the present application;
FIG. 3e is a schematic diagram of controlling a target avatar to adjust from a current pose to a target pose provided by an exemplary embodiment of the present application;
FIG. 3f is a schematic diagram of triggering a target prompt event provided by an exemplary embodiment of the present application;
FIG. 3g is a schematic diagram of another way of controlling a target avatar to perform an event reminder action provided by an exemplary embodiment of the present application;
FIG. 3h is a schematic diagram of another way of controlling a target avatar to adjust from a current pose to a target pose provided by an exemplary embodiment of the present application;
FIG. 3i is a schematic diagram of another way of triggering a target prompt event provided by an exemplary embodiment of the present application;
FIG. 3j is a schematic diagram of yet another way of controlling a target avatar to perform an event reminder action provided by an exemplary embodiment of the present application;
FIG. 3k is a schematic diagram of yet another way of controlling a target avatar to perform an event reminder action provided by an exemplary embodiment of the present application;
FIG. 3l is a schematic diagram of a prompt event provided by an exemplary embodiment of the present application;
FIG. 3m is a schematic diagram of another prompt event provided by an exemplary embodiment of the present application;
FIG. 3n is a schematic diagram of triggering a target avatar to display an event detail interface provided by an exemplary embodiment of the present application;
FIG. 3o is a schematic diagram of another way of triggering a target avatar to display an event detail interface provided by an exemplary embodiment of the present application;
FIG. 4 is a schematic flowchart of another event prompt processing method provided by an exemplary embodiment of the present application;
FIG. 5a is a schematic diagram of updating a skin resource in an avatar setting interface provided by an exemplary embodiment of the present application;
FIG. 5b is a schematic diagram of deleting a skin resource in an avatar setting interface provided by an exemplary embodiment of the present application;
FIG. 5c is a schematic diagram of adding a skin resource in an avatar setting interface provided by an exemplary embodiment of the present application;
FIG. 5d is a schematic diagram of triggering display of an avatar setting interface provided by an exemplary embodiment of the present application;
FIG. 5e is a schematic diagram of controlling a target avatar to perform an interactive action provided by an exemplary embodiment of the present application;
FIG. 5f is a schematic diagram of another way of controlling a target avatar to perform an interactive action provided by an exemplary embodiment of the present application;
FIG. 5g is a schematic diagram of playing a prompt animation of a target prompt event in an information service interface provided by an exemplary embodiment of the present application;
FIG. 6 is a schematic diagram of controlling a target avatar to perform a response action provided by an exemplary embodiment of the present application;
FIG. 7 is a schematic diagram of displaying action description information of a target avatar in an information service interface provided by an exemplary embodiment of the present application;
FIG. 8 is a schematic diagram of a game invitation provided by an exemplary embodiment of the present application;
FIG. 9 is a schematic structural diagram of an event prompt processing apparatus provided by an exemplary embodiment of the present application;
FIG. 10 is a schematic structural diagram of an event prompt processing device provided by an exemplary embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will now be described clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
An embodiment of the present application relates to an event prompt processing system, which may be as shown in FIG. 1a. The event prompt processing system may include a terminal 101 and a server 102; the number and types of terminals 101 and servers 102 are not limited in the embodiments of the present application. The terminal 101 may include, but is not limited to: smartphones (such as Android phones and iOS phones), tablet computers, portable personal computers, Mobile Internet Devices (MID), smart televisions, vehicle-mounted devices, head-mounted devices, and other devices with a touch screen. The terminal includes a terminal screen (also called a display screen), and applications (such as social applications, game applications, video applications, applets, and web applications) may run on it. The server 102 may include, but is not limited to, servers with substantial computing capabilities such as data processing servers, web servers, and application servers. The server 102 may be a background server of the terminal 101, configured to interact with the terminal 101 to provide computing and application service support for the terminal; alternatively, the server 102 may be a background server of any application running on the terminal 101, configured to interact with that application to provide computing and application service support for it. The server 102 may be an independent physical server, or a server cluster or distributed system composed of multiple physical servers.
The terminal 101 and the server 102 may be directly or indirectly connected for communication in a wired or wireless manner; the connection manner between them is not limited in the embodiments of the present application.
Based on the event prompt processing system shown in FIG. 1a, an embodiment of the present application proposes an event prompt processing scheme that involves an avatar. An avatar is an image adopted by a user to represent that user virtually; it may be an imaginary model (such as a cartoon or animation model with no real-world counterpart) or a realistic model (such as a character model, displayed on the terminal screen, that resembles a real person). Adopting an avatar during use of the terminal strengthens the user's sense of identification with it and makes user operations more immersive. Common avatars may include, but are not limited to, virtual human characters (e.g., cartoon character images) and virtual animal characters, and so on. For convenience of explanation, a virtual human character is taken as an example below. The target avatar (e.g., any virtual character) provided in the embodiments of the present application may be a 3D character model (also called a three-dimensional character model) built into a product (e.g., an application program or an operating system). A 3D character model offers a good sense of space, realism, and depth, which further increases the user's sense of identification with the target avatar. Specifically, all or some of the body parts of the 3D character model (such as the head and arms) can be bound to a movable skeletal system through a skinning technique, and the 3D character model is then controlled to perform actions (such as event reminder actions) by controlling the movement of the skeletal system.
As shown in FIG. 1b, once the 3D character model is bound to the skeletal system, it can be controlled to perform actions by controlling that skeletal system. For example, the head of the 3D character model is bound to a movable head skeleton; when the head needs to be moved, a local coordinate system can be established on the head (for example, centred at the middle of the face, with the horizontal plane as the x axis and the vertical line as the y axis), and the head can then be made to perform various actions by driving the head skeleton along different coordinate directions within that system. As another example, the arms of the 3D character model (including the upper arm, forearm, palm, and so on) are bound to a movable arm skeleton; when the arms need to be moved, a local frame can be established at the arms (for example, with the elbow 10 as the centre and the upper arm and forearm moving around the elbow 10), and the bones at each arm position are driven within that frame so that the arms perform various actions; and so on. In addition, in the embodiments of the present application, various expression animations, such as smiling, laughing, and puzzled expressions, can be configured for the face of the 3D character model; when the 3D character model needs to express a certain expression, the corresponding expression animation is simply played. This realizes control of the 3D character model's expressions, enriches its means of expression, and increases interest.
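Driving a bone within such a local coordinate system amounts to rotating its vertices about the local origin; the sketch below shows a minimal 2D version for the head frame described above (face centre as origin, x horizontal, y vertical). It is an illustrative simplification: real skeletal animation works in 3D with full transform hierarchies.

```python
import math

def rotate_head(point, angle_deg):
    """Rotate a head-bone vertex (x, y) about the face-centre origin of the
    local coordinate system by angle_deg, counter-clockwise."""
    a = math.radians(angle_deg)
    x, y = point
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))
```

Because the mesh is skinned to the bone, rotating the bone this way carries the bound head vertices with it, producing a head-turn action.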
It should be noted that the above description takes the target avatar as a 3D character model, but it will be understood that other types of target avatars (e.g., animation images, etc.) are also applicable to the above implementation and are not described in detail here.
The event prompt processing scheme provided by the embodiments of the present application can be executed through interaction between the terminal 101 and the server 102 in the event prompt processing system shown in fig. 1a. In this implementation, the main flow of the event prompt processing scheme, shown in fig. 1a, mainly includes steps s11-s13, where: s11, the server 102 sends a configuration file to the terminal 101 (or to a target application (e.g., any application) running in the terminal 101); the configuration file may include, but is not limited to, the following information: trigger conditions (i.e., conditions that trigger the target avatar to perform an action, such as the condition that a target prompt event is obtained), motion-effect instructions (i.e., instructions that control the target avatar to perform an action), a data interface file (i.e., a file that interprets other information or files, such as a file identifying the position coordinates of a new prompt element), and so on. s12, the terminal 101 receives the configuration file sent by the server 102 and, according to the configuration file, detects whether any trigger condition contained in it is met. For example, suppose the configuration file contains the trigger condition: a video-to-publish prompt is displayed in the information service interface shown on the terminal screen; if the terminal detects that such a prompt appears in the information service interface, it determines that the trigger condition contained in the configuration file has been detected.
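The configuration file delivered in step s11 might look like the following (a hedged sketch: the JSON layout and every field name here are illustrative assumptions, not taken from the patent), with the lookup of step s13 reduced to a simple function:

```python
import json

# Hypothetical structure of the configuration file delivered in step s11;
# field names are invented for illustration only.
config_text = """
{
  "trigger_conditions": [
    {
      "id": "video_to_publish",
      "description": "a video-to-publish prompt appears in the information service interface",
      "animation_instruction": "point_at_element"
    }
  ],
  "data_interface_file": {
    "new_prompt_element_position": {"x": 120, "y": 860}
  }
}
"""

config = json.loads(config_text)


def find_animation(config, trigger_id):
    """Step s13: look up the motion-effect instruction for a detected trigger."""
    for cond in config["trigger_conditions"]:
        if cond["id"] == trigger_id:
            return cond["animation_instruction"]
    return None


print(find_animation(config, "video_to_publish"))  # -> point_at_element
```

Keeping triggers and instructions in server-delivered data like this is what allows the behavior to change without a client version change, as the text notes below.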
s13, when the terminal 101 detects a trigger condition, it can obtain the motion-effect instruction corresponding to that trigger condition and, according to the instruction, control the target avatar to perform the action corresponding to the trigger condition, specifically by controlling the skeletal system bound to the target avatar to perform the corresponding action. Based on steps s11-s13, the server delivers the configuration file to the terminal, so the terminal itself expends little effort; interactive fluency can be ensured while the target avatar interacts with the user, guaranteeing the user's operating experience. Moreover, when the terminal version needs to be changed, the cost of the version change can be saved.
In other implementations, the configuration file may be placed in the local storage space of the terminal 101, so that the event prompt processing scheme proposed in this embodiment may also be executed by the terminal 101 in the event prompt processing system shown in fig. 1a, or by any application running in the terminal 101. In this implementation, the terminal 101 may detect, according to the configuration file in the local storage space, whether a trigger condition is met; if so, the target prompt event is obtained, and the motion-effect instruction contained in the configuration file can then be invoked to control the target avatar to perform the event reminding action corresponding to the target prompt event. In this scheme, the target avatar performs the event reminding action corresponding to the target prompt event, which closely associates the user with the target prompt event, increases the interactivity between them, raises the attention paid to the target prompt event, enables the target user to notice it in time, and effectively improves the timeliness of the reminder.
For convenience of illustration, the following description takes a terminal executing the event prompt processing scheme mentioned in the embodiments of the present application as an example; it should be understood that this example does not limit the embodiments of the present application.
Based on the above-described event prompt processing scheme, a more detailed event prompt processing method is provided in the embodiments of the present application, and the event prompt processing method provided in the embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating an event notification processing method according to an exemplary embodiment of the present application; the event alert processing method may be performed by the terminal 101 in the system shown in fig. 1a, and includes, but is not limited to, steps S201-S203:
s201, displaying an information service interface.
When a target user (i.e., a user of the terminal) opens the terminal, an information service interface may be displayed on the terminal screen. The information service interface here may refer to a system service interface of the terminal, for example: a main interface (i.e., an interface containing the application identifiers of one or more application programs run by the terminal), a system configuration interface (i.e., an interface for configuring attribute information of the terminal, such as volume, screen brightness, and other information), and so on. Alternatively, the information service interface may refer to a service interface provided by any application running in the terminal; applications may include, but are not limited to: a client installed in the terminal, an applet that can be used without download and installation, a web application opened through a browser, and the like. For example, if the application is a social application installed in the terminal, its information service interfaces may include: social session interfaces, contact list interfaces, personal center interfaces, and so on. For convenience of description, the following takes the information service interface as any service interface provided by a target application running in the terminal (i.e., any application running in the terminal) as an example.
And S202, displaying the target virtual image in the information service interface.
The target avatar may be displayed in either of the following two ways: (1) The target avatar is displayed within the information service interface, where the information service interface may be a system service interface of the terminal or a service interface of the target application program. An exemplary schematic diagram is shown in fig. 3a: a target avatar 3011 is included in the information service interface 301 and is displayed in a designated area of the interface. The designated area may be set by an administrator or by the target user; the manner in which the target user sets the display position of the target avatar in the information service interface is shown in fig. 3b: when the target user drags the target avatar 3011 from the right side of the terminal screen to the left side and releases it, this indicates that the target user wants to place the target avatar on the left side of the screen, and the target avatar is then displayed at the release position (i.e., the left side of the terminal screen).
(2) The target avatar is displayed floating above the information service interface. Specifically, another interface with a display level higher than that of the information service interface, temporarily called the overlay interface, is shown above it; the overlay interface can be displayed with 100% transparency, so the interface elements of the information service interface remain clearly visible through it. The target avatar is then displayed in the overlay interface, so that it floats above the information service interface; in this case the target avatar may cover some interface elements of the information service interface. As shown in fig. 3c, a target avatar 3011, appearing as a little bear, is displayed floating above the information service interface 301. Similarly to the in-interface case, the display position of the target avatar in the overlay interface, i.e., the position at which it floats above the information service interface, may be set by an administrator or by the target user, which is not described again here.
It is understood that the target avatar 3011 shown in fig. 3a, 3b, and 3c is an exemplary introduction; in other scenarios, the target avatar may take other forms (e.g., a cartoon character), and its display position and size in or above the information service interface may change. The embodiments of the present application do not limit the type, display position, display size, and the like of the target avatar.
And S203, responding to the acquired target prompting event, and controlling the target virtual image to execute an event reminding action corresponding to the target prompting event.
The event reminding action performed by the target avatar differs according to the target prompt event obtained. The target prompt event may include, but is not limited to: an event in which a new prompt element appears in the information service interface, a prompt event generated according to prompt information located outside the information service interface, and so on. Taking these two types of target prompt events as examples, the following introduces the event reminding action, corresponding to the target prompt event, that the target avatar is controlled to perform when each type of target prompt event is obtained, wherein:
(1) An event in which a new prompt element appears in the information service interface. A new prompt element may refer to: an element that appears in the information service interface while it is displayed on the terminal screen; or an element that exists in the information service interface after the interface is updated; and so on. Elements may include, but are not limited to: text messages, pictures, audio/video marks (e.g., audio/video pictures, cards containing audio/video, etc.), emoticons, animations, and the like. For example, if the information service interface is the main interface of the terminal and any application program displayed in the main interface receives a notification of a new message, a notification icon may be displayed in the display area where that application program is located; the new prompt element is then the notification icon, and the event in which a new prompt element appears in the information service interface includes: the event generated by displaying the notification icon in the main interface. For another example, if the information service interface is a personal center interface (e.g., a personal main page of a WeChat game) provided by a social application (e.g., a WeChat application) run by the terminal, and an interface prompt (i.e., a notification style that guides the target user to perform the next operation, such as a prompt to complete personal details) newly appears on the personal center interface, the new prompt element is that interface prompt, and the event in which a new prompt element appears in the information service interface includes: the event generated by displaying the interface prompt in the personal center interface.
When a new prompt element appears in the information service interface, the target avatar can be controlled to perform an event reminding action related to the new prompt element; performing the event reminding action through the target avatar guides the target user to notice the new prompt element appearing in the interface and thus raises the attention paid to it. The event reminding action the target avatar is controlled to perform about the new prompt element may include, but is not limited to: a posture-change action performed by the target avatar according to the orientation information of where the new prompt element appears in the information service interface (for example, the new prompt element appears at the top of the interface); or a position-change action performed according to that orientation information; or both posture-change and position-change actions performed according to that orientation information.
Various exemplary implementations of controlling the target avatar to perform the event reminding actions described above are introduced below with reference to specific examples.
(1) Controlling the target avatar to perform the event reminding action about the target prompt event includes: controlling the posture-change action performed by the target avatar according to the orientation information of where the new prompt element appears in the information service interface (for example, the new prompt element appears at the top of the interface). In a specific implementation, when a target prompt event exists in the information service interface, i.e., when a new prompt element appears in it, the target avatar may be controlled to adjust from its current posture to a target posture, so that it performs the event reminding action corresponding to the new prompt element. The current posture may refer to: the posture of the target avatar at the moment the target prompt event is obtained, i.e., the posture it holds in the information service interface at that moment. For example, suppose the target avatar maintains any completed posture for 3 seconds, and several new prompt elements, such as new prompt elements 1, 2, and 3, appear successively at different positions in the information service interface at intervals of 2 milliseconds each. Then, for new prompt element 2, the current posture of the target avatar at the moment element 2 appears is the posture the avatar was performing for new prompt element 1.
The target posture may be determined according to a target orientation relationship between the new prompt element and the target avatar. Specifically, obtaining the target orientation relationship may include: acquiring a first position coordinate of the new prompt element in the information service interface and a second position coordinate of the target avatar in the information service interface; and calculating, based on the first and second position coordinates, the target orientation relationship between the target avatar and the new prompt element, where the target orientation relationship indicates: the target direction, relative to the target avatar, in which the new prompt element lies. The target avatar is then controlled to adjust from the current posture to the target posture according to the target orientation relationship, where the target posture may include: a posture in which the orientation of one or more body parts (e.g., arms, fingers, head, etc.) of the target avatar matches the target direction. Matching may mean that the body part is completely aligned with the target direction, or that it forms an angle (e.g., 30 degrees) with the target direction, and so on, which is not limited in this application. The position coordinates mentioned above may be obtained by parsing the configuration file described earlier.
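The orientation calculation above can be sketched as follows (a minimal illustration under assumed screen coordinates where x grows rightward and y grows downward; the coarse four-direction classification is an assumption, since the patent only requires that the orientation relationship be derivable from the two coordinates):

```python
import math


def target_orientation(element_pos, avatar_pos):
    """Compute the target orientation relationship from the first position
    coordinate (new prompt element) and the second position coordinate
    (target avatar). Returns a coarse direction plus the exact angle in
    degrees, measured from the avatar toward the element."""
    dx = element_pos[0] - avatar_pos[0]
    dy = element_pos[1] - avatar_pos[1]
    # Negate dy so that "up on screen" corresponds to a positive angle.
    angle = math.degrees(math.atan2(-dy, dx))
    if abs(dy) >= abs(dx):
        direction = "up" if dy < 0 else "down"
    else:
        direction = "right" if dx > 0 else "left"
    return direction, angle


# Element directly above the avatar -> the avatar's arm should point upward.
direction, angle = target_orientation((200, 100), (200, 500))
print(direction, angle)  # -> up 90.0
```

The exact angle could drive how far each bone rotates, while the coarse direction could select which demonstration (e.g., the fig. 3e "point up" or fig. 3h "point down" sequence) to play.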
In one implementation, taking as an example a target orientation relationship indicating that the new prompt element lies above the target avatar, the process of controlling the target avatar to perform the event reminding action about the new prompt element is introduced. As shown in fig. 3d, a new prompt element 302 appears in the information service interface 301; for example, the new prompt element 302 may contain "please refine personal data as soon as possible" to prompt the target user to complete personal information. It can then be determined that the target orientation relationship between the new prompt element 302 and the target avatar 3011 indicates: the new prompt element 302 lies in the upward direction of the target avatar 3011, and the target posture the target avatar 3011 needs to perform may include: a posture in which the orientation of one or more body parts matches the upward direction, such as controlling the orientation of the avatar's right arm to match the upward direction. Specifically, an exemplary motion demonstration of adjusting the target avatar from the current posture to the target posture is shown in fig. 3e: first, the head of the target avatar 3011 is rotated to the left by a first angle (e.g., 30 degrees) about the y axis; next, the head is rotated upward by a second angle (e.g., 20 degrees) about the x axis; then, the right-hand gesture is changed to a pointing gesture, i.e., one finger (such as the index finger) is straightened while the other fingers bend inward into a single-finger state; finally, the right arm is rotated upward by a third angle (e.g., 60 degrees), and the forearm is rotated clockwise by a fourth angle (e.g., 5 degrees) around the elbow 10, yielding the target posture in which the target avatar 3011 points at the new prompt element 302. In the above process, the specific rotation angle of each body part is determined by the target orientation relationship between the target avatar and the new prompt element, which is not limited in the embodiments of the present application. As described above, the movement of the target avatar is achieved by controlling the movement of the skeletal system; the posture change described here is essentially accomplished by controlling the skeletal system to perform a series of operations, which the user perceives as the target avatar performing a series of actions.
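The fig. 3e demonstration can be written down as an ordered list of bone operations (the angles follow the example values above; the bone and axis names, and the idea of a pluggable bone controller, are illustrative assumptions):

```python
# The fig. 3e "point up" demonstration as an ordered keyframe sequence of
# (bone, operation, amount) steps; names are hypothetical.
POINT_UP_SEQUENCE = [
    ("head", "y", 30),        # turn head 30 degrees left about the y axis
    ("head", "x", 20),        # tilt head 20 degrees up about the x axis
    ("right_hand", "pose", "single_finger_point"),  # pointing gesture
    ("right_arm", "x", 60),   # raise the right arm 60 degrees
    ("right_forearm", "elbow", 5),  # rotate forearm 5 degrees about the elbow
]


def run_sequence(sequence, apply):
    """Apply each keyframe in order via a caller-supplied bone controller."""
    for step in sequence:
        apply(*step)


# A stub controller that just records which bones were driven, in order.
log = []
run_sequence(POINT_UP_SEQUENCE, lambda bone, op, amount: log.append(bone))
print(log)  # -> ['head', 'head', 'right_hand', 'right_arm', 'right_forearm']
```

Separating the sequence data from the controller mirrors the scheme's split between configuration (delivered instructions) and the skeletal system that executes them.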
Of course, the new prompt element 302 in the information service interface shown in fig. 3e can also be triggered; when it is triggered, the terminal screen jumps from the information service interface to a new interface associated with the new prompt element 302. As shown in fig. 3f, if the new prompt element 302 is "please refine personal data as soon as possible", then when it is triggered, a data setting interface 303 is displayed on the terminal screen, in which the target user can set personal data (or personal information); when an interface return operation (e.g., triggering a return option, or sliding the interface) is detected in the data setting interface 303, the screen returns from the data setting interface 303 to the information service interface 301. It should be noted that the data setting interface 303 may also display the target avatar 3011; if a target prompt event exists in the data setting interface, the target avatar displayed there may likewise perform the corresponding event reminding action, which is not described in detail here.
In another implementation, taking as an example a target orientation relationship indicating that the new prompt element lies below the target avatar, the process of controlling the target avatar to perform the event reminding action corresponding to the new prompt element is introduced. As shown in fig. 3g, a new prompt element 302 is displayed in the information service interface; for example, the new prompt element 302 may include: a game audio/video clip recorded while the target user operated a target game (e.g., any game) during a historical period (e.g., a target period before the current time; if the current time is 3:00 on February 8, the historical period may run from 0:00 on February 1 to 3:00 on February 8); alternatively, it may be a video, photograph, or the like obtained by the target user over a historical period, where the means of obtaining may include, but are not limited to: receiving it from another device, downloading it through the Internet, capturing it with a camera, and so on. It can then be determined that the target orientation relationship between the new prompt element 302 and the target avatar 3011 indicates: the new prompt element 302 lies in the downward direction of the target avatar 3011, and the target posture the target avatar 3011 needs to perform may include: a posture in which the orientation of one or more body parts matches the downward direction, such as controlling the orientation of the avatar's right arm to match the downward direction. Specifically, an exemplary motion demonstration of adjusting the target avatar from the current posture to the target posture is shown in fig. 3h: first, the head of the target avatar 3011 is rotated to the left by a fifth angle (e.g., 20 degrees) about the y axis; next, the head is rotated downward by a sixth angle (e.g., 25 degrees) about the x axis; finally, the right-hand gesture is changed to a pointing gesture, i.e., one finger (such as the index finger) is straightened while the other fingers bend inward into a single-finger state, yielding the target posture in which the target avatar 3011 points at the new prompt element 302. The specific rotation angle of each body part in this process is determined by the orientation relationship between the target avatar and the new prompt element, which is not limited in the embodiments of the present application. In addition, which actions the target avatar is ultimately controlled to perform in an actual application scenario, and the execution order of those actions, are likewise not limited in the embodiments of the present application; for example, when the target direction is downward, only the head of the target avatar may be controlled to move, without changing the hand gesture.
Of course, the new prompt element 302 in the information service interface shown in fig. 3h can also be triggered. Continuing with fig. 3i, suppose the new prompt element 302 is a to-be-published game video clip generated by the target user operating the target game during a historical period. When the new prompt element 302 is triggered (or the publish option in its display area is selected), the terminal screen displays a dynamic publishing interface 304, which includes an identifier 3041 of the game video clip (e.g., the first frame of the clip) as well as other options, such as an emoticon option, a tag option, a font option, a picture option, and the like; the target user can add content under any option to the dynamic publishing interface 304 by triggering that option. When a publishing operation (e.g., triggering the publish option 3042, or a gesture operation) is detected in the dynamic publishing interface 304, it is determined that a dynamic post is to be published for the game video clip.
In summary, no matter at which display position of the terminal screen the new prompt element appears, the body parts of the target avatar can be controlled to face the target direction of the new prompt element according to the target orientation relationship between them, intuitively guiding the target user to notice the newly appearing prompt element and raising the attention paid to it.
(2) Controlling the target avatar to perform the event reminding action corresponding to the target prompt event includes: controlling the position-change action performed by the target avatar according to the orientation information of where the new prompt element appears in the information service interface. In a specific implementation, if a target prompt event exists in the information service interface, i.e., a new prompt element appears in it, the target avatar can be moved from its current position to a target position, so that it performs the event reminding action related to the new prompt element. The current position of the target avatar may refer to: the position it occupies when the target prompt event is obtained; the target position may refer to: the position of the new prompt element in the information service interface. In other words, when a new prompt element exists in the information service interface, the target avatar can be controlled to move from its current position to the position of the new prompt element, thereby performing the event reminding action related to the new prompt element. This implementation is further described with fig. 3j: the target avatar 3011 appears as a little bear; while no new prompt element exists in the information service interface 301, the target avatar 3011 stays at its current position; when a new prompt element appears in the information service interface 301, for example new prompt element 302 appears at the lower left of the interface (i.e., the target position), it is determined that the target position to which the target avatar needs to move includes any position in the display area of the new prompt element 302 (i.e., the lower left of the terminal screen), and the target avatar 3011 is controlled to move from the current position to the target position.
(3) Controlling the target avatar to perform the event reminding action about the target prompt event includes: controlling both the posture change and the position change performed by the target avatar according to the orientation information of where the new prompt element appears in the information service interface. In a specific implementation, if a target prompt event exists in the information service interface, i.e., a new prompt element appears in it, the target avatar can be controlled to adjust from its current posture to the target posture and moved from its current position to the target position, so that it performs the event reminding action about the new prompt element. In other words, if a new prompt element exists in the information service interface, both the posture and the position of the target avatar can change in the interface. This implementation is shown in fig. 3k: the target avatar 3011 appears as a little bear; while no new prompt element exists in the information service interface 301, the target avatar 3011 is displayed at its current position in its current posture; when a new prompt element appears in the information service interface 301, for example new prompt element 302 appears at the lower left of the interface (i.e., the target position), it is determined that the new prompt element lies in the downward direction of the target avatar, the target avatar 3011 is controlled to move from the current position to the target position, and, upon reaching the target position, to adjust to the target posture.
It is to be understood that, since the target avatar 3011 does not always maintain the current posture described above while moving through the information service interface, on reaching the target position it need not adjust from that current posture to the target posture; as shown in fig. 3k, the little bear adjusts from a running posture to the target posture.
It is worth noting that the embodiments of the present application take displaying the target avatar within the information service interface as the example for introducing control of posture changes, and take displaying the target avatar floating above the information service interface as the example for introducing changes of display position, or of display position together with posture; this does not limit the embodiments of the present application. In other words, when the target avatar is displayed floating above the information service interface, its posture can also be controlled to change according to the new prompt element; likewise, when the target avatar is displayed within the information service interface, its display position, or its display position together with its posture, can be controlled to change according to the new prompt element.
(2) A prompt event generated according to a prompt message located outside the information service interface. The generated prompt event differs with the prompt message, and so does the event reminding action the target avatar is controlled to perform. The following exemplifies several prompt messages and prompt events, wherein:
(1) the prompt message includes schedule information, and the prompt event generated according to the prompt message may include: an event detected when the current system time of the terminal equals the time in the schedule information, or when the current system time is earlier than the time in the schedule information and the difference between them is less than a time threshold. For example, the schedule information set by the user is: join a conference at 9:30 on February 12. When the current system time of the terminal is detected to equal 9:30 on February 12, this indicates that the user needs to be prompted to join the conference. As another example, if the current system time is detected to be 9:25 on February 12, which is earlier than the time in the schedule information (9:30 on February 12), and the difference between the current system time and the time in the schedule information is less than a time threshold (e.g., 6 minutes), this also indicates that the user needs to be prompted to attend the meeting. Of course, besides the conference time described above, the schedule information set by the user may also be the departure time of a vehicle (such as an airplane, a high-speed rail, and the like); the schedule information set by the user is not limited in the embodiment of the present application.
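The schedule check described above (current time equal to the scheduled time, or earlier but within a threshold of it) can be sketched as follows; the 6-minute default mirrors the example in the text, while the function name is an assumption:

```python
from datetime import datetime, timedelta

def should_remind(now: datetime, scheduled: datetime,
                  threshold: timedelta = timedelta(minutes=6)) -> bool:
    """Remind when the current system time has reached the scheduled
    time, or when it is earlier but within `threshold` of it."""
    if now == scheduled:
        return True
    return now < scheduled and scheduled - now < threshold

# Conference at 9:30 on February 12, as in the example above.
meeting = datetime(2021, 2, 12, 9, 30)
at_925 = should_remind(datetime(2021, 2, 12, 9, 25), meeting)  # 5 min early
at_920 = should_remind(datetime(2021, 2, 12, 9, 20), meeting)  # 10 min early
```

Here `at_925` is true (within the 6-minute threshold) while `at_920` is false.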
Taking the schedule information set by the user as joining a conference at 9:30 on February 12 as an example, the event reminding action may include: controlling the target avatar to perform an action of picking up a briefcase and looking down at a watch, or controlling the target avatar to perform an action of sitting at a round table and gazing at a projector, and the like. As shown in the first diagram of fig. 3l, if the user needs to be reminded to participate in the meeting, the target avatar may be controlled to perform an event reminding action of participating in the meeting, such as picking up a briefcase and walking.
(2) The prompt message includes historical activity information, and the prompt event generated according to the prompt message may include: an event, predicted from the historical activity information, that the user needs to participate in a target activity at the current system time or within a target time period after the current system time; wherein the historical activity information of the user includes: historical motion information, historical game time, and the like. For example, the historical activity information includes historical motion information indicating that the user performs a morning run at 7:00. Taking the case where the historical activity information includes historical motion information, as shown in the second diagram of fig. 3l, assuming the historical motion information of the user indicates that the user performs push-ups at 9:00 pm, when the current system time is detected to equal 9:00 pm, the apparel of the target avatar is updated to sportswear, and the target avatar is controlled to perform a push-up motion.
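One plausible way to predict a target activity from historical activity records is a nearest-time majority vote, sketched below; the 15-minute window, record layout, and function name are illustrative assumptions rather than the patent's method:

```python
from collections import Counter
from datetime import time

def predict_activity(history, now: time, window_minutes: int = 15):
    """history: list of (activity, time-of-day) records. Return the
    activity that most often occurred within `window_minutes` of the
    current time of day, or None when no record is close enough."""
    def minutes(t: time) -> int:
        return t.hour * 60 + t.minute

    nearby = [activity for activity, t in history
              if abs(minutes(t) - minutes(now)) <= window_minutes]
    if not nearby:
        return None
    return Counter(nearby).most_common(1)[0][0]

# Morning runs around 7:00 and push-ups at 9:00 pm, as in the examples.
history = [("run", time(7, 0)), ("run", time(7, 5)), ("push-up", time(21, 0))]
```

At 9:00 pm this predicts push-ups, so the avatar's apparel would be switched to sportswear and the push-up action played.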
(3) The prompt message includes real-time information, and the prompt event generated according to the prompt message may include: an event, detected according to the real-time information, that the user needs to be prompted to pay attention to a change in the external physical environment. For example, if the real-time information includes information about impending rainfall, the user needs to be reminded of the weather change according to the real-time information. If the real-time information indicates that the temperature at the current moment is higher than a temperature threshold, the user needs to be reminded of the temperature change according to the real-time information. If the user is driving a car and the real-time information indicates congestion on the road section ahead, the user needs to be reminded of the road condition ahead according to the real-time information. Taking real-time information about impending rain as an example, as shown in the first diagram of fig. 3m, when impending rain is detected, the target avatar may be controlled to perform an event reminding action of opening an umbrella, so as to remind the user that it is about to rain and to take an umbrella along.
(4) The prompt message includes regularity information, and the prompt event generated according to the prompt message may include: an event, detected according to the regularity information, that the user needs to perform a target operation. For example, if the regularity information indicates that the user has a meal at 12:00 noon, then when the current system time of the terminal is detected to be 12:00 noon, the user needs to be reminded to have a meal according to the regularity information. For another example, if the current system time is 12:00 noon and the real-time geographical location of the user indicates that the user is in or near a restaurant, the user needs to be reminded to eat according to the regularity information. Taking the regularity information of having a meal at 12:00 noon as an example, as shown in the second diagram of fig. 3m, when the current system time of the terminal is detected to be 12:00 noon, the target avatar may be controlled to perform an event reminding action of having a meal, so as to prompt the user that it is mealtime.
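The different kinds of prompt messages above can be mapped to prompt events with a simple dispatch; the message field names, event names, and the 35-degree temperature threshold below are illustrative assumptions, not details from the patent:

```python
def generate_prompt_event(message: dict):
    """Map a prompt message to a prompt event according to its kind.
    Returns None when no reminder is needed."""
    kind = message.get("type")
    if kind == "realtime" and message.get("rain_expected"):
        return "remind_weather_change"       # impending rainfall
    if kind == "realtime" and message.get("temperature", 0) > 35:
        return "remind_high_temperature"     # above temperature threshold
    if kind == "regularity" and message.get("now") == message.get("meal_time"):
        return "remind_meal"                 # regular mealtime reached
    return None
```

The controller would then pick the avatar's event reminding action (umbrella opening, meal animation, etc.) from the returned event.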
It should be noted that the above only provides some exemplary prompt messages and the prompt events generated from them; the prompt messages and the prompt events generated according to the prompt messages are not limited in the embodiments of the present application.
The embodiment of the application also supports triggering the target avatar to display an event detail interface associated with the target prompt event. Specifically, if the target prompt event is associated with an event detail interface, the event detail interface is used for displaying content associated with the target prompt event. For example, if the target prompt event includes an event that it is detected to be about to rain, then the event detail interface associated with the target prompt event may be a weather forecast interface (as shown in fig. 3m), which displays the weather conditions for each time period today, the weather conditions for the next few days, and the like. In this way, the embodiment of the application supports directly triggering the target avatar to display the event detail interface of the target prompt event.
In one implementation, during the process of controlling the target avatar to perform the event reminding action, or within a preset time (such as 1 second or 2 seconds) after the target avatar finishes the event reminding action, if the target avatar is triggered, the event detail interface of the target prompt event is output. In other words, while the target avatar is being controlled to perform the event reminding action corresponding to the acquired target prompt event, if it is detected that the target avatar is triggered, the event detail interface of the target prompt event can be directly output. As shown in fig. 3n, the target prompt event includes an event that it is detected to be about to rain, and the target avatar is controlled to perform an umbrella-opening reminding action; if the user triggers the target avatar at this time, the display can jump from the information service interface to the event detail interface of the target prompt event, that is, the weather forecast interface. In this process, by triggering the target avatar that is performing an event reminding action, the user can quickly switch to the event detail interface of the corresponding target prompt event, which helps the user quickly access the event detail interface with a simple and convenient operation.
In other implementation manners, when the target avatar is not performing an event reminding action, if it is detected that the target avatar is triggered, an event identifier list is output, where the event identifier list includes: event identifiers corresponding to one or more historical prompt events, and a historical prompt event refers to: a prompt event acquired before the target avatar is triggered; when any event identifier in the event identifier list is selected, an event detail interface of the historical prompt event indicated by the selected event identifier is displayed. In other words, if it is detected that the target avatar is triggered while it is not performing an event reminding action, the event identifier of a historical prompt event may be selected from the output event identifier list, and the event detail interface of the selected historical prompt event may be displayed on the terminal screen. In this process, even after the target avatar has finished performing the event reminding action, the user can still obtain the historical prompt events by triggering the target avatar, which avoids missing a target prompt event because the user did not notice the target avatar performing the event reminding action, and improves user stickiness.
An exemplary procedure of this implementation can be seen in fig. 3o. As shown in fig. 3o, when the target avatar in the information service interface is not performing an event reminding action, after the target avatar is triggered, an event identifier list 305 may be output, where the event identifier list 305 includes event identifiers corresponding to one or more historical prompt events, and the one or more historical prompt events may be prompt events acquired within a period of time (e.g., 30 minutes) before the target avatar is triggered, such as event identifier 1 corresponding to historical prompt event 1, event identifier 2 corresponding to historical prompt event 2, event identifier 3 corresponding to historical prompt event 3, and so on. When the user selects the event identifier 1 corresponding to historical prompt event 1 in the event identifier list 305, an event detail interface of historical prompt event 1 may be displayed on the terminal screen; if historical prompt event 1 includes an event that rain was detected, the event detail interface of historical prompt event 1 may include a weather forecast interface. The process of triggering the target avatar and outputting the event identifier list shown in fig. 3o is only an exemplary description, and other animation representations are also supported in the embodiment of the present application; for example, when the user continuously triggers event identifier 1 in the event identifier list, the target avatar may perform the event reminding action of historical prompt event 1 corresponding to event identifier 1, and the like, which is not limited in the embodiment of the present application.
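The two trigger behaviors (jump straight to the detail interface while a reminder action is playing, otherwise show the historical event list) can be sketched as a single handler; the return-value shape is an illustrative assumption:

```python
def on_avatar_triggered(performing_action: bool, current_event, history):
    """Decide what to show when the user triggers the target avatar.

    performing_action: whether the avatar is performing (or just finished,
    within the preset time) an event reminding action.
    current_event: the target prompt event driving that action, if any.
    history: prompt events acquired before the trigger (e.g. last 30 min).
    """
    if performing_action and current_event is not None:
        # Jump directly to the event detail interface (e.g. weather forecast).
        return ("detail_interface", current_event)
    # Otherwise list the historical prompt events for the user to pick from.
    return ("event_list", list(history))

during = on_avatar_triggered(True, "rain_soon", ["meal_time"])
idle = on_avatar_triggered(False, None, ["rain_soon", "meal_time"])
```

Selecting an identifier from the returned `event_list` would then open that historical event's detail interface.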
In the embodiment of the application, when the information service interface is displayed on the terminal screen, the target avatar can be displayed in the information service interface, which enriches the element types in the information service interface and makes interface browsing more interesting. When a target prompt event exists for the information service interface, the target avatar is controlled to perform the event reminding action corresponding to the target prompt event. In this way, the interaction between the target avatar and the target user increases the target user's sense of immersion with the target avatar, and the target avatar guides the target user to pay attention to the target prompt event in time, which effectively improves the timeliness of the prompt, prevents the target prompt event from being missed, and increases the attention paid to it.
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating another event prompt processing method according to an exemplary embodiment of the present application; the event alert processing method may be performed by the terminal 101 in the system shown in fig. 1a, and includes, but is not limited to, steps S401-S404:
s401, displaying an information service interface.
It should be noted that, for the specific implementation process shown in step S401, reference may be made to the related description of the specific implementation process shown in step S201 in the embodiment shown in fig. 2, and details are not described herein.
S402, displaying the target virtual image in the information service interface.
As described in the embodiment shown in fig. 2, the information service interface may be an interface provided by any application running in the terminal. When that application is a target application related to a target game, the information service interface is an interface in that target application, and the target avatar may comprise a virtual game object used by the target user in the target game. The target application related to the target game may refer to: a game application that runs the target game; or another application that is related to the target game (e.g., one in which commentary information about the target game can be browsed) but does not run the target game. In one scenario, if the target user has not yet used any virtual game object in the target game, e.g., the target application is a game application and the target user is logging in and registering the target application for the first time, the target avatar displayed on the terminal screen may include: a virtual game object that the target user selects from a plurality of virtual game objects output on the terminal screen; alternatively, one allocated to the target user in the background, where the allocation principle may include, but is not limited to: random allocation, allocation based on the usage heat of each virtual game object in the target game, allocation based on relevant information of the target user (e.g., usage data of the target user in other games), and so on.
Of course, the target avatar can also be customized by the target user, so that the target avatar preferred by the target user can be displayed on the terminal screen according to the target user's preference, enriching the target user's choices. As described above, the target avatar in the embodiment of the present application is obtained by binding a character model (e.g., a 3D character model) to a skeletal system. Customizing the target avatar may include: the target user customizes the body parts of the character model (such as eyes, ears, facial contours, etc.), or the target user customizes the resources overlaid on the body parts of the character model (such as clothing, hair accessories, earrings, hand-held props, etc.). Specifically, the target user may perform a resource adjustment operation on the character model to obtain the target avatar, where the resource adjustment operation performed on the character model may include: an update adjustment operation, a delete adjustment operation, and a new-addition adjustment operation. The update adjustment operation may refer to an adjustment operation that updates one or more original skin resources (i.e., the aforementioned resources bound to or overlaid on the body parts of the skeletal system) in the reference avatar; the delete adjustment operation may be an adjustment operation that deletes one or more original skin resources from the reference avatar; the new-addition adjustment operation may be an adjustment operation that adds one or more skin resources on the basis of the reference avatar. The reference avatar may include: the initial avatar set by the administrator when developing the avatar, or the avatar most recently obtained when the target user set the avatar in a past time period.
The following describes the above mentioned resource adjustment operations in more detail, wherein:
(1) The resource adjustment operation includes an update adjustment operation. In a specific implementation, an avatar setting interface is displayed on the terminal screen, and the avatar setting interface includes a reference avatar and one or more candidate skin resources. When a target skin resource is selected from the one or more candidate skin resources, the reference avatar is updated and displayed with the target skin resource in the avatar setting interface; if a confirmation operation for the updated reference avatar is detected, the updated reference avatar is taken as the target avatar. An exemplary schematic diagram of updating a skin resource in the avatar setting interface may be seen in fig. 5a. As shown in fig. 5a, the avatar setting interface 501 includes an avatar display area 502, and a reference avatar 5021 is displayed in the avatar display area 502. The avatar setting interface 501 further includes a resource selection area 503, which contains shortcut options of different types, such as shortcut option 5031, shortcut option 5032, and so on, where each shortcut option is associated with the candidate skin resources matching that option. When any shortcut option is selected, the selected shortcut option may be highlighted in the resource selection area 503 (e.g., displayed with a larger gray-scale value than the other shortcut options, displayed with a lower transparency than the other shortcut options, etc.), and the candidate skin resources associated with the selected shortcut option are displayed in the resource selection area 503. As shown in fig. 5a, assuming that shortcut option 5031 is the option for the reference avatar's hair style resources, when shortcut option 5031 is selected, at least one candidate hair style resource is displayed in the resource selection area 503.
When any one of the candidate skin resources displayed in the resource selection area 503 is selected, it indicates that the target user wants to replace the corresponding original skin resource in the current reference avatar with the selected candidate skin resource. For example, hair style resource 50311, hair style resource 50312, hair style resource 50313, and so on, are displayed in the resource selection area 503; when hair style resource 50312 is selected, indicating that the target user wants to replace the original hair style resource in the reference avatar with the style shown by hair style resource 50312, the reference avatar is updated and displayed in the avatar setting interface 501, and the updated reference avatar's hair style appears as the style shown by hair style resource 50312. The avatar setting interface 501 also includes a cancel option 504 and a finish option 505. When the cancel option 504 is selected, it is determined that the target user abandons the resource adjustment operation on the reference avatar; conversely, when the finish option 505 is selected, representing that the target user performs a confirmation operation, the reference avatar currently displayed in the avatar setting interface 501 (which may be the updated reference avatar) is determined to be the target avatar. It should be noted that the skin resource update above is implemented by selecting the target skin resource in the resource selection area; in other scenarios, the target skin resource may also be dragged from the resource selection area onto the reference avatar (or onto the original skin resource corresponding to the target skin resource) to update the reference avatar.
(2) The resource adjustment operation includes a delete adjustment operation. In the specific implementation, an image setting interface is displayed in a terminal screen, and the image setting interface comprises a reference virtual image; when the deletion operation aiming at any original skin resource in the reference virtual image is detected in the image setting interface, the updated reference virtual image is displayed in the image setting interface and is used as the target virtual image. The deleting operation for any original skin resource in the reference virtual image in the image setting interface can include but is not limited to: dragging any original skin resource to a designated direction, selecting a deletion option of a region where any original skin resource is located, and the like. Taking the deletion operation including a drag operation of any original skin resource in a specified direction as an example, an exemplary schematic diagram of deleting a skin resource in an image setting interface may be seen in fig. 5b, as shown in fig. 5b, when a drag operation of a skin resource-glasses upward is detected in the image setting interface, it is determined that a target user wants to delete glasses included in a reference avatar, an updated reference avatar is displayed in the image setting interface, and the updated reference avatar does not include glasses.
(3) The resource adjustment operation includes a new addition adjustment operation. In specific implementation, an image setting interface is displayed in a terminal screen, and the image setting interface comprises a reference virtual image; when detecting a new operation aiming at any original skin resource in the reference virtual image in the image setting interface, displaying the updated reference virtual image in the image setting interface, and taking the updated reference virtual image as a target virtual image. The new adding operation for any original skin resource in the reference virtual image in the image setting interface may include but is not limited to: dragging the target skin resource from the resource selection area to the reference virtual image, selecting the target skin resource in the resource selection area, and the like. An exemplary schematic diagram of adding skin resources in an image setting interface may be seen in fig. 5c, as shown in fig. 5c, where a reference virtual image is displayed in the image setting interface without skin resources — glasses, and when a target user selects glasses in the resource selection region, an updated reference virtual image may be displayed in the image setting interface, and the updated reference virtual image includes glasses.
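The three resource adjustment operations (update, delete, new addition) can be sketched on a minimal slot-to-resource model of the reference avatar; the slot names and resource identifiers below are illustrative assumptions:

```python
class Avatar:
    """Minimal model of a reference avatar's skin resources: each body
    slot (hair, glasses, ...) maps to at most one skin resource id."""

    def __init__(self, skins: dict):
        self.skins = dict(skins)

    def update(self, slot: str, resource: str):
        """Update adjustment: replace an existing original skin resource."""
        if slot not in self.skins:
            raise KeyError(slot)
        self.skins[slot] = resource

    def delete(self, slot: str):
        """Delete adjustment: remove an original skin resource."""
        self.skins.pop(slot, None)

    def add(self, slot: str, resource: str):
        """New-addition adjustment: add a skin resource to an empty slot."""
        self.skins.setdefault(slot, resource)

ref = Avatar({"hair": "hair_default", "glasses": "glasses_round"})
ref.update("hair", "hair_50312")   # replace hair style, as in fig. 5a
ref.delete("glasses")              # drag glasses away, as in fig. 5b
ref.add("umbrella", "umbrella_1")  # add a new resource, as in fig. 5c
```

Confirming the result (the finish option) would then take the modified model as the target avatar.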
In summary of the related contents described in the above implementation manners (1), (2), and (3), the following contents need to be further explained in the embodiments of the present application: (1) fig. 5a, fig. 5b, and fig. 5c are all described by taking the candidate skin resources in the resource selection area as partial skin resources (such as face, trunk, limbs, hair accessories, etc.) of the reference avatar, but it should be understood that this example does not limit the setting of the target avatar provided in the embodiment of the present application. In other implementation manners, the candidate skin resources displayed in the image setting interface may be a whole virtual image resource, so that when the target user selects any one of the candidate skin resources, the whole reference virtual image is displayed in the image setting interface as the selected candidate skin resource in a replacement manner. In other implementations, the candidate skinning resources displayed in the image setting interface may also include both the entire avatar resources and partial skinning resources of the avatar. (2) In addition, because the display area of the terminal screen is limited, the shortcut option in the image setting interface may be partially hidden, so the image setting interface may include a sliding shaft, and the hidden shortcut option may be slidably displayed by operating the sliding shaft. Similar to the sliding display of the hidden shortcut option in the avatar setting interface, the embodiment of the present application also supports the sliding display of the candidate skin resource associated with any shortcut option in the avatar setting interface, which is described herein.
In addition, the embodiment of the application does not limit how the display of the avatar setting interface is triggered. For example, if the information service interface is a system configuration interface, the avatar setting interface can be triggered and displayed from the terminal's attribute configuration interface; for another example, if the information service interface is a service interface of an application program, the avatar setting interface can be triggered and displayed from the application program's personal center interface; for another example, regardless of the type of the information service interface, the avatar setting interface can be triggered and displayed by long-pressing any position in the display area where the target avatar is located in the information service interface; and so on. Referring to fig. 5d, and taking the information service interface as an interface of a social application as an example, an implementation of triggering and displaying the avatar setting interface is briefly described. As shown in fig. 5d, an avatar setting option 5061 is provided in the personal center interface 506 of the social application (if the social application is the WeChat application, the personal center interface 506 may be referred to as the WeChat personal center interface), and when the avatar setting option 5061 is triggered, the avatar setting interface 501 may be triggered and displayed. Placing the avatar setting option directly in a more prominent interface may help the target user find the avatar setting entry more quickly. Of course, the avatar setting option 5061 may also be provided in the more hidden personal profile editing interface 507, and the personal profile editing interface 507 may be triggered and displayed from the personal center interface 506.
In summary, the embodiment of the present application does not limit the specific implementation manner of triggering the avatar setting interface.
It should be noted that other implementation manners of step S402 may refer to the related description of the specific implementation process shown in step S202 in the embodiment shown in fig. 2, and are not described herein again.
And S403, controlling the target virtual image to execute the interactive action.
The interactive actions may include a variety of actions (e.g., a hand-waving action, a cheering action, etc.), each of which indicates that the target avatar is greeting the target user. Referring to fig. 5e, when the target avatar 3011 is displayed on the terminal screen, the target avatar 3011 is immediately controlled to perform an interactive action, which includes raising the right arm of the target avatar and making a hand-waving motion. Specifically, an exemplary display process of the target avatar performing the interactive action may be seen in fig. 5f. As shown in fig. 5f, first, the right palm of the target avatar is controlled to open; secondly, the right arm of the target avatar is controlled to rotate upward by a fifth angle (such as 60 degrees), and the right forearm is controlled to slowly rotate back and forth by a sixth angle (such as plus or minus 5 degrees) with the elbow as the center; finally, the head of the target avatar is controlled to rotate to the right by a seventh angle (e.g., 20 degrees) along the y-axis, completing the process of the target avatar performing the interactive action. As described above, the target avatar is obtained by binding a virtual character model to a skeletal system, and the posture actions of the target avatar described above are essentially completed by controlling the skeletal system to perform a series of operations; from the user's perspective, the target avatar appears to be performing a series of actions.
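The staged bone operations above can be sketched as an ordered list of keyframe steps that a renderer plays back against the skeletal system; the bone names, operation names, and `play` helper are illustrative assumptions, not the patent's implementation:

```python
# The hand-waving interactive action as an ordered sequence of bone steps:
# (bone, operation, degrees). Angles follow the fig. 5f example.
WAVE_ACTION = [
    ("right_palm", "open", 0),
    ("right_arm", "rotate_up", 60),      # the "fifth angle"
    ("right_forearm", "oscillate", 5),   # plus/minus the "sixth angle"
    ("head", "rotate_y_right", 20),      # the "seventh angle"
]

def play(action, apply_step):
    """Feed each bone operation, in order, to the skeletal-system driver."""
    for bone, op, degrees in action:
        apply_step(bone, op, degrees)

steps = []
play(WAVE_ACTION, lambda bone, op, deg: steps.append((bone, op, deg)))
```

A real driver would interpolate each step over time; here the callback just records the order in which the bones are moved.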
In addition, which actions the target avatar is controlled to perform in an actual application scene, the execution order of the actions, and the like are not limited in the embodiments of the present application; for example, when the target avatar is displayed in the information service interface, only the arm movement of the target avatar may be controlled, without controlling the head movement, and so on.
The number of times the target avatar is controlled to perform the interactive action within a preset time period (e.g., within 24 hours) is not limited in the embodiment of the present application. For example, the embodiment of the application supports controlling the target avatar to perform the interactive action every time the information service interface is displayed on the terminal screen; for instance, if the information service interface is opened 8 times within 10 minutes, the target avatar is controlled to perform the interactive action each time the information service interface is opened. The embodiment of the application also supports controlling the target avatar to perform the interactive action only when the information service interface is displayed on the terminal screen for the first time within the preset time period. For example, after the information service interface is opened for the first time within 24 hours, the target avatar can be controlled to perform the interactive action, while on the second, third, fourth, and subsequent openings within those 24 hours, the target avatar is not controlled to perform the interactive action.
The implementation of judging whether the information service interface is displayed on the terminal screen for the first time within the preset time period may include: after the target avatar is displayed in the information service interface, acquiring display history information of the information service interface, where the display history information includes: the historical trigger time of each triggering and displaying of the information service interface within the preset time period; acquiring the target trigger time of the last triggering and displaying of the information service interface before the target avatar was displayed; and if the display history information only includes the target trigger time, determining that the information service interface is displayed on the terminal screen for the first time within the preset time period, and controlling the target avatar to perform the interactive action. For example, suppose the preset time period is the 24 hours of a day (e.g., 00:00-24:00), and within 00:00-24:00 the last trigger before the target avatar is displayed occurs at a target trigger time of 13:00. If the display history information includes other historical trigger times in addition to the target trigger time 13:00, it is determined that the information service interface is not displayed on the terminal screen for the first time within the preset time period; conversely, if the acquired display history information only includes the target trigger time 13:00, it is determined that the information service interface is displayed on the terminal screen for the first time within the preset time period, and the target avatar is controlled to perform the interactive action at this time.
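The first-display check reduces to testing whether the display history contains any trigger time other than the latest (target) one; a sketch under that reading, with an assumed string encoding of trigger times:

```python
def is_first_display(history_times, target_time) -> bool:
    """True when the display history for the preset period contains no
    trigger time other than the target (latest) trigger time, i.e. the
    interface is being displayed for the first time in the period."""
    return all(t == target_time for t in history_times)

# History with earlier triggers: not the first display of the day.
later = is_first_display(["09:00", "11:00", "13:00"], "13:00")
# History containing only the target trigger time: first display.
first = is_first_display(["13:00"], "13:00")
```

When `is_first_display` returns true, the terminal would control the target avatar to perform the interactive action.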
S404, responding to the acquired target prompting event, controlling the target virtual image to execute an event reminding action corresponding to the target prompting event.
It should be noted that, for a specific implementation manner of step S404, reference may be made to the related description of the specific implementation manner shown in step S203 in the embodiment shown in fig. 2, and details are not described herein again.
In addition, the embodiment of the present application also supports playing a prompt animation of the target prompt event while the target avatar performs the event reminding action, so as to strengthen the interaction among the target avatar, the target prompt event, and the target user, and to more conspicuously prompt the target user to pay attention to the target prompt event. If the target prompt event includes an event in which a new prompt element appears in the information service interface, the prompt animation for the target prompt event (i.e., the new prompt element) may include at least one of the following: (1) Controlling the new prompt element to move from a first position to a second position. As shown in fig. 5g, the new prompt element 302 appears in the information service interface at the first position; when the new prompt element 302 is at the first position, the distance between the new prompt element 302 and the target avatar 3011 is large and the interactivity is weak, so the new prompt element 302 may be controlled to move from the first position to the second position; compared with the first position, when the new prompt element 302 is at the second position, its distance from the target avatar 3011 is smaller and the interactivity is stronger. (2) Performing a target operation on the new prompt element, where the target operation includes any of: a vibration operation, a telescopic (scaling) operation, and a rotation operation. Taking the new prompt element as including the information box shown in fig. 5g as an example: the vibration operation may refer to vibrating the new prompt element at a fixed or variable frequency in the information service interface; the telescopic operation may refer to periodically scaling the length, the width, or both the length and the width of the new prompt element (e.g., the information box) in the information service interface; the rotation operation may refer to rotating the new prompt element by a certain rotation angle about the center of the area it occupies in the information service interface. It is to be understood that the played prompt animation may include animation forms other than the two described above, and the embodiment of the present application does not limit the specific animation form of the prompt animation. The prompt animation may be set by a manager and stored in a configuration file, and when the new prompt element is detected in the information service interface, the prompt animation corresponding to the new prompt element may be played directly.
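The vibration, telescopic (scaling), and rotation operations described above can be sketched as simple per-frame parameter functions; all function names, frequencies, amplitudes, and limits below are illustrative assumptions rather than values from the embodiment.

```python
import math

def vibration_offset(t, frequency=8.0, amplitude=3.0):
    """Horizontal pixel offset of the prompt element at time t (seconds),
    vibrating at a fixed frequency around its resting position."""
    return amplitude * math.sin(2 * math.pi * frequency * t)

def scale_factor(t, period=1.0, min_scale=0.9, max_scale=1.1):
    """Periodically scale the element's length/width between min and max
    (the telescopic operation), completing one cycle per `period` seconds."""
    mid = (min_scale + max_scale) / 2
    half = (max_scale - min_scale) / 2
    return mid + half * math.sin(2 * math.pi * t / period)

def rotation_angle(t, degrees_per_second=15.0, max_angle=10.0):
    """Rotate about the centre of the element's area, clamping the angle
    so the element only tilts slightly rather than spinning."""
    angle = degrees_per_second * t
    return max(-max_angle, min(max_angle, angle))
```

A renderer would evaluate these functions each frame and apply the results as a translation, scale, or rotation transform to the new prompt element.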
S405, if any interface element in the information service interface is triggered, controlling the target virtual image to execute a response action related to any interface element.
In specific implementation, if any interface element in the information service interface is triggered, the orientation relationship between that interface element and the target avatar may be acquired, and the target avatar is controlled, according to the acquired orientation relationship, to perform a response action related to that interface element. When the target user performs a trigger operation on any interface element, i.e., that interface element is triggered, the target avatar may be controlled to perform the response action related to it, thereby responding to the target user's trigger operation and increasing the interest of interface interaction. For example, the information service interface includes an interface jump option which, when triggered, causes a jump from the information service interface to the interface corresponding to that option. Then, when the interface jump option is selected in the information service interface, the mouth of the target avatar may be controlled to perform an opening-and-closing action (similar to when a person speaks) to simulate the target avatar speaking, and a message bubble is displayed in the display area where the target avatar is located, containing a prompt for the target user to jump to the next interface; for example, the prompt 601 shown in fig. 6 includes "Let's take a look at the next interface!".
For another example: the information service interface includes an icon that has no trigger permission, i.e., when the target user triggers the icon, the information service interface gives no feedback. Then, when the icon is selected in the information service interface, the target avatar may be controlled to change from its current expression to a puzzled expression, and a message bubble containing a question mark is displayed in the display area where the target avatar is located, to prompt the target user that the icon cannot be triggered. It should be noted that the implementation manners described above in which the target avatar performs a response action are merely exemplary, and the specific implementation manner of controlling the target avatar to perform a response action after an interface element in the information service interface is triggered is not limited in the embodiment of the present application.
In summary, through the detailed descriptions of steps S403 to S405, the embodiments of the present application respectively give the specific implementation manners of controlling the target avatar to perform an event reminding action related to the target prompt event, controlling the target avatar to perform an interactive action, and controlling the target avatar to perform a response action related to any interface element. It can be understood, however, that some common implementation manners may apply whenever the target avatar performs any one of these three actions; for ease of understanding, the following describes implementation manners that may exist when the target avatar performs any of the three actions, wherein:
(1) In the process of controlling the target avatar to perform any action, the embodiment of the present application also supports controlling the expression of the target avatar to change correspondingly. Any action may include an event reminding action, an interactive action, or a response action; the change of the target avatar's expression differs according to the action being performed. Taking the event reminding action as an example: when the target prompt event is acquired, the expression of the target avatar is controlled to change from the current expression to a target expression, where the current expression refers to the expression presented by the target avatar at the time the target prompt event is acquired. Referring to fig. 3d, before the target prompt event is obtained, the expression presented by the target avatar 3011 is a smile; when a target prompt event is acquired, such as a new prompt element 302 appearing in the information service interface, the target avatar 3011 may be controlled to change its expression from a smile to puzzlement. The change of expression may be realized by playing an expression animation; specifically, when the target prompt event is acquired, an expression animation set for the target avatar 3011 and related to the target prompt event may be played. In this way, the emotion conveyed by the target avatar 3011 can be better enriched. As shown in fig. 1a, the played expression animation is stored in a configuration file, and when a target prompt event is obtained, a dynamic-effect instruction can be obtained to control the expression change of the target avatar. Taking the interactive action as an example: when the information service interface is displayed on the terminal screen, the expression of the target avatar is controlled to change from a reference expression to a designated expression.
The reference expression here may refer to an expression, set by the manager or the target user, that the target avatar displays by default when shown in the information service interface, such as a smile. With continued reference to fig. 5e, when the target avatar is displayed in the information service interface, the expression animation set to be played for the target avatar upon its display in the information service interface may be played.
Alternatively, when the target prompt event is obtained, the current mood state of the target user may be obtained, and the expression of the target avatar is then controlled to change from the current expression to a target expression related to the current mood state; the current expression here refers to the expression presented by the target avatar at the time the target prompt event is acquired. The process of obtaining the current mood state of the target user may include, but is not limited to: (1) acquiring a facial image of the target user by calling a camera assembly (such as a camera), performing expression recognition processing on the facial image to obtain the facial expression of the target user, and predicting the current mood state of the target user based on the facial expression. For example, when the facial expression obtained through expression recognition processing is a smile, the current mood state of the target user is predicted to be happy; as another example, when the facial expression obtained through expression recognition processing is pouting or crying, the current mood state of the target user is predicted to be low or sad; and so on. (2) Alternatively, historical behavior data of the target user is obtained, which may include any one or more of the following: audio and video playing data, text editing data, social data, and the like; and emotion recognition is performed on the target user according to the historical behavior data to obtain the current mood state of the target user.
For example, if the acquired historical behavior data of the target user indicates that the type of audio (such as music) or video recently played by the target user is cheerful, the current mood state of the target user is determined to be happy. As another example, if the historical behavior data indicates that the types of historical documents browsed by the target user, such as novels and articles, are rather gloomy, the current mood state of the target user is determined to be low. As yet another example, if the historical behavior data indicates that the target user has published cheerful social updates (e.g., in a friend circle, or as status information published on the target user's personal homepage), such as posting photos of food or travel, the current mood state of the target user is determined to be happy. Through the above process, the target avatar can be controlled to present a target expression related to the current mood state of the target user. For example, if the current mood state of the target user is detected to be happy, the target avatar may be controlled to output a happy expression; if the current mood state of the target user is detected to be low, the target avatar may be controlled to output a smiling (or cheering) expression and to make an encouraging "cheer up" gesture; and so on.
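A minimal sketch of the rule-based emotion recognition over historical behavior data described above follows; the category keys and descriptive tags are hypothetical stand-ins for the audio/video playing data, text browsing data, and social data, not an actual data schema from the embodiment.

```python
def predict_mood(history):
    """Toy rule-based mood predictor.

    `history` maps behavior-data categories to lists of descriptive tags,
    e.g. {"audio_video": ["cheerful"], "social": ["travel_photos"]}."""
    if "cheerful" in history.get("audio_video", []):
        return "happy"          # recently played cheerful music or video
    if "gloomy" in history.get("text_browsing", []):
        return "low"            # recently browsed rather gloomy documents
    if any(tag in ("food_photos", "travel_photos")
           for tag in history.get("social", [])):
        return "happy"          # published cheerful social updates
    return "neutral"            # no signal: make no assumption about mood
```

The predicted state could then select the target expression, e.g. a happy expression for "happy" and an encouraging gesture for "low".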
The above description uses acquiring the current mood state of the target user when the target prompt event is acquired as an example, but it can be understood that the current mood state of the target user may also be acquired when no target prompt event is acquired, and the target avatar is then controlled to change from its current expression to the target expression related to the current mood state of the target user. In addition, the embodiment of the present application does not limit how the target avatar's expression changes; adaptive schemes other than the above examples also apply to the embodiment of the present application.
(2) After the target avatar is controlled to perform any action and has maintained the pose at the end of that action for a certain time period, the embodiment of the present application also supports controlling the target avatar to restore the pose it held before performing the action; the pose may include the position and posture of the target avatar in the information service interface. Taking the event reminding action as an example: when the target avatar appears on the terminal screen, it is in a reference pose; after the target avatar performs the event reminding action, it is in a target pose; the duration for which the target avatar remains in the target pose is counted, and if the duration is greater than a duration threshold, the target avatar is controlled to recover from the target pose to the reference pose. As shown in fig. 3d, assume the target prompt event includes an event in which a new prompt element appears in the information service interface: when the new prompt element appears, the target avatar is controlled to perform the event reminding action, and after performing it, the target pose of the target avatar includes the right arm pointing at the new prompt element. When the duration for which the target avatar remains in the target pose is greater than the duration threshold (for example, if the duration threshold is 5 seconds, a duration of 5.1 seconds is determined to be greater than the threshold), the target avatar is controlled to recover from the target pose to the reference pose. It should be noted that the process of restoring the target avatar to the reference pose after it performs the interactive action or the response action is similar and will not be elaborated here.
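The duration-threshold check above can be sketched as follows, using the 5-second threshold from the example; the function name is hypothetical.

```python
def pose_after(elapsed_seconds, duration_threshold=5.0):
    """Return the pose the avatar should hold, given how long it has
    remained in the target pose since finishing the action. The pose is
    restored only once the duration strictly exceeds the threshold."""
    if elapsed_seconds > duration_threshold:
        return "reference_pose"   # restore the pose held before the action
    return "target_pose"          # keep holding the end-of-action pose

assert pose_after(5.0) == "target_pose"      # exactly at threshold: keep holding
assert pose_after(5.1) == "reference_pose"   # 5.1 s > 5 s: recover
```

In practice this would be evaluated by a timer started when the event reminding action completes, with the threshold possibly varying by action type as noted below.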
In addition, the specific value of the duration threshold may be related to the type of the executed action, and the specific value of the duration threshold is not limited in the embodiment of the present application.
(3) In the process of controlling the target avatar to perform any action, the embodiment of the present application also supports displaying, in the display area where the target avatar is located, action description information related to that action; the action description information is used to describe the purpose of performing the action. Any action includes an event reminding action, an interactive action, or a response action. As shown in fig. 7, when the action is an interactive action, the action description information of the interactive action may include "hi", i.e., the action description information describes that the interactive action is a greeting to the target user. When the action is an event reminding action, the action description information of the event reminding action may be displayed in the display area of the target avatar; for example, a message bubble is displayed in the display area of the target avatar, and the action description information is shown in the message bubble, such as "Please complete your profile as soon as possible!", i.e., the action description information is used to prompt the target user to complete the profile.
(4) In the process of controlling the target avatar to perform any action, the embodiment of the present application also supports outputting a target voice audio, which is generated according to the action description information of that action; any action includes an event reminding action, an interactive action, or a response action. For example, when the action is an interactive action and its action description information is "hi", the target voice audio may be output with content including "hi"; outputting the action description information through voice audio can give the target user a more intuitive prompt. In addition, if the information service interface is an interface in a target application related to a target game, the sound effect of the output target voice audio matches the sound effect of the virtual game object used by the target user in the target game. For example, assume the target game includes virtual game object 1, virtual game object 2, and virtual game object 3, and the virtual game object used by the target user in the target game is virtual game object 1; then the sound effect of the output target voice audio matches the sound effect of virtual game object 1.
It is understood that, in the process of executing any action by the control target avatar, the embodiment of the present application is not limited to executing only one or more implementations described above; in an actual application scenario, other action performance forms may exist in the process of controlling the target avatar to execute any action, and the embodiment of the present application does not limit this.
In addition, if the information service interface is an interface in a target application related to a target game, a game invitation can be initiated in a timely manner while the target user is deeply interacting with the target avatar. In specific implementation, in the process of displaying the target avatar in the information service interface, if a game invitation triggering event exists, a game invitation notification is output to notify the target user that the target avatar invites the target user to participate in the target game; if the game invitation notification is triggered, a game screen of the target game is output. The game invitation triggering event may refer to, for example, an event generated whenever the background initiates a game invitation prompt at fixed time intervals. An exemplary game invitation process can be seen in fig. 8: if a game invitation triggering event exists while the target avatar is displayed in the information service interface, a game invitation notification 801 is output in the information service interface, displayed in the form of a message bubble in the display area where the target avatar 3011 is located; if the game invitation notification 801 is triggered, a game screen of the target game is output on the terminal screen, indicating that the target user starts participating in the target game. Of course, fig. 8 shows only an exemplary game invitation process, and the embodiment of the present application does not limit how the target user is invited to participate in the target game.
In addition to the above-described manner in which the background invites the target user to participate in the target game, the embodiment of the present application further supports the target avatar sending a prompt message, which is used to prompt the target user that he or she may invite other users to participate in the target game. For example, when the game invitation notification 801 is triggered, a contact list may be output on the terminal screen in the form of a floating window or a separate interface, the contact list including other users who have a relationship with the target user, such as a friend relationship, a session relationship, or a colleague relationship; the target user may select any user identification from the contact list and confirm sending a game invitation notification to the user indicated by the selected identification. Through this process, initiating the game invitation in a timely manner while the target user is deeply interacting with the target avatar can increase the target user's participation in the target game and the convenience with which the target user joins it.
Besides the implementation described above in which the target avatar is always displayed in the information service interface once the interface is displayed, the embodiment of the present application also supports deleting the target avatar from the information service interface after the target avatar finishes performing any action (such as an event reminding action, an interactive action, or a response action), and displaying the target avatar in the information service interface again when a new prompt event is acquired. In other words, when a target prompt event is acquired, the target avatar may be displayed in the information service interface; after the target avatar is controlled to perform an action (such as the event reminding action) in the information service interface, the target avatar is deleted from the information service interface; and if a new prompt event is received, the target avatar is displayed in the information service interface again and controlled to perform the action corresponding to the new prompt event. In this manner, outputting the target avatar in the information service interface only after a new prompt event is obtained can enhance the user's attention to and sense of identification with the target avatar, and thereby increase the user's attention to the prompt event.
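The show-on-prompt, delete-after-action mode described above can be sketched as a small state holder; the class and method names are hypothetical, not from the embodiment.

```python
class AvatarLifecycle:
    """Sketch of the mode in which the target avatar is displayed when a
    prompt event arrives and deleted once its reminding action completes."""

    def __init__(self):
        self.visible = False
        self.log = []

    def on_prompt_event(self, event):
        # Display the target avatar in the information service interface.
        self.visible = True
        self.log.append(("show", event))
        self._perform_reminding_action(event)
        # Delete the avatar again once the action has finished.
        self.visible = False
        self.log.append(("delete", event))

    def _perform_reminding_action(self, event):
        # Placeholder for controlling the avatar's event reminding action.
        self.log.append(("remind", event))

life = AvatarLifecycle()
life.on_prompt_event("new_prompt_element")
assert life.visible is False  # avatar removed after the action finishes
```

Each subsequent prompt event would repeat the show/remind/delete cycle, matching the behavior described above.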
In the embodiment of the present application, when the target prompt event is acquired, the target avatar can be controlled to perform the event reminding action corresponding to the target prompt event, so as to guide the target user to pay attention to the target prompt event, such as a prompt element newly appearing in the information service interface. When the information service interface is displayed on the terminal screen and the target avatar is displayed in the information service interface, the target avatar can be controlled to perform the interactive action, which draws the target user's attention to the target avatar from the start, improves the target user's sense of identification with the target avatar, and makes the operation experience more immersive. When a trigger operation on any interface element in the information service interface occurs, controlling the target avatar to perform a response action is supported, so as to respond to the target user's trigger operation and enhance the interaction between the target avatar and the target user. In combination, adding the target avatar to the information service interface and having it interact with the target user can enrich the target user's operation experience and improve user stickiness.
While the method of the embodiments of the present application has been described in detail above, to facilitate better implementation of the above-described aspects of the embodiments of the present application, the apparatus of the embodiments of the present application is provided below accordingly.
Fig. 9 is a schematic structural diagram illustrating an event notification processing apparatus according to an exemplary embodiment of the present application, where the event notification processing apparatus may be a computer program (including program code) running in a terminal; the event alert processing apparatus may be used to perform some or all of the steps in the method embodiments shown in fig. 2 and 4. Referring to fig. 9, the event notification processing apparatus includes the following units:
a display unit 901, configured to display an information service interface;
the display unit 901 is further configured to display a target avatar in the information service interface;
and the processing unit 902 is configured to, in response to the obtained target prompt event, control the target avatar to execute an event reminding action corresponding to the target prompt event.
In an implementation manner, when the processing unit 902 is configured to control the target avatar to execute the event notification action corresponding to the target prompt event, the processing unit is specifically configured to:
controlling the target virtual image to be adjusted from the current posture to a target posture so as to enable the target virtual image to execute an event reminding action corresponding to the target reminding event;
wherein, the current posture refers to: when a target prompt event is obtained, the posture of the target virtual image is located; the target attitude means: and determining the posture according to the target prompt event.
In one implementation, the target cue event includes: an event of a new prompt element exists in the information service interface; the processing unit 902 is configured to, when controlling the target avatar to adjust from the current pose to the target pose, specifically:
acquiring a first position coordinate of the new prompt element in the information service interface and a second position coordinate of the target virtual image in the information service interface;
calculating a target orientation relation between the target avatar and the new prompt element according to the first position coordinate and the second position coordinate; the target orientation relationship indicates: the new prompt element is positioned in the target direction of the target virtual image;
controlling the target virtual image to be adjusted from the current posture to the target posture according to the target azimuth relation; the target pose includes: a gesture in which one or more body parts in the target avatar are oriented to match the target direction.
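A minimal sketch of computing the target orientation relationship from the first and second position coordinates follows; the screen-space coordinate convention (x growing rightward, y growing downward) and the function name are assumptions for illustration.

```python
import math

def target_direction(avatar_pos, element_pos):
    """Determine which direction the new prompt element lies in relative
    to the target avatar, from their interface coordinates, so that body
    parts of the avatar can be oriented to match that direction."""
    dx = element_pos[0] - avatar_pos[0]
    dy = element_pos[1] - avatar_pos[1]
    horizontal = "right" if dx >= 0 else "left"
    vertical = "below" if dy >= 0 else "above"  # y grows downward on screen
    # Signed angle, useful for fine-grained posing of e.g. an arm.
    angle = math.degrees(math.atan2(dy, dx))
    return horizontal, vertical, angle

# Element at (400, 100), avatar at (200, 500): up and to the right.
h, v, a = target_direction((200, 500), (400, 100))
assert (h, v) == ("right", "above")
```

The returned direction could then drive the pose adjustment, e.g. pointing the avatar's right arm toward the new prompt element.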
In one implementation, the target cue event includes: an event of a new prompt element exists in the information service interface; the processing unit 902 is configured to, when controlling the target avatar to execute the event reminding action corresponding to the target reminding event, specifically:
moving the target avatar from the current position to the target position to enable the target avatar to execute an event reminding action corresponding to the target reminding event;
wherein, the current position refers to: when a target prompt event is obtained, the position of the target virtual image is located; the target position is: and the position of the new prompt element in the information service interface.
In one implementation, the processing unit 902 is further configured to:
in the process of controlling the target avatar to perform the event reminding action, or within a preset time length after the target avatar finishes the event reminding action, if the target avatar is triggered, outputting an event detail interface of the target prompt event.
In one implementation, the processing unit 902 is further configured to:
under the condition that the target virtual image does not execute the event reminding action, if the target virtual image is detected to be triggered, outputting an event identification list; the list of event identifications includes: one or more event identifications corresponding to historical prompting events, wherein the historical prompting events refer to: a prompt event acquired before the target avatar is triggered;
when any event identification in the event identification list is selected, an event detail interface of the history prompt event indicated by the selected event identification is displayed.
In one implementation, the processing unit 902 is further configured to:
deleting the target virtual image in the information service interface after the target virtual image finishes executing the event reminding action;
and when a new prompt event is acquired, displaying a target virtual image in the information service interface.
In one implementation, the processing unit 902 is further configured to:
under the condition that a target prompt event is obtained, the current mood state of a target user is obtained;
controlling the expression of the target virtual image to change from the current expression to a target expression related to the current mood state;
wherein, the current expression means: and when the target prompt event is acquired, the expression presented by the target virtual image.
In an implementation manner, when the processing unit 902 is configured to obtain the current mood state of the target user, it is specifically configured to:
calling a camera shooting assembly to acquire a face image of a target user; performing expression recognition processing on the face image to obtain the face expression of the target user; predicting the current mood state of the target user based on the facial expression;
or acquiring historical behavior data of the target user, wherein the historical behavior data comprises any one or more of the following: audio and video playing data, text editing data and social data; and performing emotion recognition on the target user according to the historical behavior data to obtain the current mood state of the target user.
In one implementation, the processing unit 902 is further configured to:
acquiring display history information of an information service interface, wherein the display history information comprises: triggering and displaying the historical triggering time of the information service interface each time within a preset time period;
acquiring a target triggering moment of triggering and displaying an information service interface for the last time before displaying a target virtual image;
and if the display history information only includes the target trigger time, determining that the information service interface is displayed on the terminal screen for the first time within the preset time period, and controlling the target avatar to perform the interactive action.
In one implementation, the processing unit 902 is further configured to:
if any interface element in the information service interface is triggered, acquiring the orientation relation between any interface element and the target virtual image;
and controlling the target virtual image to execute a response action related to any interface element according to the acquired orientation relation.
In one implementation, the processing unit 902 is further configured to:
while the target avatar performs any action, display action description information about that action in the display area where the target avatar is located, the action description information being used to describe the purpose of the action;
wherein the action includes an event reminding action, an interactive action, or a response action.
In one implementation, the processing unit 902 is further configured to:
output a target voice audio while the target avatar performs any action, the target voice audio being generated according to the action description information of that action;
wherein the action includes an event reminding action, an interactive action, or a response action.
In one implementation, the processing unit 902 is further configured to:
display an image setting interface, the image setting interface including a reference avatar and one or more candidate skin resources;
when a target skin resource is selected from the one or more candidate skin resources, update and display the reference avatar with the target skin resource in the image setting interface;
and if a confirmation operation for the updated reference avatar is detected, take the updated reference avatar as the target avatar.
In one implementation, the target avatar is in a reference pose when it appears in the information service interface, and is in a target pose after it performs the event reminding action; the processing unit 902 is further configured to:
count the duration for which the target avatar stays in the target pose;
and if the duration is greater than a duration threshold, control the target avatar to recover from the target pose to the reference pose.
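A minimal sketch of the pose-recovery rule above, assuming durations are measured in seconds (the unit is an assumption; the disclosure only requires a duration threshold):

```python
def next_pose(duration_in_target_pose: float, duration_threshold: float) -> str:
    """Return the pose the avatar should be in: it recovers to the
    reference pose once it has held the target pose longer than the
    duration threshold; otherwise it stays in the target pose."""
    if duration_in_target_pose > duration_threshold:
        return "reference"
    return "target"
```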
In one implementation, the target prompt event includes an event that a new prompt element exists in the information service interface; the processing unit 902 is further configured to:
play a prompt animation of the new prompt element while the target avatar performs the event reminding action;
wherein the prompt animation includes at least one of: an animation that moves the new prompt element from a first position to a second position; and an animation that performs a target operation on the new prompt element, the target operation including any one or more of a vibration operation, a telescopic operation, and a rotation operation.
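The movement and vibration variants of the prompt animation can be sketched as simple keyframe generators. The frame counts, amplitudes, and coordinate conventions below are illustrative assumptions, not part of the disclosure:

```python
import math

def move_keyframes(first, second, frames):
    """Positions that move the new prompt element linearly from the first
    position to the second position over the given number of frames."""
    (x0, y0), (x1, y1) = first, second
    return [
        (x0 + (x1 - x0) * i / (frames - 1), y0 + (y1 - y0) * i / (frames - 1))
        for i in range(frames)
    ]

def vibration_offsets(frames, amplitude):
    """Horizontal offsets for a vibration operation: one sine cycle of
    the given amplitude across the given number of frames."""
    return [amplitude * math.sin(2 * math.pi * i / frames) for i in range(frames)]
```

A telescopic (scaling) or rotation operation could be generated the same way by interpolating a scale factor or an angle instead of a position.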
In one implementation, the information service interface is an interface in a target application related to a target game, and the processing unit 902 is further configured to:
while the target avatar is displayed, if a game invitation trigger event exists, output a game invitation notification through the target avatar to invite the target user to participate in the target game;
and if the game invitation notification is triggered, output a game screen of the target game.
According to an embodiment of the present application, the units in the event prompt processing apparatus shown in fig. 9 may be separately or wholly combined into one or several other units, or one or more of the units may be further split into multiple functionally smaller units; this can implement the same operations without affecting the technical effects of the embodiments of the present application. The above units are divided based on logic functions; in practical applications, the function of one unit may be implemented by multiple units, or the functions of multiple units may be implemented by one unit. In other embodiments of the present application, the event prompt processing apparatus may also include other units, and in practical applications these functions may be implemented with the assistance of other units or through the cooperation of multiple units. According to another embodiment of the present application, the event prompt processing apparatus shown in fig. 9 may be constructed by running a computer program (including program code) capable of executing the steps of the methods shown in fig. 2 and fig. 4 on a general-purpose computing device, such as a computer, that includes processing elements such as a central processing unit (CPU) and storage elements such as a random access memory (RAM) and a read-only memory (ROM), thereby implementing the event prompt processing method of the embodiments of the present application. The computer program may be recorded on, for example, a computer-readable recording medium, and loaded into and executed in the above computing device via the computer-readable recording medium.
In this embodiment, when the display unit 901 displays the information service interface on the terminal screen, it may further output a target avatar on the terminal screen, thereby enriching the types of elements on the terminal screen and making interface browsing more engaging. When a target prompt event is acquired, i.e. a new prompt element is detected in the information service interface, the processing unit 902 controls the target avatar to perform an event reminding action corresponding to the target prompt event. The interaction between the target avatar and the target user increases the user's sense of immersion, and the target avatar guides the target user's attention to the new prompt element so that the user notices it in time. This effectively improves the timeliness of prompting for the new prompt element, prevents it from being overlooked, and increases the attention it receives.
Fig. 10 is a schematic structural diagram of an event prompt processing device according to an exemplary embodiment of the present application; the event prompt processing device may be the aforementioned terminal. Referring to fig. 10, the event prompt processing device (or terminal) includes a processor 1001, a communication interface 1002, and a computer-readable storage medium 1003, which may be connected by a bus or in other ways. The communication interface 1002 is used for receiving and transmitting data. The computer-readable storage medium 1003 may be stored in a memory of the terminal and is used to store a computer program comprising program instructions; the processor 1001 is used to execute the program instructions stored in the computer-readable storage medium 1003. The processor 1001 (or CPU) is the computing and control core of the terminal, adapted to implement one or more instructions, and in particular to load and execute the one or more instructions so as to implement the corresponding method flow or function.
An embodiment of the present application further provides a computer-readable storage medium (memory), which is a memory device in the terminal used for storing programs and data. The computer-readable storage medium here may include both a built-in storage medium of the terminal and an extended storage medium supported by the terminal. The computer-readable storage medium provides a storage space that stores the processing system of the terminal. One or more instructions suitable for being loaded and executed by the processor 1001, which may be one or more computer programs (including program code), are also stored in this storage space. The computer-readable storage medium may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory; optionally, it may also be at least one computer-readable storage medium located remotely from the aforementioned processor.
In one embodiment, one or more instructions are stored in the computer-readable storage medium; the one or more instructions are loaded and executed by the processor 1001 to implement the corresponding steps of the event prompt processing method in the above embodiments. In a particular implementation, the one or more instructions in the computer-readable storage medium are loaded by the processor 1001 to perform the following steps:
displaying an information service interface;
displaying a target avatar in the information service interface;
and in response to an acquired target prompt event, controlling the target avatar to perform an event reminding action corresponding to the target prompt event.
In one implementation, when the one or more instructions in the computer-readable storage medium are loaded by the processor 1001 to control the target avatar to perform the event reminding action corresponding to the target prompt event, the following steps are specifically performed:
controlling the target avatar to adjust from a current posture to a target posture, so that the target avatar performs the event reminding action corresponding to the target prompt event;
wherein the current posture refers to the posture the target avatar is in when the target prompt event is acquired, and the target posture is a posture determined according to the target prompt event.
In one implementation, the target prompt event includes an event that a new prompt element exists in the information service interface; when the one or more instructions in the computer-readable storage medium are loaded by the processor 1001 to control the target avatar to adjust from the current posture to the target posture, the following steps are specifically performed:
acquiring a first position coordinate of the new prompt element in the information service interface and a second position coordinate of the target avatar in the information service interface;
calculating a target orientation relation between the target avatar and the new prompt element according to the first position coordinate and the second position coordinate, the target orientation relation indicating that the new prompt element is located in a target direction of the target avatar;
and controlling the target avatar to adjust from the current posture to the target posture according to the target orientation relation, the target posture including a posture in which the orientation of one or more body parts of the target avatar matches the target direction.
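A minimal sketch of deriving the target direction from the two position coordinates. The four-way direction labels, the head-orientation mapping, and the screen coordinate convention (x grows rightward, y grows downward) are assumptions for illustration:

```python
def target_direction(avatar_pos, element_pos):
    """Classify where the new prompt element lies relative to the target
    avatar, using interface coordinates (x grows rightward, y downward)."""
    dx = element_pos[0] - avatar_pos[0]
    dy = element_pos[1] - avatar_pos[1]
    if abs(dx) >= abs(dy):
        return "right" if dx >= 0 else "left"
    return "below" if dy >= 0 else "above"

def matching_head_orientation(direction):
    """Orient one body part of the avatar (here, the head) to match the
    target direction, so the avatar appears to look at the new element."""
    return {"right": "face_right", "left": "face_left",
            "above": "face_up", "below": "face_down"}[direction]
```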
In one implementation, the target prompt event includes an event that a new prompt element exists in the information service interface; when the one or more instructions in the computer-readable storage medium are loaded by the processor 1001 to control the target avatar to perform the event reminding action corresponding to the target prompt event, the following steps are specifically performed:
moving the target avatar from a current position to a target position, so that the target avatar performs the event reminding action corresponding to the target prompt event;
wherein the current position refers to the position of the target avatar when the target prompt event is acquired, and the target position is the position of the new prompt element in the information service interface.
In one implementation, the one or more instructions in the computer-readable storage medium are loaded by the processor 1001 to further perform the following steps:
while the target avatar is being controlled to perform the event reminding action, or within a preset duration after the target avatar finishes the event reminding action, if the target avatar is triggered, outputting an event detail interface of the target prompt event.
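The trigger window above (during the action, or within a preset duration after it ends) can be sketched as a single interval check; the numeric timestamps are illustrative:

```python
def should_open_detail_interface(trigger_time, action_start, action_end,
                                 preset_duration):
    """True when the avatar is triggered during the event reminding action
    or within the preset duration after the action finishes, in which case
    the event detail interface of the target prompt event is output."""
    return action_start <= trigger_time <= action_end + preset_duration
```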
In one implementation, the one or more instructions in the computer-readable storage medium are loaded by the processor 1001 to further perform the following steps:
in the case that the target avatar is not performing the event reminding action, if it is detected that the target avatar is triggered, outputting an event identification list, the event identification list including one or more event identifications corresponding to historical prompt events, a historical prompt event being a prompt event acquired before the target avatar is triggered;
and when any event identification in the event identification list is selected, displaying an event detail interface of the historical prompt event indicated by the selected event identification.
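Building the event identification list reduces to filtering prompt events by acquisition time. The event record fields (`id`, `acquired_at`) are hypothetical names for illustration:

```python
def build_event_id_list(prompt_events, avatar_trigger_time):
    """Collect the identifications of historical prompt events, i.e.
    prompt events acquired before the target avatar was triggered."""
    return [event["id"] for event in prompt_events
            if event["acquired_at"] < avatar_trigger_time]
```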
In one implementation, the one or more instructions in the computer-readable storage medium are loaded by the processor 1001 to further perform the following steps:
deleting the target avatar from the information service interface after the target avatar finishes the event reminding action;
and when a new prompt event is acquired, displaying the target avatar in the information service interface again.
In one implementation, the one or more instructions in the computer-readable storage medium are loaded by the processor 1001 to further perform the following steps:
in the case that the target prompt event is acquired, acquiring the current mood state of the target user;
and controlling the expression of the target avatar to change from a current expression to a target expression related to the current mood state;
wherein the current expression refers to the expression presented by the target avatar when the target prompt event is acquired.
In one implementation, when the one or more instructions in the computer-readable storage medium are loaded by the processor 1001 to acquire the current mood state of the target user, the following steps are specifically performed:
calling a camera component to collect a facial image of the target user, performing expression recognition processing on the facial image to obtain the facial expression of the target user, and predicting the current mood state of the target user based on the facial expression;
or acquiring historical behavior data of the target user, the historical behavior data including any one or more of audio and video playing data, text editing data, and social data, and performing emotion recognition on the target user according to the historical behavior data to obtain the current mood state of the target user.
In one implementation, the one or more instructions in the computer-readable storage medium are loaded by the processor 1001 to further perform the following steps:
acquiring display history information of the information service interface, the display history information including the historical trigger time of each triggered display of the information service interface within a preset time period;
acquiring the target trigger time at which the information service interface was last triggered for display before the target avatar is displayed;
and if the display history information includes only the target trigger time, determining that the information service interface is displayed on the terminal screen for the first time within the preset time period, and controlling the target avatar to perform an interactive action.
In one implementation, the one or more instructions in the computer-readable storage medium are loaded by the processor 1001 to further perform the following steps:
if any interface element in the information service interface is triggered, acquiring the orientation relation between that interface element and the target avatar;
and controlling the target avatar to perform a response action related to that interface element according to the acquired orientation relation.
In one implementation, the one or more instructions in the computer-readable storage medium are loaded by the processor 1001 to further perform the following steps:
while the target avatar performs any action, displaying action description information about that action in the display area where the target avatar is located, the action description information being used to describe the purpose of the action;
wherein the action includes an event reminding action, an interactive action, or a response action.
In one implementation, the one or more instructions in the computer-readable storage medium are loaded by the processor 1001 to further perform the following steps:
outputting a target voice audio while the target avatar performs any action, the target voice audio being generated according to the action description information of that action;
wherein the action includes an event reminding action, an interactive action, or a response action.
In one implementation, the one or more instructions in the computer-readable storage medium are loaded by the processor 1001 to further perform the following steps:
displaying an image setting interface, the image setting interface including a reference avatar and one or more candidate skin resources;
when a target skin resource is selected from the one or more candidate skin resources, updating and displaying the reference avatar with the target skin resource in the image setting interface;
and if a confirmation operation for the updated reference avatar is detected, taking the updated reference avatar as the target avatar.
In one implementation, the target avatar is in a reference pose when it appears in the information service interface, and is in a target pose after it performs the event reminding action; the one or more instructions in the computer-readable storage medium are loaded by the processor 1001 to further perform the following steps:
counting the duration for which the target avatar stays in the target pose;
and if the duration is greater than a duration threshold, controlling the target avatar to recover from the target pose to the reference pose.
In one implementation, the target prompt event includes an event that a new prompt element exists in the information service interface; the one or more instructions in the computer-readable storage medium are loaded by the processor 1001 to further perform the following steps:
playing a prompt animation of the new prompt element while the target avatar performs the event reminding action;
wherein the prompt animation includes at least one of: an animation that moves the new prompt element from a first position to a second position; and an animation that performs a target operation on the new prompt element, the target operation including any one or more of a vibration operation, a telescopic operation, and a rotation operation.
In one implementation, the information service interface is an interface in a target application related to a target game, and the one or more instructions in the computer-readable storage medium are loaded by the processor 1001 to further perform the following steps:
while the target avatar is displayed, if a game invitation trigger event exists, outputting a game invitation notification through the target avatar to invite the target user to participate in the target game;
and if the game invitation notification is triggered, outputting a game screen of the target game.
In this embodiment, when the processor 1001 detects that the information service interface is displayed on the terminal screen, it may further output a target avatar on the terminal screen, thereby enriching the types of elements on the terminal screen and making interface browsing more engaging. When the processor 1001 acquires a target prompt event, i.e. a new prompt element is detected in the information service interface, it controls the target avatar to perform an event reminding action corresponding to the target prompt event. The interaction between the target avatar and the target user increases the user's sense of immersion, and the target avatar guides the target user's attention to the new prompt element so that the user notices it in time. This effectively improves the timeliness of prompting for the new prompt element, prevents it from being overlooked, and increases the attention it receives.
Embodiments of the present application also provide a computer program product or a computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the event prompt processing device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions, so that the event prompt processing device executes the event prompt processing method.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on, or transmitted via, a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state disks (SSDs)), among others.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that can readily occur to a person skilled in the art within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

1. An event prompt processing method, comprising:
displaying an information service interface;
displaying a target avatar in the information service interface;
and in response to an acquired target prompt event, controlling the target avatar to perform an event reminding action corresponding to the target prompt event.
2. The method of claim 1, wherein the controlling the target avatar to perform an event reminding action corresponding to the target prompt event comprises:
controlling the target avatar to adjust from a current posture to a target posture, so that the target avatar performs the event reminding action corresponding to the target prompt event;
wherein the current posture refers to the posture the target avatar is in when the target prompt event is acquired, and the target posture is a posture determined according to the target prompt event.
3. The method of claim 2, wherein the target cue event comprises: an event of a new prompt element exists in the information service interface; the controlling the target avatar to adjust from a current pose to a target pose includes:
acquiring a first position coordinate of the new prompt element in the information service interface and a second position coordinate of the target avatar in the information service interface;
calculating a target orientation relationship between the target avatar and the new prompt element according to the first position coordinate and the second position coordinate; the target orientation relationship indicates: the new prompt element is positioned in the target direction of the target avatar;
controlling the target virtual image to be adjusted from the current posture to the target posture according to the target azimuth relation; the target pose includes: a gesture in which the orientation of one or more body parts in the target avatar matches the target direction.
4. The method of claim 1, wherein the target prompt event comprises: an event that a new prompt element exists in the information service interface; and the controlling the target avatar to perform an event reminding action corresponding to the target prompt event comprises:
moving the target avatar from a current position to a target position, so that the target avatar performs the event reminding action corresponding to the target prompt event;
wherein the current position refers to the position of the target avatar when the target prompt event is acquired, and the target position is the position of the new prompt element in the information service interface.
5. The method of any one of claims 1-4, further comprising:
and while the target avatar is being controlled to perform the event reminding action, or within a preset duration after the target avatar finishes the event reminding action, if the target avatar is triggered, outputting an event detail interface of the target prompt event.
6. The method of any one of claims 1-4, further comprising:
under the condition that the target virtual image does not execute the event reminding action, if the target virtual image is detected to be triggered, outputting an event identification list; the event identification list includes: one or more event identifications corresponding to historical prompting events, wherein the historical prompting events refer to: a prompt event acquired before the target avatar is triggered;
when any event identifier in the event identifier list is selected, an event detail interface of the history prompt event indicated by the selected event identifier is displayed.
7. The method of any one of claims 1-4, further comprising:
deleting the target avatar from the information service interface after the target avatar finishes the event reminding action;
and when a new prompt event is acquired, displaying the target avatar in the information service interface again.
8. The method of any one of claims 1-4, further comprising:
under the condition that the target prompt event is obtained, obtaining the current mood state of a target user;
controlling the expression of the target virtual image to change from the current expression to a target expression related to the current mood state;
wherein the current expression refers to: the expression presented by the target avatar when the target prompt event is acquired.
9. The method of claim 8, wherein the obtaining the current mood state of the target user comprises:
calling a camera component to collect a facial image of the target user; performing expression recognition processing on the facial image to obtain the facial expression of the target user; and predicting the current mood state of the target user based on the facial expression;
or acquiring historical behavior data of the target user, wherein the historical behavior data comprises any one or more of the following: audio and video playing data, text editing data and social data; and performing emotion recognition on the target user according to the historical behavior data to obtain the current mood state of the target user.
10. The method of claim 1, wherein after displaying the target avatar in the information service interface, the method further comprises:
acquiring display history information of the information service interface, wherein the display history information comprises: the historical trigger time of each triggered display of the information service interface in a preset time period;
acquiring a target trigger moment for triggering and displaying the information service interface for the last time before the target virtual image is displayed;
and if the display history information includes only the target trigger moment, determining that the information service interface is displayed on a terminal screen for the first time within the preset time period, and controlling the target avatar to perform an interactive action.
11. The method of claim 1, wherein the method further comprises:
if any interface element in the information service interface is triggered, acquiring the orientation relation between the any interface element and the target virtual image;
and controlling the target virtual image to execute a response action related to any interface element according to the acquired orientation relation.
12. The method of claim 1, 10 or 11, further comprising:
in the process of executing any action by the target avatar, displaying action description information about the any action in a display area where the target avatar is located, wherein the action description information is used for describing the execution purpose of the any action;
wherein the any action comprises: the event reminding action, the interactive action or the response action.
13. The method of claim 1, 10 or 11, further comprising:
outputting a target voice audio generated according to the action description information of any action during the process that the target avatar performs any action;
wherein the any action comprises: the event reminding action, the interactive action, or the response action.
14. The method of claim 1, wherein the method further comprises:
displaying an image setting interface, wherein the image setting interface comprises a reference virtual image and one or more candidate skin resources;
when a target skin resource is selected from the one or more candidate skin resources, updating and displaying the reference virtual image by adopting the target skin resource in the image setting interface;
and if the confirmation operation aiming at the updated reference virtual image is detected, taking the updated reference virtual image as the target virtual image.
15. The method of claim 1, wherein the target avatar is in a reference pose when it appears in the information service interface, and is in a target pose after performing the event reminding action; the method further comprising:
timing the duration for which the target avatar remains in the target pose;
and if the duration is greater than a duration threshold, controlling the target avatar to return from the target pose to the reference pose.
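The timeout behavior of this claim can be sketched as a small tracker with an injectable clock (the names `PoseTracker`, `tick`, and the 3-second threshold are illustrative assumptions, not part of the disclosure):

```python
import itertools

class PoseTracker:
    """Times how long the avatar has held the target pose and reverts it
    to the reference pose once a duration threshold is exceeded."""

    def __init__(self, clock, threshold_s: float):
        self.clock = clock            # injectable time source, for testability
        self.threshold_s = threshold_s
        self.pose = "reference"
        self._entered_target_at = None

    def enter_target_pose(self):
        # Called when the event reminding action finishes in the target pose.
        self.pose = "target"
        self._entered_target_at = self.clock()

    def tick(self):
        # Called periodically; reverts the pose once the threshold elapses.
        if self.pose == "target":
            if self.clock() - self._entered_target_at > self.threshold_s:
                self.pose = "reference"
        return self.pose

fake_now = itertools.count()          # yields 0, 1, 2, ... "seconds"
clock = lambda: next(fake_now)
tracker = PoseTracker(clock, threshold_s=3.0)
tracker.enter_target_pose()           # entered at t = 0
```

Injecting the clock keeps the threshold logic deterministic under test; a real implementation would pass a monotonic wall-clock source instead.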
16. The method of claim 1, wherein the target prompt event comprises: an event that a new prompt element exists in the information service interface; the method further comprising:
playing a prompt animation of the new prompt element while the target avatar performs the event reminding action;
wherein the prompt animation comprises at least one of: an animation that moves the new prompt element from a first position to a second position; and an animation that performs a target operation on the new prompt element, the target operation comprising any one or more of: a shaking operation, a scaling operation, and a rotation operation.
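The first animation variant above, moving the new prompt element from a first position to a second position, reduces to interpolating between two points over the animation's frames. A minimal sketch with linear interpolation (the function name and frame count are illustrative assumptions):

```python
def move_frames(first, second, n_frames: int):
    """Linearly interpolate a prompt element from `first` to `second`,
    returning its position at each animation frame (both endpoints included)."""
    (x0, y0), (x1, y1) = first, second
    if n_frames < 2:
        return [second]
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1)        # normalized progress in [0, 1]
        frames.append((x0 + (x1 - x0) * t, y0 + (y1 - y0) * t))
    return frames

path = move_frames((0.0, 0.0), (100.0, 50.0), 5)
```

An easing function (e.g. ease-in-out) could replace the linear `t` for a smoother perceived motion; shaking, scaling, and rotation would interpolate offset, scale factor, and angle the same way.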
17. The method of claim 1, wherein the information service interface is an interface of a target application for a target game, the method further comprising:
while the target avatar is displayed, if a game invitation trigger event exists, outputting a game invitation notification indicating that the target avatar invites a target user to participate in the target game;
and if the game invitation notification is triggered, outputting a game screen associated with the target game.
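The two-stage flow of this claim, a trigger event producing a notification, and the triggered notification producing the game screen, can be sketched as follows (all names and strings are illustrative assumptions, not part of the disclosure):

```python
def game_invitation_flow(events, on_notification_triggered):
    """Emit a game invitation notification when a trigger event is present,
    then output the game screen if the notification itself is triggered."""
    outputs = []
    if "game_invitation_trigger" in events:
        outputs.append("notification: avatar invites user to target game")
        # on_notification_triggered models the user tapping the notification.
        if on_notification_triggered():
            outputs.append("game screen: target game")
    return outputs

shown = game_invitation_flow({"game_invitation_trigger"}, lambda: True)
```

Passing the user's response as a callback keeps the sketch synchronous; a real client would instead register a tap handler on the notification.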
18. An event prompt processing apparatus, comprising:
a display unit, configured to display an information service interface;
the display unit being further configured to display a target avatar in the information service interface;
and a processing unit, configured to, in response to an acquired target prompt event, control the target avatar to execute an event reminding action corresponding to the target prompt event.
19. An event prompt processing device, comprising:
a processor adapted to execute a computer program;
and a computer-readable storage medium in which a computer program is stored, the computer program, when executed by the processor, implementing the event prompt processing method according to any one of claims 1 to 17.
20. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program adapted to be loaded by a processor to perform the event prompt processing method according to any one of claims 1 to 17.
CN202110412161.1A 2021-04-16 2021-04-16 Event prompt processing method, device, equipment and medium Pending CN115220613A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110412161.1A CN115220613A (en) 2021-04-16 2021-04-16 Event prompt processing method, device, equipment and medium


Publications (1)

Publication Number Publication Date
CN115220613A (en) 2022-10-21

Family

ID=83605070

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110412161.1A Pending CN115220613A (en) 2021-04-16 2021-04-16 Event prompt processing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN115220613A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023173960A1 (en) * 2022-03-15 2023-09-21 北京字节跳动网络技术有限公司 Task information display method and apparatus, and computer device and storage medium


Similar Documents

Publication Publication Date Title
US10997768B2 (en) Emoji recording and sending
US10097492B2 (en) Storage medium, communication terminal, and display method for enabling users to exchange messages
US20050248574A1 (en) Method and apparatus for providing flash-based avatars
US20050216529A1 (en) Method and apparatus for providing real-time notification for avatars
WO2021012836A1 (en) Interface display method and apparatus, terminal, and storage medium
EP3000010A2 (en) Method, user terminal and server for information exchange communications
US20050223328A1 (en) Method and apparatus for providing dynamic moods for avatars
CN105474157A (en) Mobile device interfaces
US20160231878A1 (en) Communication system, communication terminal, storage medium, and display method
CN111862280A (en) Virtual role control method, system, medium, and electronic device
CN111643890A (en) Card game interaction method and device, electronic equipment and storage medium
CN107864681A (en) Utilize the social network service system and method for image
CN113973223A (en) Data processing method, data processing device, computer equipment and storage medium
CN113126875B (en) Virtual gift interaction method and device, computer equipment and storage medium
CN115220613A (en) Event prompt processing method, device, equipment and medium
CN109766046B (en) Interactive operation execution method and device, storage medium and electronic device
CN113440848A (en) In-game information marking method and device and electronic device
US9569075B2 (en) Information-processing device, information-processing system, storage medium, and information-processing method
WO2018236601A1 (en) Context aware digital media browsing and automatic digital media interaction feedback
CN116843802A (en) Virtual image processing method and related product
CN114116105A (en) Control method and device of dynamic desktop, storage medium and electronic device
CN115687816A (en) Resource processing method and device
CN113350801A (en) Model processing method and device, storage medium and computer equipment
CN114697442B (en) Schedule generation method and device, terminal and storage medium
JP7162737B2 (en) Computer program, server device, terminal device, system and method

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40075314
Country of ref document: HK

SE01 Entry into force of request for substantive examination