CN110812843B - Interactive method and device based on virtual image and computer storage medium


Info

Publication number: CN110812843B (grant); application number: CN201911046918.9A
Also published as: CN110812843A (application publication)
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: user, material library, interface, avatar, calling
Inventors: 闫羽婷, 戴世昌, 张军
Assignee: Tencent Technology (Shenzhen) Co., Ltd.
Filing date: 2019-10-30 (also the priority date)
Legal status: Active (granted)

Classifications

    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55Controlling game characters or game objects based on the game progress
    • A63F13/56Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/85Providing additional services to players
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/301Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device using an additional display connected to the game console, e.g. on the controller

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides an avatar-based interaction method, comprising: acquiring the current system time and historical behavior data of a user; retrieving a corresponding interface scene according to the current system time, and retrieving a corresponding personalized message and avatar according to the user's historical behavior data; and displaying the interface scene, the personalized message, and the avatar together on a display interface. The interface scene is switched automatically based on the current system time, and the avatar is switched automatically based on the user's historical behavior data, so no user operation is required. Personalized messages can also be pushed to the user automatically based on the historical behavior data, enabling proactive interaction with the user and a more intelligent human-computer interaction mode. In another aspect, the application also provides an avatar-based interactive apparatus and a computer storage medium corresponding to the method.

Description

Interactive method and device based on virtual image and computer storage medium
Technical Field
The present application relates to the field of human-computer interaction technologies, and in particular to an avatar-based interaction method and apparatus and a computer storage medium.
Background
Human-computer interaction is one of the important functions of intelligent devices. To improve user experience, some existing intelligent devices and applications provide an avatar that interacts with the user to a certain extent, thereby enhancing the user experience.
In the prior art, a plurality of scenes and avatars are preset. When the user triggers an interaction instruction through a virtual key, voice, or the like, the system switches the interface scene and the avatar according to the triggered instruction, thereby interacting with the user. For example, as shown in fig. 1, when the user presses "evening", the interface scene is switched from a daytime scene to an evening scene, and the avatar is switched to one with its eyes closed.
However, this interaction mode of switching the interface scene and the avatar is not convenient enough: it cannot interact with the user proactively, and because it is limited by the number of operation instructions, the types and number of interface scenes and avatars are also limited. The degree of intelligence of this mode is therefore too low to improve the user experience well.
Disclosure of Invention
In view of the defects of the prior art, the present application provides an avatar-based interaction method and apparatus, and a computer storage medium, to solve the problem that the prior-art avatar-based interaction mode is too unintelligent to improve the user experience well.
In order to achieve the above object, the present application provides the following technical solutions:
the first aspect of the present application provides an avatar-based interactive method, comprising:
acquiring the current system time and historical behavior data of a user;
retrieving a corresponding interface scene according to the current system time, and retrieving a corresponding personalized message and avatar according to the historical behavior data of the user;
and displaying the interface scene, the personalized message, and the avatar together on a display interface.
Optionally, in the above method, retrieving a corresponding interface scene according to the current system time comprises:
generating a call tag corresponding to the current system time;
matching a material library tag corresponding to the call tag from a plurality of preset material library tags;
and retrieving, from a material library, the interface scene corresponding to the matched material library tag, wherein a plurality of interface scenes are preset in the material library.
Optionally, in the above method, generating a call tag corresponding to the current system time comprises:
determining a current season and a current period according to the current system time, wherein the period comprises morning, noon, afternoon, and evening;
and setting the current season and the current period as the call tag.
Optionally, in the above method, retrieving the corresponding personalized message and avatar according to the historical behavior data of the user comprises:
selecting, from a material library, a personalized message matching the user's historical behavior by analyzing the time-sensitive data in the user's historical behavior data;
and retrieving a corresponding dynamic avatar from the material library according to the keywords in the personalized message;
wherein a plurality of personalized messages and a plurality of dynamic avatars are preset in the material library, and each dynamic avatar corresponds to at least one keyword.
Optionally, in the above method, selecting a personalized message from the material library by analyzing the time-sensitive data in the user's historical behavior data comprises:
obtaining the game type, game time, and game result of a game played by the user by analyzing the game data in the user's historical behavior data;
and selecting, from the material library, a personalized message matching the game type, the game time, and the game result.
A second aspect of the present application provides an avatar-based interactive apparatus, comprising:
an obtaining unit, configured to obtain the current system time and historical behavior data of a user;
a first retrieving unit, configured to retrieve a corresponding interface scene according to the current system time;
a second retrieving unit, configured to retrieve a corresponding personalized message and avatar according to the historical behavior data of the user;
and a display unit, configured to display the interface scene, the personalized message, and the avatar together on a display interface.
Optionally, in the above apparatus, the first retrieving unit comprises:
a generating unit, configured to generate a call tag corresponding to the current system time;
a matching unit, configured to match a material library tag corresponding to the call tag from a plurality of preset material library tags;
and a first retrieving subunit, configured to retrieve, from the material library, the interface scene corresponding to the matched material library tag, wherein a plurality of interface scenes are preset in the material library.
Optionally, in the above apparatus, the generating unit comprises:
a determining unit, configured to determine a current season and a current period according to the current system time, wherein the period comprises morning, noon, afternoon, and evening;
and a generating subunit, configured to set the current season and the current period as the call tag.
Optionally, in the above apparatus, the second retrieving unit comprises:
a selecting unit, configured to select, from the material library, a personalized message matching the user's historical behavior by analyzing the time-sensitive data in the user's historical behavior data;
and a second retrieving subunit, configured to retrieve a corresponding dynamic avatar from the material library according to the keywords in the personalized message;
wherein a plurality of personalized messages and a plurality of dynamic avatars are preset in the material library, and each dynamic avatar corresponds to at least one keyword.
Optionally, in the above apparatus, the selecting unit comprises:
an analyzing unit, configured to obtain the game type, game time, and game result of a game played by the user by analyzing the game data in the user's historical behavior data;
and a selecting subunit, configured to select, from the material library, a personalized message matching the game type, the game time, and the game result.
A third aspect of the present application provides a computer storage medium storing a program which, when executed, implements the avatar-based interaction method described in any one of the above.
The application provides an avatar-based interaction method and apparatus, and a computer storage medium. No user operation is needed to switch the interface scene and the avatar; instead, both are switched automatically according to the current system time and the user's historical behavior data, so the types and number of interface scenes and avatars are not limited by the number of operation instructions. Moreover, personalized messages can be pushed to the user automatically based on the historical behavior data, enabling proactive interaction with the user. A highly intelligent human-computer interaction mode is thus realized, and the user experience is effectively improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required by the embodiments or the description of the prior art are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present application; other drawings can be obtained from them by a person of ordinary skill in the art without inventive effort.
FIG. 1 is a schematic diagram of an interactive interface of an avatar-based human-computer interaction mode in the prior art;
fig. 2 is a schematic flow chart of an interaction method based on an avatar according to an embodiment of the present application;
fig. 3 is a flowchart illustrating an interaction method based on an avatar according to another embodiment of the present application;
fig. 4 is a flowchart illustrating an interaction method based on an avatar according to another embodiment of the present application;
fig. 5 is a flowchart illustrating an interaction method based on an avatar according to another embodiment of the present application;
fig. 6 is a flowchart illustrating an interaction method based on an avatar according to another embodiment of the present application;
fig. 7 is a schematic diagram of a display interface of an avatar-based interaction method according to another embodiment of the present application;
fig. 8 is a schematic structural diagram of an interactive device based on an avatar according to another embodiment of the present application;
FIG. 9 is a schematic structural diagram of a first retrieving unit according to another embodiment of the present application;
fig. 10 is a schematic structural diagram of a second retrieving unit according to another embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In the present disclosure, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element.
An embodiment of the application provides an avatar-based interaction method which, as shown in fig. 2, comprises the following steps:
s201, acquiring current time of a system and historical behavior data of a user.
The current system time refers to the actual current time, i.e. the time used in the region where the user is located, such as Beijing time. The current system time may include the specific year, month, and day, and is not limited to just the hour, minute, and second of the day. Optionally, since almost all current device systems ship with their own clock application, the current system time can be obtained directly from that application.
Historical behavior data of the user refers to data generated by the user's behavior before the acquired current system time. Specifically, it may be game data generated when the user plays games, motion data generated when the user exercises, geographic position changes recorded when the user travels, or any other obtainable data generated by the user's behavior, such as voice information produced when the user speaks.
Specifically, third-party applications such as games, sports software, and map software, or the system's positioning service, can be docked through corresponding interface protocols, so that the game data, motion data, geographic position information, and the like collected by a third-party application can be read from it directly, or requested from it on demand. The user's geographic position information, voice data, and the like can also be obtained directly through the system's own software or hardware, such as its positioning system or microphone. Of course, the current system time and the user's historical behavior data may be obtained in other ways, which also fall within the protection scope of the application.
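As a concrete illustration, the following Python sketch shows one shape step S201 might take. It is a sketch under assumptions, not the patent's implementation: the client objects and their fetch_history/recent_positions methods are hypothetical stand-ins for the interface protocols of the docked third-party applications.

```python
from datetime import datetime

def acquire_inputs(game_client, sports_client, locator):
    """S201: collect the current system time and the user's historical
    behavior data from the system clock and docked third-party apps."""
    now = datetime.now()  # read directly from the system's own clock

    # Hypothetical clients docked via interface protocols; each returns
    # whatever behavior records the third-party application has stored.
    history = {
        "game": game_client.fetch_history(),     # game type, time, result
        "sport": sports_client.fetch_history(),  # pace, heart rate, distance
        "location": locator.recent_positions(),  # geographic position changes
    }
    return now, history
```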
S202: retrieve a corresponding interface scene according to the current system time, and retrieve a corresponding personalized message and avatar according to the user's historical behavior data.
It should be noted that, to implement the present application, a plurality of interface scenes, personalized messages, and avatars need to be designed in advance. An interface scene can also be understood as an interface background. Specifically, multiple interface scenes are designed for different time periods. These periods may include several periods within a day as well as several periods within a year, and several interface scenes may be designed for a single period, so that the selectable interface scenes are richer and the user's aesthetic fatigue is avoided. The designed interface scenes are then stored in a material library. For example, interface scenes sharing a common winter characteristic can be designed, with two different scenes designed specifically for day and for night.
Similarly, a plurality of personalized messages and a plurality of avatars can be configured for different kinds of user behavior data. For example, for the user's running data, personalized messages such as "Keep it up, you beat yesterday's you" and "Your pace improved today, keep going" are configured, together with the avatar of a running athlete. An avatar refers to a figure such as a person, an animal, or a robot, and may be dynamic or static, planar or three-dimensional. A personalized message may be text or a voice corresponding to the text. The configured personalized messages and avatars are likewise stored in the material library. It should also be noted that the interface scenes, personalized messages, and avatars in the material library may be updated continuously.
Therefore, once the current system time and the user's historical behavior data are acquired, the pre-designed interface scene corresponding to the current system time can be retrieved from the material library, and the corresponding personalized message and avatar can be retrieved according to the user's historical behavior data.
Optionally, in another embodiment of the present application, as shown in fig. 3, an implementation of retrieving the corresponding interface scene according to the current system time in step S202 comprises:
S301: generate a call tag corresponding to the current system time.
In the embodiment of the application, after interface scenes are designed in advance for different times, each interface scene is marked with material library tags according to its corresponding time, forming a mapping between interface scenes and material library tags, and the interface scenes are stored in the material library together with their tags. A material library tag can be any single character, or any combination of characters, that is distinguishable from the other tags; words or numbers are usually used. Different times use different material library tags. In addition, one interface scene may correspond to a single material library tag or to several; for example, a scene may correspond only to the tag "morning", or simultaneously to several tags such as "winter" and "morning". Similarly, several interface scenes may correspond to the same material library tag.
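For concreteness, a minimal sketch of how this scene-to-tag mapping might be stored is given below. The in-memory layout and field names are assumptions made for illustration; the patent does not prescribe a storage format. The per-scene retrieval counter anticipates the least-retrieved selection policy of step S303 below.

```python
# Each scene may carry one material library tag or several, and several
# scenes may share a tag; "retrievals" counts how often a scene has
# been retrieved, supporting the least-retrieved policy in S303.
material_library = {
    "scenes": [
        {"id": "snowy_morning", "tags": {"winter", "morning"}, "retrievals": 0},
        {"id": "snowy_night",   "tags": {"winter", "evening"}, "retrievals": 0},
        {"id": "summer_noon",   "tags": {"summer", "noon"},    "retrievals": 2},
    ],
}
```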
Optionally, the call tag and the material library tags are generated on the same principle, so that the material library tag corresponding to a call tag can be matched more quickly. Generating the call tag and the material library tags on two different principles is of course also possible, but the correspondence between tags generated on different principles must then be set up and maintained, which adds extra workload and makes the implementation of the method unnecessarily complicated.
Alternatively, in another embodiment of the present application, a specific implementation manner of step S301, as shown in fig. 4, includes:
s401, determining a current season and a current period according to the current time of the system, wherein the period comprises morning, noon, afternoon and evening.
That is, in the embodiment of the present application, one year is divided into four seasons according to time, and one day is divided into four periods of morning, noon, afternoon, and evening. When the current time of the system is acquired, the current season of the system and the time of day are determined. Of course, this is just one alternative, and other time-slicing methods are adopted, for example, each season can be further divided into: early and late seasons, such as winter, are classified into early winter and late winter; and the period of the day may further include: morning, evening, etc. It is also possible to divide only one day and not divide the year, and these are all within the protection scope of the present application.
S402, setting the current season and the current period as the call tag.
That is, the present application directly sets the current season and the current period determined in step S401 as the recall tag. Because the call tag and the material library tag are generated based on the same principle, the material library tag also comprises seasons and time periods.
Specifically, after the current time of the system is obtained, the calling label and the material library label are called based on the same principle as that of the material library label, and the calling label corresponding to the current time of the system is generated. For example, when the current time of the obtained system is 11 months 20 # 20, the generated call label can be winter or night.
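A hedged sketch of this tag generation follows. The season and period boundaries are assumptions chosen so that the November 20 example above falls in winter; the patent does not fix them.

```python
from datetime import datetime

# Month-to-season mapping; the boundaries are illustrative assumptions.
SEASONS = {11: "winter", 12: "winter", 1: "winter",
           2: "spring", 3: "spring", 4: "spring",
           5: "summer", 6: "summer", 7: "summer",
           8: "autumn", 9: "autumn", 10: "autumn"}

def make_call_tags(now: datetime) -> set:
    """S401/S402: derive the season and the period of the day from the
    system time and use them directly as the call tags."""
    if 5 <= now.hour < 11:
        period = "morning"
    elif 11 <= now.hour < 13:
        period = "noon"
    elif 13 <= now.hour < 18:
        period = "afternoon"
    else:
        period = "evening"
    return {SEASONS[now.month], period}

print(make_call_tags(datetime(2019, 11, 20, 20, 0)))  # {'winter', 'evening'}
```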
S302: match a material library tag corresponding to the call tag from a plurality of preset material library tags.
In the embodiment of the application, the interface scene corresponding to the current system time is retrieved by tag matching, so that the interface scene can simulate the real world and stay consistent with the real scene, improving the user's experience.
Because the call tag and the material library tags are generated on the same principle, the generated call tag can be compared one by one against the material library tags to find the matching one. The material library tag "corresponding to" the call tag can simply be understood as the material library tag identical to the call tag.
S303: retrieve, from the material library, the interface scene corresponding to the matched material library tag.
Specifically, based on the matched material library tag, the interface scenes mapped to that tag in the material library are determined. At least one interface scene corresponds to the matched material library tag.
Optionally, when several interface scenes correspond to the matched material library tag, the scene that has been retrieved the fewest times may be chosen, according to the total number of times each scene has previously been retrieved. This prevents the same interface scene from being presented to the user repeatedly, avoiding aesthetic fatigue, and lets interface scenes newly added to the material library be surfaced quickly. Of course, an interface scene may also be chosen from the candidates at random or in some other manner.
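The sketch below combines S302 and S303 against the library structure assumed earlier. Scoring candidates by how many call tags they share, and breaking ties by the fewest previous retrievals, is one plausible reading; the patent does not fix the exact matching rule.

```python
def retrieve_scene(call_tags: set, library: dict) -> dict:
    """S302/S303: match material library tags against the call tags,
    then pick the best-matching scene, preferring the one retrieved
    the fewest times so far."""
    matches = [s for s in library["scenes"] if call_tags & s["tags"]]
    if not matches:
        raise LookupError("no interface scene matches the call tags")
    # More overlapping tags wins; among equal matches, the scene with
    # the smallest total number of previous retrievals is chosen.
    scene = max(matches, key=lambda s: (len(call_tags & s["tags"]),
                                        -s["retrievals"]))
    scene["retrievals"] += 1  # remember this pick for future selections
    return scene
```

With the example library above and the call tags {"winter", "evening"}, both snowy scenes match on "winter", but "snowy_night" also matches "evening" and is therefore retrieved.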
Optionally, in another embodiment of the present application, an implementation of retrieving the corresponding personalized message and avatar according to the historical behavior data of the user in step S202, as shown in fig. 5, comprises:
S501: select, from the material library, a personalized message matching the user's historical behavior by analyzing the time-sensitive data in the user's historical behavior data.
Time-sensitive data refers to data closely related to time, i.e. data of high timeliness. Data of low timeliness is generally produced by routine, high-frequency behaviors that carry little meaning for the user, and such historical behavior data does not require personalized-message feedback. Therefore, the embodiment of the application analyzes only the time-sensitive data in the user's historical behavior data when selecting a personalized message, which avoids pushing a constant stream of meaningless messages to the user.
Optionally, the user's behavior type, together with the specific data recorded for the behavior, such as running with its pace, heart rate, and distance, may be determined by analyzing the user's historical behavior data. Then, according to the behavior type and the specific data, a personalized message matching the user's historical behavior data is selected directly from the material library; alternatively, the data is compared with the user's last behavior of the same type, and a personalized message matching both the comparison result and the current behavior data is selected from the material library.
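The sketch below illustrates the comparison variant for running data. The record fields and the message library's kind/mood attributes are assumptions made for the example, not fields defined by the patent.

```python
def select_sport_message(runs: list, messages: list) -> str:
    """Compare the latest run with the previous run of the same type
    and pick a message matching the comparison result (illustrative)."""
    latest, previous = runs[-1], runs[-2]
    # A lower pace (minutes per kilometre) means the user ran faster.
    mood = "improved" if latest["pace"] < previous["pace"] else "encourage"
    for msg in messages:
        if msg["kind"] == "sport" and msg["mood"] == mood:
            return msg["text"]
    return "Keep it up!"  # fallback when no configured message matches

runs = [{"pace": 6.1, "distance": 5.0}, {"pace": 5.8, "distance": 5.2}]
messages = [{"kind": "sport", "mood": "improved",
             "text": "Keep it up, you beat yesterday's you!"}]
print(select_sport_message(runs, messages))
```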
Optionally, in another embodiment of the present application, when the behavior data of the user is game data, a specific implementation manner of step S501, as shown in fig. 6, includes:
s601, obtaining the game type, the game time and the game result of the game played by the user by analyzing the game data in the historical behavior data of the user.
That is, when the acquired historical behavior data of the user includes the game data recorded by the previous game playing of the user, the type, the game time and the game result of the user in the current game playing are determined by analyzing the historical behavior data of the user. The game result may be a completed task or a gate, or a battle, etc. Of course, in addition to determining the type of the game played by the user and the game result, the total duration of the game can also be determined.
S602, selecting a personalized message which accords with the game type, the game time and the game result from the material library.
Specifically, a personalized message conforming to the game type and the game result can be selected from the material library according to the corresponding relation between the keyword of the personalized message and the game type and the game result. For example, the user has just played a hero league and has entered a game at 20 minutes, and may be based on the game type: hero alliance, game time: evening, game outcome: failure, selecting a personalized message as follows: the summons is not about to be cared, breakfast information is coupled, and the fuel continues to be added in the open world-! Need to help I stay at-!
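A sketch of S601/S602 under the same assumptions follows. Indexing messages by the triple (game type, period, result) is one plausible realization of the correspondence described above; all field names are illustrative.

```python
def select_game_message(game: dict, messages: list) -> str:
    """S601/S602: match a personalized message on the game type, the
    period in which the game was played, and the game result."""
    key = (game["type"], game["period"], game["result"])
    for msg in messages:
        if (msg["game_type"], msg["period"], msg["result"]) == key:
            return msg["text"]
    return "Good game, rest up and try again tomorrow!"  # generic fallback

game = {"type": "League of Legends", "period": "evening",
        "result": "defeat", "duration_min": 20}
messages = [{"game_type": "League of Legends", "period": "evening",
             "result": "defeat",
             "text": "Don't take it to heart, Summoner; a defeat is "
                     "just information. Keep fighting tomorrow!"}]
print(select_game_message(game, messages))
```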
S502: retrieve the corresponding dynamic avatar from the material library according to the keywords in the personalized message.
A plurality of personalized messages and a plurality of dynamic avatars are preset in the material library, and each dynamic avatar corresponds to at least one keyword.
That is, in the embodiment of the present application, the avatar is retrieved based on a preset correspondence between keywords in the personalized messages and the avatars. Because the effect ultimately presented to the user is the avatar speaking the selected personalized message, the embodiment uses dynamic avatars and retrieves them by the keywords in the message, so that the movements, expressions, and overall form of the retrieved avatar better fit the semantics of the retrieved message and the effect presented to the user is more lifelike.
Therefore, in the embodiment of the present application, when the personalized messages and dynamic avatars are preset, the keywords of the personalized messages and the dynamic avatars need to be set, with each dynamic avatar corresponding to at least one keyword.
Optionally, one personalized message may contain several keywords, and different personalized messages may share keywords, so the relationship can also be understood simply as follows: each dynamic avatar corresponds to at least one personalized message, and each personalized message corresponds to at least one dynamic avatar. When a personalized message contains several keywords, the dynamic avatar to retrieve may be determined by the avatar's total relevance to all of the keywords.
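A minimal sketch of this total-relevance rule, assuming each dynamic avatar stores its keyword set; counting keyword overlap is one simple scoring choice, not the patent's prescribed metric.

```python
def retrieve_avatar(message_keywords: set, avatars: list) -> dict:
    """S502: pick the dynamic avatar most relevant to all of the
    keywords of the selected personalized message."""
    # Overlap between the avatar's keywords and the message's keywords
    # serves as the total-relevance score.
    return max(avatars, key=lambda a: len(a["keywords"] & message_keywords))

avatars = [
    {"id": "runner",   "keywords": {"run", "pace", "sport"}},
    {"id": "summoner", "keywords": {"defeat", "summoner", "game"}},
]
print(retrieve_avatar({"defeat", "summoner"}, avatars)["id"])  # summoner
```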
S203: display the interface scene, the personalized message, and the avatar together on a display interface.
Specifically, the retrieved interface scene, personalized message, and dynamic avatar are combined and shown on the display interface, with the interface scene serving as the background and the avatar presented as speaking the personalized message.
Optionally, as shown in fig. 7, the personalized message is displayed as a conversation bubble positioned near the avatar's mouth. It should be noted that if the personalized message includes both text and voice, the voice is played while the text is displayed in the bubble.
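To make S203 concrete, the sketch below composes the three retrieved pieces. The ui object is a hypothetical rendering facade with assumed methods, since the patent names no UI framework.

```python
def compose_display(scene: dict, avatar: dict, message: dict, ui) -> None:
    """S203: scene as background, the dynamic avatar in front of it,
    and the personalized message in a bubble near the avatar's mouth."""
    ui.set_background(scene["id"])        # interface scene as background
    ui.play_animation(avatar["id"])       # animate the dynamic avatar
    ui.show_bubble(text=message["text"], anchor="avatar_mouth")
    if message.get("voice"):              # text plus voice: also play voice
        ui.play_audio(message["voice"])
```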
According to the avatar-based interaction method provided by the embodiment of the application, the current system time and the user's historical behavior data are acquired; the corresponding interface scene is retrieved according to the current system time, and the corresponding personalized message and avatar are retrieved according to the historical behavior data; finally, the interface scene, the personalized message, and the avatar are displayed together on a display interface. No user operation is needed to switch the interface scene and the avatar: both are switched automatically according to the current system time and the user's historical behavior data, so the types and number of interface scenes and avatars are not limited by the number of operation instructions. Moreover, personalized messages can be pushed to the user automatically based on the historical behavior data, enabling proactive interaction with the user. A highly intelligent human-computer interaction mode is thus realized, and the user experience is effectively improved.
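Finally, a hedged end-to-end wiring of the sketches above, covering S201 through S203; extract_keywords is a hypothetical helper for pulling keywords out of the selected message, and the library layout follows the earlier assumptions.

```python
def interact(ui, clients, library):
    """One full interaction: acquire inputs, retrieve the scene, the
    personalized message, and the avatar, then display them together."""
    now, history = acquire_inputs(**clients)                         # S201
    scene = retrieve_scene(make_call_tags(now), library)             # S202
    text = select_game_message(history["game"][-1], library["messages"])
    # extract_keywords: hypothetical keyword extractor for the message
    avatar = retrieve_avatar(extract_keywords(text), library["avatars"])
    compose_display(scene, avatar, {"text": text}, ui)               # S203
```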
Another embodiment of the present application provides an avatar-based interactive apparatus, as shown in fig. 8, comprising:
an obtaining unit 801, configured to obtain the current system time and the user's historical behavior data.
For the specific working process of the obtaining unit 801, refer to step S201 in the method embodiment above; it is not repeated here.
A first retrieving unit 802, configured to retrieve a corresponding interface scene according to the current system time.
For the specific working process of the first retrieving unit 802, refer to step S202 in the method embodiment above; it is not repeated here.
A second retrieving unit 803, configured to retrieve the corresponding personalized message and avatar according to the user's historical behavior data.
For the specific working process of the second retrieving unit 803, refer likewise to step S202 in the method embodiment above; it is not repeated here.
A display unit 804, configured to display the interface scene, the personalized message, and the avatar together on a display interface.
For the specific working process of the display unit 804, refer to step S203 in the method embodiment above; it is not repeated here.
Optionally, in another embodiment of the present application, as shown in fig. 9, the first retrieving unit comprises:
a generating unit 901, configured to generate a call tag corresponding to the current system time.
For the specific working process of the generating unit 901, refer to step S301 in the method embodiment above; it is not repeated here.
A matching unit 902, configured to match a material library tag corresponding to the call tag from a plurality of preset material library tags.
For the specific working process of the matching unit 902, refer to step S302 in the method embodiment above; it is not repeated here.
A first retrieving subunit 903, configured to retrieve, from the material library, the interface scene corresponding to the matched material library tag.
A plurality of interface scenes are preset in the material library.
For the specific working process of the first retrieving subunit 903, refer to step S303 in the method embodiment above; it is not repeated here.
Optionally, in another embodiment of the present application, the generating unit comprises:
a determining unit, configured to determine the current season and the current period according to the current system time, wherein the period comprises morning, noon, afternoon, and evening.
For the specific working process of the determining unit, refer to step S401 in the method embodiment above; it is not repeated here.
A generating subunit, configured to set the current season and the current period as the call tag.
For the specific working process of the generating subunit, refer to step S402 in the method embodiment above; it is not repeated here.
Optionally, in another embodiment of the present application, the second retrieving unit, as shown in fig. 10, comprises:
a selecting unit 1001, configured to select, from the material library, a personalized message matching the user's historical behavior by analyzing the time-sensitive data in the user's historical behavior data.
For the specific working process of the selecting unit 1001, refer to step S501 in the method embodiment above; it is not repeated here.
A second retrieving subunit 1002, configured to retrieve the corresponding dynamic avatar from the material library according to the keywords in the personalized message.
A plurality of personalized messages and a plurality of dynamic avatars are preset in the material library; each dynamic avatar corresponds to at least one keyword.
For the specific working process of the second retrieving subunit 1002, refer to step S502 in the method embodiment above; it is not repeated here.
Optionally, in another embodiment of the present application, the selecting unit comprises:
an analyzing unit, configured to obtain the game type, game time, and game result of the game played by the user by analyzing the game data in the user's historical behavior data.
For the specific working process of the analyzing unit, refer to step S601 in the method embodiment above; it is not repeated here.
A selecting subunit, configured to select, from the material library, a personalized message matching the game type, the game time, and the game result.
For the specific working process of the selecting subunit, refer to step S602 in the method embodiment above; it is not repeated here.
According to the avatar-based interactive apparatus provided by the embodiment of the application, the obtaining unit obtains the current system time and the user's historical behavior data; the first retrieving unit retrieves the corresponding interface scene according to the current system time, and the second retrieving unit retrieves the corresponding personalized message and avatar according to the historical behavior data; finally, the display unit displays the interface scene, the personalized message, and the avatar together on the display interface. No user operation is needed to switch the interface scene and the avatar: both are switched automatically according to the current system time and the user's historical behavior data, so the types and number of interface scenes and avatars are not limited by the number of operation instructions. Moreover, personalized messages can be pushed to the user automatically based on the historical behavior data, enabling proactive interaction with the user. A highly intelligent human-computer interaction mode is thus realized, and the user experience is effectively improved.
Another embodiment of the present application provides a computer storage medium storing a program which, when executed, implements the avatar-based interaction method described in any of the method embodiments above.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. An avatar-based interaction method, comprising:
acquiring the current system time and historical behavior data of a user;
generating a call tag corresponding to the current system time;
matching a material library tag corresponding to the call tag from a plurality of preset material library tags;
retrieving, from the material library, an interface scene corresponding to the matched material library tag, wherein, when a plurality of interface scenes correspond to the material library tag, the interface scene with the smallest total number of previous retrievals is retrieved according to the total number of times each interface scene has previously been retrieved, and wherein a plurality of interface scenes are preset in the material library;
selecting, from the material library, a personalized message matching the user's historical behavior by analyzing the time-sensitive data in the user's historical behavior data;
retrieving a corresponding dynamic avatar from the material library according to keywords in the personalized message, wherein a plurality of personalized messages and a plurality of dynamic avatars are preset in the material library, and each dynamic avatar corresponds to at least one keyword;
and displaying the interface scene, the personalized message, and the avatar together on a display interface, wherein the interface scene serves as the background of the display interface and the avatar is presented as speaking the personalized message.
2. The method of claim 1, wherein generating the call tag corresponding to the current system time comprises:
determining a current season and a current period according to the current system time, wherein the period comprises morning, noon, afternoon, and evening;
and setting the current season and the current period as the call tag.
3. The method of claim 1, wherein selecting a personalized message from the material library by analyzing the time-sensitive data in the user's historical behavior data comprises:
obtaining the game type, game time, and game result of a game played by the user by analyzing the game data in the user's historical behavior data;
and selecting, from the material library, a personalized message matching the game type, the game time, and the game result.
4. An avatar-based interactive apparatus, comprising:
an obtaining unit, configured to obtain the current system time and historical behavior data of a user;
a first retrieving unit, configured to retrieve a corresponding interface scene according to the current system time;
a second retrieving unit, configured to retrieve a corresponding personalized message and avatar according to the user's historical behavior data;
and a display unit, configured to display the interface scene, the personalized message, and the avatar together on a display interface, wherein the interface scene serves as the background of the display interface and the avatar is presented as speaking the personalized message;
wherein the first retrieving unit comprises:
a generating unit, configured to generate a call tag corresponding to the current system time;
a matching unit, configured to match a material library tag corresponding to the call tag from a plurality of preset material library tags;
and a first retrieving subunit, configured to retrieve, from the material library, an interface scene corresponding to the matched material library tag, wherein, when a plurality of interface scenes correspond to the material library tag, the interface scene with the smallest total number of previous retrievals is retrieved according to the total number of times each interface scene has previously been retrieved, and wherein a plurality of interface scenes are preset in the material library;
and wherein the second retrieving unit comprises:
a selecting unit, configured to select, from the material library, a personalized message matching the user's historical behavior by analyzing the time-sensitive data in the user's historical behavior data;
and a second retrieving subunit, configured to retrieve a corresponding dynamic avatar from the material library according to the keywords in the personalized message;
wherein a plurality of personalized messages and a plurality of dynamic avatars are preset in the material library, and each dynamic avatar corresponds to at least one keyword.
5. The apparatus of claim 4, wherein the generating unit comprises:
a determining unit, configured to determine the current season and the current period according to the current system time, wherein the period comprises morning, noon, afternoon, and evening;
and a generating subunit, configured to set the current season and the current period as the call tag.
6. The apparatus of claim 4, wherein the selecting unit comprises:
an analyzing unit, configured to obtain the game type, game time, and game result of the game played by the user by analyzing the game data in the user's historical behavior data;
and a selecting subunit, configured to select, from the material library, a personalized message matching the game type, the game time, and the game result.
7. A computer storage medium storing a program which, when executed, implements the avatar-based interaction method according to any one of claims 1 to 3.
CN201911046918.9A, filed 2019-10-30 (priority date 2019-10-30): Interactive method and device based on virtual image and computer storage medium. Granted as CN110812843B; status: Active.

Priority Application (1)

CN201911046918.9A, filed 2019-10-30, priority date 2019-10-30: Interactive method and device based on virtual image and computer storage medium.

Publications (2)

    • CN110812843A (application publication), published 2020-02-21
    • CN110812843B (granted patent), published 2023-09-15

Family

Family ID: 69551553
Family application: CN201911046918.9A (CN110812843B), filed 2019-10-30, status Active. Country: CN.

Families Citing this family (7)

(* Cited by examiner, † Cited by third party)

    • CN112034989A *: Intelligent interaction system (华人运通(上海)云计算科技有限公司; priority 2020-09-04, published 2020-12-04)
    • CN112115231A *: Data processing method and device (中国传媒大学; priority 2020-09-17, published 2020-12-22)
    • CN112988022B *: Virtual calendar display method and device, electronic equipment, and computer readable medium (北京航天驭星科技有限公司; priority 2021-04-22, published 2021-08-13)
    • CN114363302A *: Method for improving streaming media transmission quality by using layering technology (北京云端智度科技有限公司; priority 2021-12-14, published 2022-04-15)
    • CN114816625B *: Automatic interaction system interface design method and device (郑州铁路职业技术学院; priority 2022-04-08, published 2023-06-16)
    • CN115841354B *: Electric vehicle charging pile maintenance evaluation method and system based on block chain (华北电力大学; priority 2022-12-27, published 2023-09-12)
    • CN116627261A *: Interaction method, device, storage medium, and electronic equipment (安徽淘云科技股份有限公司; priority 2023-07-25, published 2023-08-22)

Citations (3)

(* Cited by examiner, † Cited by third party)

Patent Citations (3)

    • CN104639725A *: Interface switching method and device (腾讯科技(深圳)有限公司; priority 2013-11-08, published 2015-05-20)
    • CN106502705A *: Method and device for setting an application theme (乐视控股(北京)有限公司; priority 2016-11-04, published 2017-03-15)
    • CN108510437A *: Avatar generation method, apparatus, device, and readable storage medium (科大讯飞股份有限公司; priority 2018-04-04, published 2018-09-07)

Family Cites Families (1)

    • US7177915B2 *: Method and apparatus for wirelessly establishing user preference settings on a computer (Kurt Kopchik; priority 2002-12-31, published 2007-02-13)

Also Published As

    • CN110812843A, published 2020-02-21

Similar Documents

    • CN110812843B: Interactive method and device based on virtual image and computer storage medium
    • US11099867B2: Virtual assistant focused user interfaces
    • US20170277993A1: Virtual assistant escalation
    • US20190057298A1: Mapping actions and objects to tasks
    • CN103135969B: Method and device for operating, generating, and starting an application
    • CN1312554C: Proactive user interface
    • US11347801B2: Multi-modal interaction between users, automated assistants, and other computing services
    • CN107704169B: Virtual human state management method and system
    • US11200893B2: Multi-modal interaction between users, automated assistants, and other computing services
    • CN107391522A: Incorporating selectable application links into message exchanges
    • WO2022047214A2: Digital assistant control of applications
    • US20220301454A1: Language fluency system
    • CN109996026B: Video special effect interaction method, device, equipment, and medium based on wearable equipment
    • JP2023520483A: Search content display method, device, electronic device, and storage medium
    • CN109146540A: Method, mobile device, and server for monitoring the visible exposure of advertisements
    • US20170337284A1: Determining and using attributes of message exchange thread participants
    • DE102023102142A1: Conversational AI platform with extractive question answering
    • CN108052506B: Natural language processing method, device, storage medium, and electronic equipment
    • CN117319340A: Voice message playing method, device, terminal, and storage medium
    • US11726656B2: Intelligent keyboard
    • CN113362802A: Voice generation method and device, and electronic equipment
    • CN113014994A: Multimedia playback control method and device, storage medium, and electronic equipment
    • CN109726267A: Story recommendation method and device for a story machine
    • CN107168978B: Message display method and device
    • CN113297414B: Music gift management method and device, medium, and computing device

Legal Events

    • PB01: Publication
    • REG: Reference to a national code (ref country code: HK; ref legal event code: DE; ref document number: 40022445)
    • SE01: Entry into force of request for substantive examination
    • GR01: Patent grant