CN111784271A - User guiding method, device, equipment and storage medium based on virtual object


Info

Publication number
CN111784271A
CN111784271A (application CN201910272789.9A)
Authority
CN
China
Prior art keywords
user
information
interaction
virtual object
operation behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910272789.9A
Other languages
Chinese (zh)
Other versions
CN111784271B (en)
Inventor
李观鲁
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910272789.9A priority Critical patent/CN111784271B/en
Publication of CN111784271A publication Critical patent/CN111784271A/en
Application granted granted Critical
Publication of CN111784271B publication Critical patent/CN111784271B/en
Status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 — Administration; Management
    • G06Q10/10 — Office automation; Time management
    • G06Q10/109 — Time management, e.g. calendars, reminders, meetings or time accounting

Abstract

The embodiment of the application provides a user guiding method, apparatus, device and storage medium based on a virtual object. The method includes: acquiring operation behavior information of a user on an application program, where a virtual object is provided in the application program; acquiring, according to the operation behavior information, first interaction information corresponding to the operation behavior information; and providing the first interaction information to the user through the user's virtual object, so as to guide the user through the first interaction information. With the scheme provided by the embodiment of the application, the virtual object provided in the application program can interact with the application's user on the basis of interaction information obtained from the user's operation behavior on the application program, so that the user can be guided by that information and can develop good habits of application use.

Description

User guiding method, device, equipment and storage medium based on virtual object
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for guiding a user based on a virtual object.
Background
With the rapid development of technology and the popularization of electronic devices, various application programs have become an indispensable part of people's lives. However, some people (for example, children) lack self-control and become addicted to electronic devices, which may lead to poor living habits and, over time, even deterioration of the user's eyesight, seriously affecting the user's daily life.
Disclosure of Invention
The purpose of the embodiments of the present application is to solve at least one of the above technical drawbacks, and in particular, the technical drawback that the existing application program cannot guide the user behavior. The technical scheme provided by the embodiment of the application is as follows:
in one aspect, an embodiment of the present application provides a user guidance method based on a virtual object, where the method includes:
acquiring operation behavior information of a user on an application program, wherein a virtual object is provided in the application program;
acquiring first interaction information corresponding to the operation behavior information according to the operation behavior information;
and providing the first interaction information to the user through the user's virtual object, so as to guide the user through the first interaction information.
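The three claimed steps can be sketched minimally as follows; every name, message and threshold here is illustrative, not taken from the patent:

```python
# Minimal sketch of steps S110-S130. All names, messages and thresholds
# are assumptions made for illustration.

def acquire_behavior_info():
    # Step S110: operation behavior information, e.g. minutes of video played.
    return {"play_minutes": 45}

def first_interaction_info(behavior):
    # Step S120: map behavior information to interaction information.
    if behavior.get("play_minutes", 0) >= 40:
        return "You have watched for a while - time to rest your eyes!"
    return None

def guide_user(behavior, present):
    # Step S130: deliver the information to the user via the virtual object.
    info = first_interaction_info(behavior)
    if info is not None:
        present(info)
    return info

messages = []
guide_user(acquire_behavior_info(), messages.append)
```

The `present` callback stands in for whatever animation, voice or text channel the virtual object uses.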
In an alternative embodiment, the operational behavior information includes user usage information for a specified function within the application.
In an alternative embodiment, the application comprises a video-class application and the specified function comprises a video play function.
In an alternative embodiment, the operation behavior information includes at least one of a video playing time length and a video type of a played video.
In an optional embodiment, obtaining, according to the operation behavior information, first interaction information corresponding to the operation behavior information includes:
and when the operation behavior information meets a preset first interaction triggering condition, acquiring first interaction information corresponding to the operation behavior information.
In an alternative embodiment, the method further comprises:
and updating the attribute information of the virtual object of the user according to the operation behavior information.
In an alternative embodiment, if the application includes a video application and the operation behavior information includes a video type of a video to be played, updating the attribute information of the virtual object of the user according to the operation behavior information, including:
and updating the attribute information of the virtual object of the user according to the video type and the attribute updating factor coefficient corresponding to the video type.
In an alternative embodiment, the method further comprises:
and acquiring and storing the mapping relation between each video type and the attribute update factor coefficient corresponding to each video type.
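The update rule above — growth scaled by a per-video-type attribute update factor coefficient — might look like this; the type names and factor values are assumed for illustration:

```python
# Mapping from video type to attribute-update factor coefficient
# (assumed values; the patent only specifies that such a mapping exists).
UPDATE_FACTORS = {"science": 2.0, "traditional_culture": 1.5, "cartoon": 1.0}

def update_growth(attributes, video_type, base_points=10):
    # Scale the base growth points by the factor for this video type;
    # unknown types fall back to a neutral factor of 1.0.
    factor = UPDATE_FACTORS.get(video_type, 1.0)
    attributes["growth"] = attributes.get("growth", 0) + int(base_points * factor)
    return attributes

pet = update_growth({"growth": 100}, "science")
```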
In an alternative embodiment, the attribute information includes at least one of the following information:
growth value, level, status, skill.
In an alternative embodiment, the method further comprises:
when the attribute information meets a preset second interaction triggering condition, acquiring second interaction information corresponding to the attribute information;
and providing the second interaction information to the user through the virtual object of the user.
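The second interaction trigger condition on attribute information could be checked the same way; the growth threshold and message are hypothetical:

```python
def second_interaction_info(attributes, threshold=200):
    # Preset second interaction trigger condition: growth value reaches
    # a threshold (the threshold value is an assumption).
    if attributes.get("growth", 0) >= threshold:
        return "Your pet has grown - keep up the good viewing habits!"
    return None

msg = second_interaction_info({"growth": 250})
```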
In an optional embodiment, obtaining, according to the operation behavior information, first interaction information corresponding to the operation behavior information includes:
searching corresponding first interaction information in a pre-configured interaction information database according to the operation behavior information; or,
and generating first interaction information according to the operation behavior information.
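The two alternatives — looking up a pre-configured interaction information database versus generating the information on the fly — can be sketched together; the database contents and the fallback template are assumptions:

```python
# Alternative 1: a pre-configured interaction information database.
INTERACTION_DB = {"long_play": "Let's take a break together!"}

def lookup_or_generate_info(behavior):
    key = "long_play" if behavior.get("play_minutes", 0) >= 40 else None
    if key in INTERACTION_DB:
        return INTERACTION_DB[key]          # found a pre-configured entry
    # Alternative 2: generate interaction information from the behavior itself.
    return f"You have watched for {behavior.get('play_minutes', 0)} minutes today."

msg_db = lookup_or_generate_info({"play_minutes": 50})
msg_gen = lookup_or_generate_info({"play_minutes": 5})
```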
In an optional embodiment, the operation behavior information includes a user operation instruction, and the obtaining, according to the operation behavior information, first interaction information corresponding to the operation behavior information includes:
and analyzing the user operation instruction, and generating first interaction information corresponding to the user operation instruction according to an analysis result of the user operation instruction.
In an alternative embodiment, providing the first interaction information to the user through the virtual object of the user includes:
providing the first interaction information to the user through the virtual object of the user based on at least one of the following ways:
animation, voice, text.
In an alternative embodiment, the default display state of the virtual object is a hidden state, and the method further comprises:
and when the preset display condition is met, controlling the virtual object of the user to be displayed on the display interface of the application program.
In an alternative embodiment, the display condition includes that interaction with the user through the user's virtual object is in progress.
In an alternative embodiment, the virtual object comprises a virtual pet.
In an alternative embodiment, the method further comprises:
receiving a setting operation of a user on object information of a virtual object of the user;
and updating the object information according to the setting operation.
In another aspect, an embodiment of the present application provides a virtual object-based user guidance apparatus, where the apparatus includes:
the behavior information acquisition module is used for acquiring operation behavior information of a user on an application program, wherein a virtual object is provided in the application program;
the interactive information acquisition module is used for acquiring first interactive information corresponding to the operation behavior information according to the operation behavior information;
and the interaction module is used for providing the first interaction information for the user through the virtual object of the user so as to guide the user through the first interaction information.
In an alternative embodiment, the operational behavior information includes user usage information for a specified function within the application.
In an alternative embodiment, the application comprises a video-class application and the specified function comprises a video play function.
In an alternative embodiment, the operation behavior information includes at least one of a video playing time length and a video type of a played video.
In an optional embodiment, when the interaction information obtaining module obtains the first interaction information corresponding to the operation behavior information according to the operation behavior information, the interaction information obtaining module is specifically configured to:
and when the operation behavior information meets a preset first interaction triggering condition, acquiring first interaction information corresponding to the operation behavior information.
In an alternative embodiment, the apparatus further comprises:
and the attribute information updating module is used for updating the attribute information of the virtual object of the user according to the operation behavior information.
In an optional embodiment, if the application includes a video application and the operation behavior information includes a video type of a video to be played, the attribute information updating module is specifically configured to, when updating the attribute information of the virtual object of the user according to the operation behavior information:
and updating the attribute information of the virtual object of the user according to the video type and the attribute updating factor coefficient corresponding to the video type.
In an alternative embodiment, the attribute information includes at least one of the following information:
growth value, level, status, skill.
In an optional embodiment, the interaction information obtaining module is further configured to:
when the attribute information meets a preset second interaction triggering condition, acquiring second interaction information corresponding to the attribute information;
and providing the second interaction information to the user through the virtual object of the user.
In an optional embodiment, when the interaction information obtaining module obtains the first interaction information corresponding to the operation behavior information according to the operation behavior information, the interaction information obtaining module is specifically configured to:
searching corresponding first interaction information in a pre-configured interaction information database according to the operation behavior information; or,
and generating first interaction information according to the operation behavior information.
In an optional embodiment, the operation behavior information includes a user operation instruction, and when the interaction information obtaining module obtains the first interaction information corresponding to the operation behavior information according to the operation behavior information, the interaction information obtaining module is specifically configured to:
and analyzing the user operation instruction, and generating first interaction information corresponding to the user operation instruction according to an analysis result of the user operation instruction.
In an optional embodiment, when the interaction module provides the first interaction information to the user through the virtual object of the user, the interaction module is specifically configured to:
providing the first interaction information to the user through the virtual object of the user based on at least one of the following ways:
animation, voice, text.
In an alternative embodiment, the default display state of the virtual object is a hidden state, and the apparatus further comprises:
and the virtual object display control module is used for controlling the virtual object of the user to be displayed on the display interface of the application program when the preset display condition is met.
In an alternative embodiment, the display condition includes that interaction with the user through the user's virtual object is in progress.
In an alternative embodiment, the virtual object comprises a virtual pet.
In an optional embodiment, the apparatus further comprises an object information setting module, configured to:
receiving a setting operation of a user on object information of a virtual object of the user;
and updating the object information according to the setting operation.
In an alternative embodiment, the apparatus further comprises:
and the video information acquisition module is used for acquiring and storing the mapping relation between each video type and the attribute update factor coefficient corresponding to each video type.
In another aspect, an embodiment of the present application provides an electronic device, which includes a memory and a processor;
a memory configured to store operating instructions;
and the processor is configured to invoke the operation instructions to execute the method shown in any optional embodiment of the present application.
In another aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program implements the method shown in any optional embodiment of the present application.
The technical solutions provided by the embodiments of the present application have the following beneficial effects: corresponding interaction information is obtained according to the user's operation behavior on the application program, and that information can be provided to the user through the virtual object provided in the application program, realizing interaction with the application's user and allowing the user to be guided on the basis of the interaction information. Because the interaction information corresponds to the user's actual use of the application program, the user's behavior can be guided according to that actual use, so that the user (such as a child) can be better guided to develop good habits of application use.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
FIG. 1 is a diagram illustrating a system architecture to which embodiments of the present application are applicable;
fig. 2 is a flowchart illustrating a virtual object-based user guidance method provided in an embodiment of the present application;
FIG. 3 is a flow chart illustrating a manner of adopting a virtual pet according to an example of the present application;
FIGS. 4a, 4b and 4c are schematic diagrams illustrating three pet adoption interfaces in an example of the present application;
FIG. 4d is a schematic diagram of a user interface of an example of the present application when a virtual object interacts with a user;
FIG. 5 illustrates a flow chart of a method for guiding user behavior based on a virtual pet provided in an example of the present application;
FIG. 6 illustrates a schematic diagram of an implementation of interaction with a user through a virtual pet provided in an example of the present application;
fig. 7 is a schematic structural diagram illustrating a virtual object-based user guidance apparatus according to an embodiment of the present application;
fig. 8 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or to elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present application, and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
With the development of computer technology, the diversification of terminal functions, and the improvement of people's living standards, various terminal devices have become an indispensable part of people's lives, and users can meet different living and entertainment needs through the various application programs on their devices. For example, a user can watch videos through a video application on the terminal device, play music through a music application, play games through a game application, and so on. However, in practical applications, some users, especially children, lack self-control and often become absorbed in watching videos or playing games; parents, busy with work, may not notice this, and prolonged use can easily harm a child's eyesight.
There are existing schemes in which a parent limits a child's viewing time. For example, after logging in to the video client and passing parental authentication, a parent can set the maximum duration for which the child may continuously watch video in a single session, and when the child's single continuous viewing session exceeds the preset value, the application can remind the parent. In such a scheme the parent sets the longest single viewing duration, achieving some reduction in the child's viewing time; however, the restriction can be lifted with the parent's consent or by completing simple arithmetic operations to unlock, so once the child masters the method of removing the viewing restriction, the configured viewing-time limit becomes nearly meaningless, and the parent's goal of reducing the child's viewing time cannot be achieved.
In view of the foregoing problems in the prior art, embodiments of the present application provide a method, an apparatus, a device, and a storage medium for guiding a user based on a virtual object, and based on the scheme provided by the embodiments of the present application, the method, the apparatus, the device, and the storage medium can effectively guide and help the user (such as a child) to form a good use habit of a terminal device.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
First, a few technical terms related to the embodiments of the present application will be briefly described.
Virtual object: a virtual program that exists inside an application program and can interact with the application's user (i.e., the user using the application program) through animation, voice, or other means. The embodiments of the present application do not limit the form of the virtual object; for example, it may be a virtual pet, a virtual cartoon character, an animated character, or the like. In practical applications, the styles of various virtual objects can be configured, and an application user can select a style according to his or her actual needs. In addition, virtual objects may be divided into different levels: for example, the initial level is level 1, and level promotion is completed by accumulating growth value. Some attribute information of the virtual object, such as its appearance and voice, may differ between levels, and different behaviors of the user within the application program may affect the virtual object's growth value. The virtual object may be two-dimensional or three-dimensional; for example, it may be a three-dimensional virtual pet presented in the form of a cartoon cat.
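The growth-value/level relationship described above can be sketched as a threshold table; the thresholds themselves are assumed, since the patent fixes only the initial level:

```python
# Growth value required to reach levels 1..4 (assumed thresholds; the patent
# only specifies that the initial level is 1 and that growth drives promotion).
LEVEL_THRESHOLDS = [0, 100, 300, 600]

def level_for_growth(growth):
    level = 1
    for lvl, needed in enumerate(LEVEL_THRESHOLDS, start=1):
        if growth >= needed:
            level = lvl
    return level
```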
Positive-incentive video (forward excitation video): a video determined, through scientific research and statistical data, to help young people accumulate knowledge, such as science videos and traditional-culture videos.
Fig. 1 is a schematic diagram of a system architecture used in the solution provided by the embodiment of the present application. As shown in the figure, the system architecture mainly includes a user terminal device 10 and a server 20, which communicate with each other through a network. The user terminal device 10 may specifically include, but is not limited to, the smart phone, personal computer (PC), and tablet computer (PAD) shown in the figure. The server 20 may include, but is not limited to, at least one of a physical server and a cloud server, and may be a server cluster containing a plurality of servers (e.g., a plurality of physical servers, a plurality of cloud servers, or a mixture of the two). It is to be understood that the types and numbers of user terminal devices 10 and servers 20 shown in fig. 1 are only illustrative.
Corresponding to the network architecture shown in fig. 1, a user may download and install, on the user terminal device 10 (e.g., the user's smart phone), an application program that provides a virtual-object adoption function. After installation, when the application is first opened or used, the user may initiate a virtual-object adoption request. Based on that request, the user terminal device 10 interacts with the server 20 to obtain the data of the virtual objects the user may adopt and presents them to the user. After the user has adopted a virtual object, the user terminal device 10 can obtain corresponding interaction information based on the user's operation behavior information on the application program and provide that information to the user through the adopted virtual object, thereby actively guiding the user on the basis of the interaction information.
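The adoption handshake in this architecture can be sketched with an in-memory stand-in for the server 20; the payload shapes and method names are assumptions, not from the patent:

```python
# In-memory stand-in for server 20: list adoptable virtual objects, then
# synchronize the user's adoption choice.

class AdoptionServer:
    def __init__(self):
        self.adoptable = [{"id": 1, "name": "cartoon cat"},
                          {"id": 2, "name": "cartoon dog"}]
        self.adopted = {}                     # user_id -> pet_id

    def list_adoptable(self):
        # The client pulls the virtual objects currently available for adoption.
        return list(self.adoptable)

    def sync_adoption(self, user_id, pet_id):
        # Record the adoption so the server knows which pet the user holds.
        if any(p["id"] == pet_id for p in self.adoptable):
            self.adopted[user_id] = pet_id
            return True
        return False

server = AdoptionServer()
choices = server.list_adoptable()
ok = server.sync_adoption("user-1", choices[0]["id"])
```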
The embodiments of the present application will be further described with reference to the following specific embodiments.
Fig. 2 shows a flowchart of a virtual object-based user guidance method provided in an embodiment of the present application, and as shown in fig. 2, the method may be executed by a user terminal device, for example, by a smartphone shown in fig. 1, and the method mainly includes the following steps:
step S110: acquiring operation behavior information of a user on an application program, wherein a virtual object is provided in the application program;
the operation behavior information may also refer to operation behavior information of the user in a specified time period or a certain period, for example, operation behavior information of the user to the application every day.
It can be understood that, in practical applications, what operation behavior information needs to be obtained specifically can be configured according to practical application requirements. In addition, the user in the embodiment of the present application may include, but is not limited to, a child.
Step S120: acquiring first interaction information corresponding to the operation behavior information according to the operation behavior information;
step S130: the first interactive information is provided to the user through the virtual object of the user to guide the user through the first interactive information.
As can be seen from the foregoing description, the virtual object may be a virtual pet, a virtual cartoon character, an animated character, etc.
In an optional embodiment of the present application, providing the first interaction information to the user through the virtual object of the user may specifically include:
providing the first interaction information to the user through the virtual object of the user based on at least one of the following ways:
animation, voice, text.
For convenience of description, in some descriptions of the embodiments of the present application below, a virtual pet may be taken as an example to refer to a virtual object.
According to the scheme provided by the embodiment of the application, a virtual object is provided in the application program, and interaction with the application's user can be realized through that virtual object on the basis of interaction information obtained from the user's operation behavior on the application program, so that the user can be guided by the interaction information. Because the interaction information corresponds to the user's actual use of the application program, the user's behavior can be guided accordingly, and the user (such as a child) can be better guided to develop good habits of application use.
In an optional embodiment of the present application, when a user logs in and uses an application (for example, when opening it for the first time, opening it again, or during use), the user can complete the adoption of a virtual object. As an alternative, fig. 3 is a schematic flowchart illustrating a manner of adopting a virtual object according to an embodiment of the present application; as shown in fig. 3, the process may include two major steps, step S10 and step S20, described as follows:
s10, entering a pet getting interface;
specifically, in this step, the user clicks a virtual pet reception entry (e.g., a virtual reception button) at the application client to enter the pet reception interface, the entry may be suspended from the application host interface or embedded in the personal data page, the pet reception interface may have a plurality of virtual pets for the user to select, and the virtual pets may be displayed in a list form. Of course, when the user clicks the getting-in button, it may be determined whether the user has logged in, that is, the step of determining whether the user has logged in shown in the figure, and if the user has not logged in, the login interface may be pulled up, and the user enters the pet getting-in interface after completing the login. And after receiving the pet getting operation of the user, the client pulls the virtual pet data which can be currently accepted from the server, the getting interface is displayed after the data pulling is successful, otherwise, the client prompts that the data pulling is failed, and the getting process is ended.
As an example, a schematic of a pet adoption interface is shown in FIG. 4a. In this example, three optional virtual pets are shown in the interface, and the image part shown in the figure is displayed in each pet's own style. As can be seen from the figure, the interface in this example also provides a pet background introduction, and the user can learn about each pet by clicking the introduction. The nickname of each virtual pet shown in the figure may be a default-configured nickname or a default state (for example, displaying two characters of the nickname); after adopting a virtual pet, the user can set or modify its nickname, or simply keep the default one.
S20, the user adopts a virtual pet;
in this step, after the user clicks the avatar of the virtual pet, the virtual pet may send out a pre-designed voice for calling, different voices of the pet may be different, and at the same time, the animation of the virtual pet may be displayed, as shown in fig. 4b, it may be understood that only one animation is shown in fig. 4a and fig. 4b in this example) after the display is completed, a dialog box may be popped up to let the user determine whether to accept the pet, the user completes the process of accepting after determining, otherwise, the user continues to stay in the current interface, or the user may return to the previous user interface when it is not determined whether to accept the pet and chooses to think again, as shown in fig. 4 a.
Specifically, after the user selects a virtual pet (the pet adoption step S21 shown in the figure), the client may present the virtual pet through animation and voice, and may prompt the user to confirm the adoption, that is, the user confirmation step S22 shown in the figure. If the user cancels the adoption (for example, clicks the "think again" virtual button shown in FIG. 4b), or no confirmation operation is received within a certain time period, the client may jump back to the pet adoption interface. If the user's confirmation operation is received, the client sends a corresponding adoption synchronization request to the server to synchronize the adoption data on the server side, so that the server knows which pet the user has adopted. If the data synchronization succeeds (for example, the server returns an adoption-success feedback), the client prompts the user that the adoption succeeded and may play a preconfigured adoption-success animation (step S23 shown in the figure). If the data synchronization fails (for example, no adoption-success feedback is returned by the server), the user may be prompted to retry, as shown in step S23 in the figure. At this point, if the user's retry operation is received, the client may return to the pet adoption interface and restart the adoption; if the user cancels the retry, or no retry operation is received within a certain time, the adoption process ends and the adoption fails.
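The confirm-and-synchronize logic described above can be sketched as follows. This is a minimal illustration only; the type and function names (`AdoptionServer`, `confirmAdoption`) are invented for the example and do not appear in the patent text.

```typescript
// Hypothetical sketch of the adoption confirmation flow: the client asks the
// server to synchronize the adoption and maps the outcome to the next UI step.
type SyncResult = "ok" | "failed";

interface AdoptionServer {
  syncAdoption(petId: string): SyncResult;
}

// "adopted": sync succeeded, play the success animation (step S23).
// "retry": sync failed, ask the user whether to try again.
// "cancelled": the user backed out (e.g. tapped "think again").
function confirmAdoption(
  server: AdoptionServer,
  petId: string,
  userConfirmed: boolean,
): "adopted" | "retry" | "cancelled" {
  if (!userConfirmed) return "cancelled"; // jump back to the adoption interface
  return server.syncAdoption(petId) === "ok" ? "adopted" : "retry";
}
```

A real client would also apply the confirmation timeout described in the text before calling a function like this.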
After the virtual pet is adopted, it can receive the user's operation behavior information and complete different interactions with the user according to the received behavior information. In addition, when the user opens the application for the first time each day, the virtual pet may interact with the user through animation, voice, or other preconfigured means, and may disappear or be displayed at a designated position of the user interface after the interaction is completed. As shown in FIG. 4c, after the interaction with the user based on the preconfigured interaction text is completed, the virtual pet may be controlled to exit along a preset trajectory; in FIG. 4c the virtual pet disappears toward the upper right corner.
In an optional embodiment of the present application, the user guidance method may further include:
receiving a setting operation of a user on object information of a virtual object of the user;
and updating the object information according to the setting operation.
After completing the adoption of the virtual object and logging in to the application, the user can perform corresponding setting operations on the object information of the virtual object as needed. Which items of object information are user-configurable can be determined according to actual needs; for example, the object information may include, but is not limited to, the object's nickname, the object's appearance (such as its clothing), the interaction mode between the object and the user (such as animation or voice), and the interaction timing between the object and the user (such as interacting with the user when the application is opened for the first time each day, or each time the user logs in to the application again).
In an optional embodiment of the present application, in the step S120, obtaining, according to the operation behavior information, first interaction information corresponding to the operation behavior information includes:
and when the operation behavior information meets a preset first interaction triggering condition, acquiring first interaction information corresponding to the operation behavior information.
In an alternative embodiment of the present application, the operation behavior information includes usage information of a specified function within the application program by the user.
In practical applications, in order to guide user behavior in a more targeted manner, the scheme of the embodiment of the application can interact with the user through the virtual object based on the user's usage information of a specified function within the application. The specified function can be configured according to the type of the application, the actual application requirements, and the like.
In an alternative embodiment of the present application, the application includes a video application, and the specified function includes a video playing function.
For some current video applications, users (especially children) may become addicted to watching cartoons or other videos due to a lack of self-control. In order to guide the user well and prevent excessive video watching, the scheme provided by the embodiment of the application obtains the user's usage information of the video playing function and, based on the interaction information corresponding to that actual usage, guides the user's video watching behavior so that the user can develop good video watching habits.
In an optional embodiment of the present application, the operation behavior information includes at least one of a video playing time length and a video type of a played video.
It should be noted that which video playing time length and which video type are used may be configured according to actual application requirements. For example, the video playing time length may refer to the accumulated duration of the user's video watching within a certain period (e.g., each day) together with the video types watched in that period, or it may refer to the accumulated playing duration and corresponding video types after each time the user logs in to or reopens the application.
In addition, the video types can be divided according to actual requirements, and the division mode of the video types is not limited in the embodiment of the application. When a user installs an application on the terminal device for the first time, the configuration file of the application program installation package can include video classification information, and after installation, the video classification information is stored locally. Of course, when there is an update of the video classification information at the server side, the server may actively send the updated video classification information to the terminal device, so that the terminal device updates the video classification information.
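The local-storage-plus-update scheme for video classification information described above can be sketched as a simple version comparison; the shape of the data structure is an assumption for illustration.

```typescript
// Hypothetical sketch: the client keeps a local copy of the video
// classification information and replaces it only when the server's copy
// has a newer version number (or when no local copy exists yet).
interface VideoClassificationInfo {
  version: number;
  // maps a video type name to its attribute-update factor coefficient
  growthFactors: Record<string, number>;
}

function maybeUpdateClassification(
  local: VideoClassificationInfo | null,
  remote: VideoClassificationInfo,
): VideoClassificationInfo {
  // first install, or the server reports a newer version: take the remote copy
  if (local === null || remote.version > local.version) return remote;
  return local; // otherwise keep the local data unchanged
}
```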
According to the scheme of the embodiment of the application, the corresponding interaction information can be obtained according to either or both of the user's video playing time length and video type, and the virtual pet can interact with the user based on this information so as to remind and guide the user.
In an optional embodiment of the present application, obtaining, according to the operation behavior information, first interaction information corresponding to the operation behavior information may include:
and when the operation behavior information meets a preset first interaction triggering condition, acquiring first interaction information corresponding to the operation behavior information.
The first interaction triggering condition can be configured as required. In practical applications, there may be a plurality of first interaction triggering conditions; when the user's operation behavior information on the application satisfies any one of them, the first interaction information corresponding to the current operation behavior information may be acquired. At this time, the first interaction information may also be understood as the first interaction information corresponding to the interaction triggering condition satisfied by the current operation behavior information.
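The match-any-of-several-conditions behavior described above can be sketched as follows; the condition contents and field names are assumptions, not part of the patent.

```typescript
// Hypothetical sketch: operation behavior information is checked against a
// list of preconfigured trigger conditions; the first condition satisfied
// identifies which first interaction information to fetch.
interface OperationBehavior {
  playSeconds: number; // accumulated video playing duration
  videoType: string;   // type of the video being played
}

interface TriggerCondition {
  id: string;
  matches(info: OperationBehavior): boolean;
}

function findSatisfiedTrigger(
  info: OperationBehavior,
  conditions: TriggerCondition[],
): string | null {
  for (const c of conditions) if (c.matches(info)) return c.id;
  return null; // no condition satisfied: no interaction is triggered
}
```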
In an optional embodiment of the present application, the user guidance method may further include:
and updating the attribute information of the virtual object of the user according to the operation behavior information.
The attribute information of the virtual object may include, but is not limited to, at least one of growth value, level, status, and skill of the virtual object.
The growth value, which may also be referred to as the life value of the virtual object, reflects the virtual object's growth; for example, accumulating growth value raises the object's level, and generally the larger the growth value, the higher the level. The specific representation of the state (also referred to as form) of the virtual object can be configured as required; for example, the current state can be expressed through the virtual object's appearance, such as conveying a virtual pet's happiness or unhappiness through its appearance. The skills of the virtual pet may include, but are not limited to, the various functions the virtual pet has; for example, the higher the level, the more functions it may have. Generally, the growth value of a virtual object is positively correlated with its level and skills, that is, the larger the growth value, the higher the level and the more skills.
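The statement that a larger growth value generally means a higher level can be illustrated with a simple threshold mapping; the threshold values here are invented for the example.

```typescript
// Illustrative mapping from growth value to level. The thresholds are
// assumptions; the patent only states that growth value and level are
// positively correlated.
const LEVEL_THRESHOLDS = [0, 100, 300, 600, 1000]; // growth needed per level

function levelForGrowth(growth: number): number {
  let level = 0;
  for (const t of LEVEL_THRESHOLDS) {
    if (growth >= t) level++; // each threshold reached grants one level
  }
  return level; // yields level 1..5 for the thresholds above
}
```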
According to the scheme of the embodiment of the application, the attribute information of the virtual object can be controlled according to the operation behavior information of the user on the application program, so that the effect of prompting the user can be achieved based on the change of the attribute information of the virtual object.
In an optional embodiment of the present application, if the application includes a video application and the operation behavior information includes a video type of a video to be played, updating the attribute information of the virtual object of the user according to the operation behavior information, which may include:
and updating the attribute information of the virtual object of the user according to the video type and the attribute updating factor coefficient corresponding to the video type.
Because different types (i.e., categories) of videos have different influences on users, different influence factor coefficients can be configured for different video types and used as the attribute update factor coefficients of the virtual object. In this way, the type of video the user watches brings different gains to the update of the virtual object's attribute information (such as its growth value); for example, watching positively motivating videos such as science videos or traditional culture videos can accelerate the virtual object's growth. Through this scheme, the type of video watched by the user is associated with the attribute information of the virtual object, so that the video type influences the object's attributes, and the change in the virtual object's attribute information in turn guides, to a certain extent, the type of video the user watches.
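The per-type factor coefficient described above can be sketched as follows; the coefficient values, the base gain per minute, and the type names are all assumptions for illustration.

```typescript
// Hypothetical growth-value gain: minutes watched, scaled by a per-video-type
// attribute-update factor coefficient so that positively motivating content
// (e.g. science videos) grows the virtual object faster.
const GROWTH_FACTORS: Record<string, number> = {
  science: 2.0,   // assumed coefficient for science-knowledge videos
  tradition: 1.5, // assumed coefficient for traditional-culture videos
  cartoon: 1.0,
};

function growthGain(minutesWatched: number, videoType: string): number {
  const base = 1;                                  // assumed gain per minute
  const factor = GROWTH_FACTORS[videoType] ?? 1.0; // default for unknown types
  return minutesWatched * base * factor;
}
```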
For the specific manner in which the user terminal device communicates with the server to obtain and store the mapping between each video type and its corresponding attribute update factor coefficient, refer to the manner of obtaining video classification information through communication with the server described above; details are not repeated here.
In an optional embodiment of the present application, the user guidance method may further include:
when the attribute information meets a preset second interaction triggering condition, acquiring second interaction information corresponding to the attribute information;
and providing the second interaction information to the user through the virtual object of the user.
According to the scheme of the embodiment of the application, when the attribute information of the user's virtual object satisfies a certain condition, the virtual object can be triggered to interact with the user, thereby guiding the user's behavior through the interaction information. For example, as an optional mode, the triggering condition may include, but is not limited to, the user's continuous usage duration of the application exceeding a set duration. Specifically, for a video application, if the user's video watching duration exceeds the limit duration, the virtual object may be triggered to interact with the user, for example prompting through animation, voice, and the like: "You have watched for a long time today, why not take a break?".
In an optional embodiment of the application, in the step S120, obtaining, according to the operation behavior information, first interaction information corresponding to the operation behavior information may specifically include:
searching for corresponding first interaction information in a preconfigured interaction information database according to the operation behavior information; or,
and generating first interaction information according to the operation behavior information.
Similarly, in an optional embodiment of the present application, when the attribute information satisfies a preset second interaction trigger condition and second interaction information corresponding to the attribute information is obtained, the corresponding second interaction information may be searched from a preconfigured interaction information database according to the satisfied interaction trigger condition, or the corresponding second interaction information may be generated according to the satisfied interaction trigger condition.
According to the scheme provided by the embodiment of the application, when interaction with a user is performed through a virtual object based on interaction information (first interaction information or second interaction information), the virtual object can interact with the user through a preset language or animation and the like, for example, when the operation behavior information meets a first interaction trigger condition, the first interaction information corresponding to the trigger condition can be searched from an interaction information database according to the met trigger condition, and interaction with the user is performed through a virtual pet in a language, animation and the like based on the interaction information.
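The two ways of obtaining interaction information described above, lookup in a preconfigured database versus generation from the behavior information, can be sketched together; the database contents and key names are illustrative assumptions.

```typescript
// Hypothetical sketch: first try a preconfigured interaction-information
// database keyed by the satisfied trigger condition; fall back to generating
// the text from the operation behavior information.
const INTERACTION_DB: Record<string, string> = {
  overLimit: "You have watched for a long time today, why not take a break?",
  milestone: "I feel myself growing after those videos!",
};

function getInteractionInfo(triggerId: string, minutesWatched: number): string {
  const preset = INTERACTION_DB[triggerId];
  if (preset !== undefined) return preset; // preconfigured lookup
  // generated from the operation behavior information
  return `You have watched ${minutesWatched} minutes of video today.`;
}
```

The same lookup-or-generate pattern applies to the second interaction information keyed by the satisfied second trigger condition.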
In an optional embodiment of the present application, the operation behavior information includes a user operation instruction, and the obtaining, according to the operation behavior information, first interaction information corresponding to the operation behavior information may include:
and analyzing the user operation instruction, and generating first interaction information corresponding to the user operation instruction according to an analysis result of the user operation instruction.
As an alternative, in order for the virtual object to truly "communicate" with the user, when an operation instruction of the user is received, the instruction may be analyzed and the interaction information corresponding to it generated based on the analysis result. The user operation instruction includes, but is not limited to, a voice instruction of the user, a touch operation instruction on the application interface, and the like. As an example, an intelligent voice function can be integrated in the application; based on this function, the user's voice instruction can be received and recognized, and interaction with the user can be realized based on the recognition result, improving the user experience. The user operation instruction may also include, but is not limited to, an operation instruction directed at the virtual object; for example, when the obtained user operation instruction satisfies a preset condition, certain interactions may be performed with the user through the virtual object, so as to improve the experience of using the application.
In an optional embodiment of the present application, a default display state of the virtual object is a hidden state, and the method may further include:
and when the preset display condition is met, controlling the virtual object of the user to be displayed on the display interface of the application program.
The display condition includes, but is not limited to, when the user is interacted with through the virtual object of the user.
In practical applications, in order to avoid the influence of the virtual object being always displayed on the display interface of the application program on the user, the configuration may be such that the virtual object is displayed on the user interface only when a preset display condition is met, and the virtual object is in a hidden state under other conditions.
The appearance scenes of the virtual object, that is, the display conditions, can be configured according to actual requirements. For example: when the user needs to be interacted with through the virtual object based on the first interaction information or the second interaction information, the virtual object may be controlled to appear on the user interface along a preset trajectory; the virtual object may be displayed each time the user logs in or reopens the application, at which time the user may also have a simple interaction with it, such as the virtual object greeting the user; the virtual object may appear only after the user's continuous usage duration of the application (such as the continuous video watching duration) reaches a certain limit value; or the virtual object may be displayed after the user has logged in to the application for a certain accumulated number of consecutive days, or after the user finishes watching a certain series of videos (for video applications).
As an example, when interaction with a user through a virtual object is required, the virtual object may be controlled to be displayed on an application user interface in a preset manner, as shown in fig. 4d, when interaction with the user through the virtual object is required, the virtual object may be controlled to enter from the upper right corner of the user interface, interaction with the user is completed through playing animation and interactive text based on information that interaction is required, and after the interaction is completed, the virtual object may be controlled to retreat from the upper right corner.
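The default-hidden display logic above can be sketched as a predicate over the configured display conditions; the specific conditions and the limit value here are assumptions drawn from the examples in the text.

```typescript
// Hypothetical sketch: the virtual object stays hidden by default and is
// shown only when one of the preconfigured display conditions is met.
interface DisplayContext {
  hasPendingInteraction: boolean; // interaction info waiting to be delivered
  justOpenedApp: boolean;         // user logged in / reopened the application
  continuousUseMinutes: number;   // current continuous-use duration
}

const USE_LIMIT_MINUTES = 60; // assumed limit value

function shouldShowVirtualObject(ctx: DisplayContext): boolean {
  return (
    ctx.hasPendingInteraction ||
    ctx.justOpenedApp ||
    ctx.continuousUseMinutes >= USE_LIMIT_MINUTES
  );
}
```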
The virtual-object-based user guidance method provided by the embodiment of the application guides users (such as children) to form good application usage habits by raising a virtual object (such as a virtual pet) within the application. Compared with the existing scheme of forcibly limiting the application's usage duration, this scheme is friendlier and easier for users to accept, and through the interaction between the virtual object and the user, the user can be guided to gradually form good habits in using the application (such as watching videos through it).
For video applications, according to the scheme of the embodiment of the application, the user's video watching duration and the types of videos watched can influence the virtual object's attribute information, and through interaction between the virtual object and the user, the user can be guided to develop good video watching habits. In addition, since watching positively motivating videos promotes the virtual pet's growth, users (especially children) can be guided to watch more such videos, which benefits their development and knowledge accumulation. Moreover, watching videos within a certain duration promotes the pet's growth, while watching for too long adversely affects the pet's growth and state; because the user's video watching behavior influences the pet's attribute information, the user's watching behavior can be further guided based on changes in that attribute information.
The scheme of the embodiment of the application creatively provides a virtual-object-based user behavior guidance method. Specifically, as a virtual-pet-based viewing-habit formation scheme, the virtual object serves as the supervisor and guide of the user's application usage behavior, and based on the user's operation behavior information on the application, the user (such as a child) is guided to form good application usage habits through interaction between the virtual pet and the user.
The scheme of the embodiment of the application solves the problem that in existing schemes, once a user (especially a child) learns how to remove an application usage limit (such as a viewing limit), the limit becomes nearly meaningless. By raising the virtual pet, the user becomes interested in the pet's growth; when the user uses the application within certain limits (e.g., a certain length of time), the virtual object is rewarded with growth, and if the user continues to use the application beyond those limits, the virtual object interacts with the user, informing the user that continuing to watch will affect the virtual object's growth.
According to actual application requirements, brand publicity elements can be added to the design of the virtual object, thereby improving the user's recognition of the brand. Besides adding brand elements to the virtual object's appearance, different festival elements can be added for different festivals to enhance the virtual object's attraction. In practical applications, in order to make the user pay more attention to the virtual object's attribute information (such as a virtual pet's growth), the object's appearance needs to hold a certain attraction for the user, and the communication language between the object and the user needs to be interesting enough. Besides guiding the user to use the application in moderation, the user can also be guided to watch content beneficial to the user's development within the application, such as guiding children to watch science videos or videos related to traditional culture; in this case, when the user watches videos of the relevant categories, the virtual object obtains growth value. Especially for children, the scheme of the embodiment of the application can convert the existing mode in which parents forcibly limit a child's application usage (such as video watching duration) into a mode in which the child actively develops good application usage habits.
In order to better illustrate the solutions provided in the examples of the present application, alternative embodiments of the examples of the present application are further described below with reference to specific examples.
Examples of the invention
In this example, the virtual object is a virtual pet, and the application is a video application (i.e., an application having a video playing function, referred to below simply as the application). The operation behavior information that needs to be acquired in this example includes the duration of the videos watched by the user in one day, i.e., the video playing duration, and the types of the videos watched. The user's terminal device is exemplified by a smartphone. After installing the video application on the smartphone, the user can complete the virtual pet adoption when opening the application for the first time; for the specific manner of adoption, refer to the corresponding description above (the virtual pet adoption manner shown in FIG. 3). FIG. 5 is a flowchart of the virtual-pet-based user guidance method in this example; as shown in the figure, the method mainly includes the following steps:
step S50: the user plays the video;
specifically, in this step, after the user opens the video application program on the smartphone, a video playing operation is performed, such as clicking a certain video.
Step S51: determining whether the user has adopted a virtual pet;
After the user's video playing operation is received, it may be determined whether the user has already adopted a virtual pet. If so, step S52 may be performed; if not, the method may end. Alternatively, if the user did not adopt a virtual pet when opening the application for the first time or before this playback, the user may be prompted to adopt one when this determination is made during playback; if the user again declines to adopt a pet, the flow of the scheme of the embodiment of the application ends. After the user adopts a virtual pet, each time the user opens the application for the first time in a day, the virtual pet can interact with the user through animation, voice, and the like, and disappears and is hidden after the interaction finishes.
Step S52: recording video playing duration and the played video category;
When the user opens a video to watch, the virtual pet can start recording the user's video watching time, i.e., the playing duration. After the user stops watching or finishes a video segment, the virtual pet can stop recording the watching time, and meanwhile different interactions can be carried out with the user according to the current total watching duration.
Step S53: judging whether the playing time length exceeds the limit (namely the set time length);
For example, during video playback, it may be determined whether the playing duration, i.e., the user's video watching duration, exceeds the limit duration, and corresponding processing may be performed according to the determination result. If the watching duration does not exceed the limit duration, the process may proceed to step S54; if it exceeds the limit duration, the process may proceed to step S55.
Step S54: increasing the growth value of the virtual pet;
Specifically, if the watching duration does not exceed the limit duration, the growth value of the virtual pet increases as the watching duration increases. In addition, the virtual pet can prompt through animation and voice, for example: "After watching xx minutes of video, the pet feels itself growing", encouraging the user to raise the pet's growth value while staying within the watching duration limit.
Step S55: pausing the playing, and prompting that the watching duration exceeds the limit by the virtual pet;
Specifically, if the watching duration exceeds the limit duration, video playback can be paused (or left playing), and the virtual pet interacts with the user through animation, voice, and other means to prompt that the watching duration has exceeded the limit, for example: "You have watched for a long time today, why not take a break?". The user may further be prompted whether to continue playing, i.e., the process proceeds to step S56.
Step S56: prompting the user whether to continue playing;
After the prompt, if the user chooses to stop playing, the video can be stopped and closed, and the process ends. If the user chooses to continue playing, the user can be prompted that continuing will adversely affect the virtual pet; as shown in the figure, continuing to play may cause the growth value of the user's virtual pet to decrease. In practical applications, a plurality of video watching limit durations can be set: each time a limit duration is exceeded, if the user chooses to continue playing, the user's video playing duration continues to be recorded, and when the playing duration exceeds the next limit duration, the user is again prompted and guided through interaction between the virtual pet and the user.
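The multiple watch-time limits described above can be sketched as follows; the limit values are assumptions for illustration.

```typescript
// Hypothetical sketch: each time the accumulated playing duration crosses the
// next configured limit, the pet should prompt the user again.
const LIMITS_MINUTES = [30, 60, 90]; // assumed tiered limits

// Returns the limits crossed between the previous total and the new total,
// i.e. which prompts should fire after this playback segment.
function crossedLimits(prevTotal: number, newTotal: number): number[] {
  return LIMITS_MINUTES.filter((l) => prevTotal < l && newTotal >= l);
}
```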
As an alternative, fig. 6 shows a frame schematic diagram of an implementation of a data processing manner when a virtual pet interacts with a user according to an embodiment of the present application, and the structures of the parts shown in the diagram are described as follows:
where, Pet is an abstraction of the virtual Pet, and mainly records the related attributes of the Pet, including nickname, appearance, growth value (growth point), state (state), and so on.
The GrowthManager is responsible for the growth value calculation logic of the virtual pet, including but not limited to updating the virtual pet's growth value (updatePetGrowth shown in the figure), acquiring video classification information (the video growth mapping shown in the figure), and communicating with the UserActionManager to receive video viewing completion information (notifyVideoCompleted) and video closing information (notifyVideoClosed).
UserActionManager is the entry for user interaction, and is responsible for user behavior identification and recording (onUserEvent).
The PetManager is responsible for determining whether the pet needs to be awakened currently to interact with the user according to the state of the virtual pet and the behavior of the user, and issuing specific interaction content (interaction).
The PetViewer is an implementer of interaction between the virtual pet and the user, and completes pet interaction in different scenes (Scene 1, Scene2 and the like shown in the figure) according to a specific interaction instruction issued by the PetManager.
The scheme provided by the embodiment of the present application is further described with reference to fig. 6, and the data processing flow involved in the interaction flow of the virtual pet and the user includes:
1. Video classification information preloading: when the application is installed for the first time, it requests the latest version of the video classification information (including the mapping between positively motivating videos and their corresponding growth factors, and the data version number of the classification information) through the GrowthManager and stores it locally. Thereafter, each time the application starts, it can check whether the local video classification information is the latest version; if not, it can request the new version and update the local data.
2. User behavior identification and recording, i.e., acquisition of operation behavior information: the user's operation behaviors within the application are uniformly dispatched to the UserActionManager, which identifies and records the user's playback behavior. When the UserActionManager identifies that the current user is playing a video, it starts a timer (Timer shown in the figure). At each preset interval (e.g., 5 s), the timer accumulates the user's total playback duration for the day, and updates the growth value of the virtual pet through the GrowthManager according to the total playback duration and the type of the video currently being viewed. When the user finishes watching the video or closes it, the timer stops, and the GrowthManager is notified that the user operation has terminated.
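The timer-driven accumulation described above might be sketched as follows; a real implementation would schedule ticks with a platform timer, and all names besides updatePetGrowth are hypothetical:

```python
class PlaybackTimer:
    """Accumulates today's total playback duration in fixed ticks.

    Ticks are driven manually here for clarity; in practice a platform
    timer would fire every `interval` seconds while a video plays.
    """
    def __init__(self, growth_manager, interval=5):
        self.growth_manager = growth_manager
        self.interval = interval
        self.total_seconds = 0
        self.running = False

    def start(self, video_type):
        self.video_type = video_type
        self.running = True

    def tick(self):
        # Called once per `interval` seconds while playback continues:
        # accumulate the day's total and push a growth update.
        if self.running:
            self.total_seconds += self.interval
            self.growth_manager.updatePetGrowth(
                self.total_seconds, self.video_type)

    def stop(self):
        # Viewing finished or video closed: halt and notify termination.
        self.running = False
        self.growth_manager.notify_user_operation_terminated()
```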
3. Interaction of the pet with the user: the GrowthManager notifies the PetManager by broadcast that the growth value of the virtual pet has been updated. When the growth value reaches a certain threshold, the PetManager completes the interaction with the user through the PetViewer. Similarly, when it is identified that the user's video viewing duration exceeds a specified duration, or that the user's viewing action has finished, the PetManager issues an interaction instruction, and the PetViewer carries out the specific user interaction, for example by means of animation (cartoon), voice, or text (words).
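The two trigger conditions described above could be combined in a single decision helper; the function name, return format, and the priority between the two conditions are illustrative assumptions, since the embodiment only states that either condition may trigger an interaction:

```python
def decide_interaction(growth, growth_threshold,
                       watched_seconds, duration_limit, viewing_ended):
    """Return an (instruction, message) pair to issue, or None.

    Either the growth value crossing its threshold, or the viewing
    duration exceeding its limit (or the viewing action finishing),
    triggers an interaction.
    """
    if growth >= growth_threshold:
        return ("celebrate", "growth value reached {}".format(growth))
    if watched_seconds >= duration_limit or viewing_ended:
        return ("remind", "you have watched long enough for today")
    return None
```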
Based on the same principle as the method shown in fig. 2, an embodiment of the present application further provides a virtual-object-based user guidance apparatus. As shown in fig. 7, the user guidance apparatus 100 may include a behavior information obtaining module 110, an interaction information obtaining module 120, and an interaction module 130. Wherein:
a behavior information obtaining module 110, configured to obtain operation behavior information of a user on an application program, where a virtual object is provided in the application program;
the interaction information obtaining module 120 is configured to obtain first interaction information corresponding to the operation behavior information according to the operation behavior information;
and the interaction module 130 is configured to provide the first interaction information to the user through the virtual object of the user, so as to guide the user through the first interaction information.
Optionally, the operation behavior information includes usage information of a specified function in the application program by the user.
Optionally, the application includes a video application, and the designated function includes a video playing function.
Optionally, the operation behavior information includes at least one of a video playing time length and a video type of a played video.
Optionally, when the interaction information obtaining module obtains the first interaction information corresponding to the operation behavior information according to the operation behavior information, the interaction information obtaining module is specifically configured to:
and when the operation behavior information meets a preset first interaction triggering condition, acquiring first interaction information corresponding to the operation behavior information.
Optionally, the apparatus further comprises:
and the attribute information updating module is used for updating the attribute information of the virtual object of the user according to the operation behavior information.
Optionally, if the application includes a video application, and the operation behavior information includes a video type of the video to be played, the attribute information updating module is specifically configured to, when updating the attribute information of the virtual object of the user according to the operation behavior information:
and updating the attribute information of the virtual object of the user according to the video type and the attribute updating factor coefficient corresponding to the video type.
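As a sketch of this weighted update: the linear form and the example coefficients below are assumptions, since the embodiment only specifies that the update depends on the video type and its attribute-update factor coefficient:

```python
def update_growth(current_growth, watch_minutes, video_type, factors):
    """Weight the growth increment by the video type's factor coefficient.

    `factors` maps a video type to its attribute-update factor
    coefficient, e.g. {"science": 1.5, "entertainment": 0.5}; unknown
    types fall back to a neutral factor of 1.0.
    """
    return current_growth + watch_minutes * factors.get(video_type, 1.0)
```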
Optionally, the attribute information includes at least one of the following information:
growth value, level, status, skill.
Optionally, the interaction information obtaining module is further configured to:
when the attribute information meets a preset second interaction triggering condition, acquiring second interaction information corresponding to the attribute information;
and interacting with the user through the virtual object of the user based on the second interaction information.
Optionally, when the interaction information obtaining module obtains the first interaction information corresponding to the operation behavior information according to the operation behavior information, the interaction information obtaining module is specifically configured to:
searching for the corresponding first interaction information in a pre-configured interaction information database according to the operation behavior information; or,
and generating first interaction information according to the operation behavior information.
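The two alternatives, database lookup versus on-the-fly generation, might be combined as follows; the key scheme and function names are hypothetical:

```python
def get_first_interaction_info(behavior, database, generate):
    """Prefer a pre-configured entry; fall back to generation.

    `database` maps an operation-behavior key to pre-configured
    interaction information; `generate` builds interaction information
    from the behavior on the fly.
    """
    key = behavior.get("type")
    if key in database:
        return database[key]
    return generate(behavior)
```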
Optionally, the operation behavior information includes a user operation instruction, and when the interaction information obtaining module obtains first interaction information corresponding to the operation behavior information according to the operation behavior information, the interaction information obtaining module is specifically configured to:
and analyzing the user operation instruction, and generating first interaction information corresponding to the user operation instruction according to an analysis result of the user operation instruction.
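A sketch of this instruction parsing; the "action:target" instruction format and the example guidance strings are invented for illustration, since the embodiment only requires parsing the instruction and generating interaction information from the parse result:

```python
def interaction_from_instruction(instruction):
    """Parse a user operation instruction and build matching guidance."""
    action, _, target = instruction.partition(":")
    if action == "open":
        return "Tip: you can also reach '{}' from the home tab.".format(target)
    if action == "search":
        return "Try filtering '{}' results by date.".format(target)
    # Unrecognized instructions get a generic encouragement.
    return "Keep exploring!"
```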
Optionally, when the interaction module provides the first interaction information to the user through the virtual object of the user, the interaction module is specifically configured to:
providing the first interaction information to the user through the virtual object of the user based on at least one of the following ways:
animation, voice, text.
Optionally, the default display state of the virtual object is a hidden state, and the apparatus further includes:
and the virtual object display control module is used for controlling the virtual object of the user to be displayed on the display interface of the application program when the preset display condition is met.
Optionally, the display condition includes when interacting with the user through the virtual object of the user.
Optionally, the virtual object comprises a virtual pet.
Optionally, the apparatus further includes an object information setting module, where the object information setting module is configured to:
receiving a setting operation of a user on object information of a virtual object of the user;
and updating the object information according to the setting operation.
Optionally, the apparatus further comprises:
and the video information acquisition module is used for acquiring and storing the mapping relation between each video type and the attribute update factor coefficient corresponding to each video type.
Since the apparatus provided in the embodiments of the present invention is capable of executing the method of the embodiments of the present invention, those skilled in the art can understand the specific implementation of the apparatus, and its various modifications, based on the method provided herein; therefore, how the apparatus implements the method is not described in detail here. Any apparatus used by those skilled in the art to implement the method of the embodiments of the present invention falls within the scope of the present application.
Based on the same principle as the method shown in fig. 2 and the apparatus shown in fig. 7, the present application also provides an electronic device, which includes a memory and a processor. The memory is configured to store operation instructions, and the processor is configured to invoke the operation instructions to execute the method shown in any optional embodiment of the present application.
The embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the method shown in any optional embodiment of the present application.
As an example, fig. 8 shows a schematic structural diagram of an electronic device to which the embodiments of the present application are applicable. As shown in fig. 8, the electronic device 4000 includes a processor 4001 and a memory 4003. The processor 4001 is connected to the memory 4003, for example via the bus 4002. Optionally, the electronic device 4000 may further include a transceiver 4004 for communicating with other electronic devices to transmit and receive data. In practical applications, the number of transceivers 4004 is not limited to one, and the structure of the electronic device 4000 does not limit the embodiments of the present application.
The processor 4001 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor 4001 may also be a combination that performs computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The bus 4002 may include a path that carries information between the aforementioned components. The bus 4002 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 4002 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 8, but this does not mean that there is only one bus or one type of bus.
The memory 4003 may be a ROM (Read Only Memory) or another type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or another type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read Only Memory), a CD-ROM (Compact Disc Read Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or another magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
The memory 4003 is used for storing application code for executing the solution of the present application, and execution is controlled by the processor 4001. The processor 4001 is configured to execute the application code stored in the memory 4003 to implement the contents shown in any of the foregoing method embodiments.
It should be understood that, although the steps in the flowcharts of the figures are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, there is no strict restriction on the order of these steps, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different times, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present invention. It should be noted that those skilled in the art can make various improvements and refinements without departing from the principle of the present invention, and these improvements and refinements shall also fall within the protection scope of the present invention.

Claims (15)

1. A method for guiding a user based on a virtual object is characterized by comprising the following steps:
acquiring operation behavior information of a user on an application program, wherein a virtual object is provided in the application program;
acquiring first interaction information corresponding to the operation behavior information according to the operation behavior information;
providing the first interaction information to the user through the virtual object of the user to guide the user through the first interaction information.
2. The method of claim 1, wherein the operational behavior information comprises usage information of a specified function within the application by the user.
3. The method of claim 2, wherein the application comprises a video-class application and the specified function comprises a video playback function.
4. The method of claim 3, wherein the operation behavior information comprises at least one of a video playing time length and a video type of a played video.
5. The method according to any one of claims 1 to 4, wherein the obtaining, according to the operation behavior information, first interaction information corresponding to the operation behavior information includes:
and when the operation behavior information meets a preset first interaction triggering condition, acquiring first interaction information corresponding to the operation behavior information.
6. The method of any of claims 1 to 4, further comprising:
and updating the attribute information of the virtual object of the user according to the operation behavior information.
7. The method according to claim 6, wherein if the application includes a video-class application and the operation behavior information includes a video type of a video to be played, the updating the attribute information of the virtual object of the user according to the operation behavior information includes:
and updating the attribute information of the virtual object of the user according to the video type and the attribute updating factor coefficient corresponding to the video type.
8. The method of claim 6, wherein the attribute information comprises at least one of the following information:
growth value, level, status, skill.
9. The method of claim 6, further comprising:
when the attribute information meets a preset second interaction triggering condition, acquiring second interaction information corresponding to the attribute information;
and providing the second interaction information to the user through the virtual object.
10. The method according to any one of claims 1 to 9, wherein the obtaining, according to the operation behavior information, first interaction information corresponding to the operation behavior information includes:
searching for corresponding first interaction information in a pre-configured interaction information database according to the operation behavior information; or,
and generating the first interaction information according to the operation behavior information.
11. The method according to any one of claims 1 to 9, wherein the operation behavior information includes a user operation instruction, and the obtaining first interaction information corresponding to the operation behavior information according to the operation behavior information includes:
and analyzing the user operation instruction, and generating first interaction information corresponding to the user operation instruction according to an analysis result of the user operation instruction.
12. The method according to any one of claims 1 to 8, wherein the providing the first interaction information to the user through the virtual object of the user comprises:
providing the first interaction information to the user through the user's virtual object based on at least one of:
animation, voice, text.
13. A virtual object-based user guidance apparatus, comprising:
the behavior information acquisition module is used for acquiring operation behavior information of a user on an application program, wherein a virtual object is provided in the application program;
the interaction information acquisition module is used for acquiring first interaction information corresponding to the operation behavior information according to the operation behavior information;
and the interaction module is used for providing the first interaction information for the user through the virtual object of the user so as to guide the user through the first interaction information.
14. An electronic device, comprising a memory and a processor;
the memory is configured to store operating instructions;
the processor is used for calling the operation instruction to execute the method of any one of claims 1 to 12.
15. A computer-readable storage medium, characterized in that the readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the method of any one of claims 1 to 12.
CN201910272789.9A 2019-04-04 2019-04-04 User guiding method, device, equipment and storage medium based on virtual object Active CN111784271B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910272789.9A CN111784271B (en) 2019-04-04 2019-04-04 User guiding method, device, equipment and storage medium based on virtual object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910272789.9A CN111784271B (en) 2019-04-04 2019-04-04 User guiding method, device, equipment and storage medium based on virtual object

Publications (2)

Publication Number Publication Date
CN111784271A true CN111784271A (en) 2020-10-16
CN111784271B CN111784271B (en) 2023-09-19

Family

ID=72754995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910272789.9A Active CN111784271B (en) 2019-04-04 2019-04-04 User guiding method, device, equipment and storage medium based on virtual object

Country Status (1)

Country Link
CN (1) CN111784271B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113157952A (en) * 2021-04-29 2021-07-23 北京达佳互联信息技术有限公司 Information display method and device, terminal and server
CN113901239A (en) * 2021-09-30 2022-01-07 北京字跳网络技术有限公司 Information display method, device, equipment and storage medium
CN114895970A (en) * 2021-01-26 2022-08-12 博泰车联网科技(上海)股份有限公司 Virtual character growing method and related device
CN115113963A (en) * 2022-06-29 2022-09-27 北京百度网讯科技有限公司 Information display method and device, electronic equipment and storage medium
WO2022242313A1 (en) * 2021-05-19 2022-11-24 腾讯科技(深圳)有限公司 Control method and apparatus for application program, device, and computer-readable storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090132361A1 (en) * 2007-11-21 2009-05-21 Microsoft Corporation Consumable advertising in a virtual world
CN103327406A (en) * 2013-05-27 2013-09-25 中山大学 Digital television anti-addiction method and system based on android system
US20140208239A1 (en) * 2013-01-24 2014-07-24 MyRooms, Inc. Graphical aggregation of virtualized network communication
US20160357491A1 (en) * 2015-06-02 2016-12-08 Canon Kabushiki Kaisha Information processing apparatus, information processing method, non-transitory computer-readable storage medium, and system
CN107728895A (en) * 2017-10-25 2018-02-23 中国移动通信集团公司 A kind of processing method of virtual objects, device and storage medium
CN108540858A (en) * 2018-04-13 2018-09-14 广东小天才科技有限公司 A kind of method, apparatus and equipment that prevent user from indulging TV programme
WO2018225149A1 (en) * 2017-06-06 2018-12-13 マクセル株式会社 Mixed reality display system and mixed reality display terminal
CN109547633A (en) * 2018-11-26 2019-03-29 努比亚技术有限公司 A kind of based reminding method, terminal and computer readable storage medium
CN109562294A (en) * 2016-07-05 2019-04-02 乐高公司 Method for creating virtual objects

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090132361A1 (en) * 2007-11-21 2009-05-21 Microsoft Corporation Consumable advertising in a virtual world
US20140208239A1 (en) * 2013-01-24 2014-07-24 MyRooms, Inc. Graphical aggregation of virtualized network communication
CN103327406A (en) * 2013-05-27 2013-09-25 中山大学 Digital television anti-addiction method and system based on android system
US20160357491A1 (en) * 2015-06-02 2016-12-08 Canon Kabushiki Kaisha Information processing apparatus, information processing method, non-transitory computer-readable storage medium, and system
CN106227329A (en) * 2015-06-02 2016-12-14 佳能株式会社 Information processor, information processing method and system
CN109562294A (en) * 2016-07-05 2019-04-02 乐高公司 Method for creating virtual objects
WO2018225149A1 (en) * 2017-06-06 2018-12-13 マクセル株式会社 Mixed reality display system and mixed reality display terminal
CN107728895A (en) * 2017-10-25 2018-02-23 中国移动通信集团公司 A kind of processing method of virtual objects, device and storage medium
CN108540858A (en) * 2018-04-13 2018-09-14 广东小天才科技有限公司 A kind of method, apparatus and equipment that prevent user from indulging TV programme
CN109547633A (en) * 2018-11-26 2019-03-29 努比亚技术有限公司 A kind of based reminding method, terminal and computer readable storage medium

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114895970A (en) * 2021-01-26 2022-08-12 博泰车联网科技(上海)股份有限公司 Virtual character growing method and related device
CN114895970B (en) * 2021-01-26 2024-02-27 博泰车联网科技(上海)股份有限公司 Virtual character growth method and related device
CN113157952A (en) * 2021-04-29 2021-07-23 北京达佳互联信息技术有限公司 Information display method and device, terminal and server
WO2022242313A1 (en) * 2021-05-19 2022-11-24 腾讯科技(深圳)有限公司 Control method and apparatus for application program, device, and computer-readable storage medium
CN113901239A (en) * 2021-09-30 2022-01-07 北京字跳网络技术有限公司 Information display method, device, equipment and storage medium
CN115113963A (en) * 2022-06-29 2022-09-27 北京百度网讯科技有限公司 Information display method and device, electronic equipment and storage medium
CN115113963B (en) * 2022-06-29 2023-04-07 北京百度网讯科技有限公司 Information display method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111784271B (en) 2023-09-19

Similar Documents

Publication Publication Date Title
CN111784271A (en) User guiding method, device, equipment and storage medium based on virtual object
CN109034115B (en) Video image recognizing method, device, terminal and storage medium
CN108650555B (en) Video interface display method, interactive information generation method, player and server
CN108762843A (en) Preloading method, apparatus, storage medium and the intelligent terminal of application program
CN108829453A (en) Configuration method, device, terminal and the storage medium of sensor
CN112866787B (en) Bullet screen setting method, device and system
CN111343473B (en) Data processing method and device for live application, electronic equipment and storage medium
CN108845840A (en) Management method, device, storage medium and the intelligent terminal of application program sound
CN108776599A (en) Management method, device, storage medium and the intelligent terminal of preloaded applications
CN108710516A (en) Acquisition method, device, storage medium and the intelligent terminal of forecast sample
CN108762836A (en) Management method, device, storage medium and the intelligent terminal of preloaded applications
CN112306321A (en) Information display method, device and equipment and computer readable storage medium
CN113852767B (en) Video editing method, device, equipment and medium
CN112950294B (en) Information sharing method and device, electronic equipment and storage medium
CN111343508B (en) Information display control method and device, electronic equipment and storage medium
WO2023174218A1 (en) Game data processing method and apparatus, and electronic device and storage medium
CN111176600A (en) Video canvas control method, video monitoring device and storage medium
KR20190001776A (en) Apparatus, method and computer program for providing contents during installation of game program
CN114217715A (en) Rich media playing page control method and device, electronic equipment and storage medium
CN114007145A (en) Subtitle display method and display equipment
CN112948017A (en) Guide information display method, device, terminal and storage medium
CN110716679A (en) Application management method, storage medium and electronic device
CN109726267A (en) Story recommended method and device for Story machine
CN112527164B (en) Method and device for switching function keys, electronic equipment and storage medium
CN113473200B (en) Multimedia resource processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40031400

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant