CN112199002B - Interaction method and device based on virtual role, storage medium and computer equipment - Google Patents

Info

Publication number
CN112199002B
CN112199002B (application CN202011056375.1A)
Authority
CN
China
Prior art keywords
information
interaction
target
task
target virtual
Prior art date
Legal status
Active
Application number
CN202011056375.1A
Other languages
Chinese (zh)
Other versions
CN112199002A (en)
Inventor
付凌燕
贺迪
Current Assignee
Perfect World Animation Beijing Technology Co., Ltd.
Original Assignee
Perfect World Animation Beijing Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Perfect World Animation Beijing Technology Co., Ltd.
Priority to CN202111042523.9A (published as CN113760142A)
Priority to CN202011056375.1A (granted as CN112199002B)
Publication of CN112199002A
Application granted
Publication of CN112199002B
Legal status: Active (current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B7/02Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student

Abstract

The application discloses an interaction method and device based on virtual roles, a storage medium and computer equipment, wherein the method comprises the following steps: responding to an interaction starting instruction between a target user and a target virtual character, and outputting interaction starting prompt information corresponding to the target virtual character according to an interaction display rule corresponding to the target virtual character, wherein the target virtual character corresponds to the interaction starting instruction; establishing interactive connection between the target user and the target virtual role based on the received feedback information of the interactive starting instruction; and acquiring interactive information between the target user and the target virtual character, and outputting target virtual character interactive information corresponding to the interactive information based on the virtual attribute information of the target virtual character.

Description

Interaction method and device based on virtual role, storage medium and computer equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to an interaction method and apparatus based on virtual roles, a storage medium, and a computer device.
Background
With the increasing intelligence of electronic devices, more and more devices are used for children's entertainment and education, such as learning machines, robots, tablet computers, mobile phones, and wearable devices. A wearable device generally refers to a portable device that is worn directly on the user or integrated into the user's clothing or accessories, such as a smart watch or a smart bracelet. A wearable device is not merely a piece of hardware: through software it interacts with a remote server and, via a cloud server, exchanges information with mobile terminals such as smartphones, thereby supporting a wide range of functions. At present, more and more parents choose to equip their children with wearable devices in order to stay in touch at any time, and such devices offer a growing number of functions. In the prior art, a parent can use a mobile terminal to set tasks and reminders on the child's wearable device so as to regulate the child's daily routine or study habits. In fixed scenarios, devices such as learning machines, mobile phones, and robots can likewise be used to carry out learning plans or cultivate good habits. However, children have limited self-control and are easily drawn to play, and the human-computer interaction in the prior art is dull: the interaction experience is poor and the interaction effect needs to be improved.
Disclosure of Invention
According to an aspect of the present application, there is provided a virtual character-based interaction method for a client, including:
responding to an interaction starting instruction between a target user and a target virtual character, and outputting interaction starting prompt information corresponding to the target virtual character according to an interaction display rule corresponding to the target virtual character, wherein the target virtual character corresponds to the interaction starting instruction;
establishing interactive connection between the target user and the target virtual role based on the received feedback information of the interactive starting instruction;
and acquiring interactive information between the target user and the target virtual character, and outputting target virtual character interactive information corresponding to the interactive information based on the virtual attribute information of the target virtual character.
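For illustrative purposes only, the following minimal Python sketch outlines the three steps of the method described above (outputting the start prompt according to the character's interaction display rule, establishing the connection after feedback, and expressing the interactable information through the character's virtual attributes); the class, field, and method names are assumptions made for this sketch and do not appear in the application.

```python
# Illustrative sketch only: VirtualCharacter, InteractionClient and their fields are
# hypothetical names, not terms defined by this application.
from dataclasses import dataclass, field


@dataclass
class VirtualCharacter:
    name: str
    attributes: dict = field(default_factory=dict)    # virtual attribute info: voice, personality, skill, growth
    display_rule: dict = field(default_factory=dict)  # interaction display rule, e.g. incoming-call style


class InteractionClient:
    def output_start_prompt(self, character: VirtualCharacter) -> None:
        # Step 1: output interaction start prompt information per the character's display rule.
        style = character.display_rule.get("style", "incoming_call")
        print(f"[{style}] {character.name} wants to interact")

    def establish_connection(self, feedback_accepted: bool) -> bool:
        # Step 2: the interactive connection is established only after feedback is received.
        return feedback_accepted

    def output_interaction_info(self, character: VirtualCharacter, interactable_info: str) -> str:
        # Step 3: express the interactable information using the character's virtual attributes.
        tone = character.attributes.get("tone", "friendly")
        return f"{character.name} ({tone}): {interactable_info}"
```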
Optionally, the virtual attribute information includes at least one of sound attribute information, character attribute information, skill attribute information, and growth attribute information corresponding to the target virtual character.
Optionally, the interaction start instruction includes a first interaction start instruction, and before the interaction start prompt information corresponding to the target virtual character is output according to the interaction display rule corresponding to the target virtual character, the method further includes: receiving the first interaction start instruction input by the target user; and/or,
the interaction start instruction includes a second interaction start instruction, and before the interaction start prompt information corresponding to the target virtual character is output according to the interaction display rule corresponding to the target virtual character, the method further includes: receiving the second interaction start instruction from a target real role terminal corresponding to the target user, wherein the second interaction start instruction includes identification information of the target virtual character; and/or,
the interaction start instruction includes a third interaction start instruction, and before the interaction start prompt information corresponding to the target virtual character is output according to the interaction display rule corresponding to the target virtual character, the method further includes: acquiring the third interaction start instruction generated after a trigger condition corresponding to the interactable information is fulfilled.
Optionally, the first interaction start instruction is generated by triggering an interaction function corresponding to the target virtual character in a preset contact list, where the contact list includes identification information of the target virtual character and identification information of the real character, and the interaction function at least includes a call function and/or a chat function.
Optionally, when the interaction start instruction includes the first interaction start instruction, the obtaining of the interactable information between the target user and the target virtual character, and outputting target virtual character interaction information corresponding to the interactable information based on the virtual attribute information of the target virtual character specifically includes:
acquiring question information input by the target user, and inquiring whether the question information matches a preset system question, wherein the interactable information comprises the question information input by the target user;
if the question information matches the preset system question, first reply information corresponding to the question information is acquired, expression processing is performed on the first reply information based on the virtual attribute information to generate corresponding target virtual character interaction information, and the target virtual character interaction information is output through the target virtual character.
Optionally, the first reply information is determined based on a preset question and answer pair database and/or is determined based on a network search result corresponding to the question information and/or is determined based on a sensing result of a sensing device corresponding to the client.
Optionally, after querying whether the question information matches a preset system question, the method further includes:
and if the question information is not matched with the preset system question, requesting second reply information corresponding to the question information from a target real role terminal corresponding to the target user, expressing the second reply information based on the virtual attribute information to generate corresponding target virtual role interaction information, and outputting the target virtual role interaction information through the target virtual role.
Optionally, the requesting, from the target real role terminal corresponding to the target user, second reply information corresponding to the question information specifically includes:
sending the question information to the target real role terminal, and receiving the second reply information fed back by the target real role terminal; or,
and sending the question information to a preset server, and receiving the second reply information fed back from the preset server, wherein the preset server is used for determining and feeding back the second reply information matched with the question information based on a server database, or for sending the question information to the target real character terminal and forwarding the second reply information fed back by the target real character terminal.
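As an illustration only, the following sketch assumes a simple dictionary of preset question-answer pairs and a callback standing in for the channel to the target real role terminal (or the preset server that relays to it); all names are invented for this example.

```python
# Hypothetical question matching: PRESET_QA stands in for the preset system questions.
PRESET_QA = {
    "why is the sky blue": "Sunlight is scattered by the air, and blue light scatters the most.",
}


def answer_question(question: str, request_second_reply) -> str:
    key = question.strip().lower().rstrip("?")
    if key in PRESET_QA:                    # question matches a preset system question
        return PRESET_QA[key]               # first reply information
    return request_second_reply(question)   # otherwise request second reply information


# Example: the callback forwards unmatched questions to the parent's terminal (simulated here).
reply = answer_question("Why is the sky blue?", lambda q: f"(forwarded to parent) {q}")
print(reply)
```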
Optionally, when the interaction start instruction includes the second interaction start instruction, the obtaining of the interactable information between the target user and the target virtual character, and outputting target virtual character interaction information corresponding to the interactable information based on the virtual attribute information of the target virtual character specifically includes:
acquiring information to be forwarded sent by the target real role terminal, wherein the interactable information comprises the information to be forwarded;
performing expression processing on the information to be forwarded based on the virtual attribute information of the target virtual character to generate corresponding target virtual character interaction information, and outputting the target virtual character interaction information through the target virtual character.
Optionally, when the interaction start instruction includes the third interaction start instruction, the obtaining of the interactable information between the target user and the target virtual character, and outputting target virtual character interaction information corresponding to the interactable information based on the virtual attribute information of the target virtual character specifically includes:
acquiring an interaction task between the target user and the target virtual character, wherein the interactable information comprises the interaction task;
outputting virtual execution information of the target virtual character on the interaction task based on the virtual attribute information, acquiring real execution information of the target user on the interaction task, and correcting the virtual execution information in real time according to the real execution information, wherein the target virtual character interaction information comprises the virtual execution information;
and when the real execution information indicates that the target user completes the interaction task and/or the virtual execution information indicates that the target virtual character completes the interaction task, determining that the interaction task is completed.
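Purely for illustration, the sketch below shows one way the task loop described above could be organized: the virtual character's execution of the task is rendered step by step and corrected in real time by the user's real execution information; the callbacks and field names are assumptions.

```python
# Illustrative task loop; read_real_execution would parse voice/video/biometric input in practice.
def run_interaction_task(steps, render_virtual_step, read_real_execution) -> bool:
    for step in steps:
        render_virtual_step(step)              # virtual execution info: demonstration / guidance
        real = read_real_execution()           # real execution info of the target user
        if real.get("needs_repeat"):
            render_virtual_step(step)          # correct the virtual execution to the user's pace
        if real.get("task_completed"):
            return True                        # the real execution info says the task is done
    return True                                # the virtual character has finished all steps
```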
Optionally, the virtual execution information includes, but is not limited to, a task presentation of the interaction task by the target virtual character, a solution to a problem associated with the interaction task, and a guidance for execution of the interaction task.
Optionally, the obtaining of the actual execution information of the interaction task by the target user specifically includes:
acquiring real action information of the target user through a preset information acquisition device, wherein the real execution information comprises voice information of the target user acquired through a voice acquisition device, and/or video information of the target user acquired through a video acquisition device, and/or fingerprint information of the target user acquired through a biological identification device;
and analyzing the real action information to obtain the real execution information, wherein the real execution information is used for reflecting the execution condition of the target user on the interaction task.
Optionally, the outputting the virtual execution information of the target virtual character on the interaction task based on the virtual attribute information specifically includes:
outputting virtual execution information of the target virtual character on the interaction task based on the virtual attribute information and user attribute information of the target user, wherein the user attribute information includes but is not limited to at least one of gender attribute information, age attribute information, grade attribute information, preference attribute information, personality attribute information, skill attribute information and growth attribute information corresponding to the target user.
Optionally, after determining that the interaction task is completed, the method further includes:
and updating the user attribute information of the target user according to the real execution information.
Optionally, before the obtaining of the third interaction start instruction generated after the trigger condition corresponding to the interactable information is fulfilled, the method further includes:
monitoring an interaction task schedule, wherein the interaction task schedule comprises at least one interaction task plan and a target virtual character corresponding to each interaction task plan, and the interaction task plan comprises but is not limited to a learning plan, a sports plan and a living habit development plan;
when a trigger condition corresponding to any interactive task plan in the interactive task plan table is achieved, generating a third interactive starting instruction corresponding to the any interactive task plan, wherein the third interactive starting instruction comprises a target virtual role corresponding to the any interactive task plan;
the acquiring of the interaction task between the target user and the target virtual character specifically includes:
and acquiring the interaction task corresponding to the third interaction starting instruction.
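A minimal scheduling sketch follows, assuming each plan entry records a start time and the virtual character assigned to it; the structure and field names are illustrative only.

```python
import datetime


def poll_task_schedule(schedule, now=None):
    """Generate third interaction start instructions for plans whose trigger time has arrived."""
    if now is None:
        now = datetime.datetime.now()
    instructions = []
    for plan in schedule:
        if not plan.get("started") and plan["start_time"] <= now:
            plan["started"] = True
            instructions.append({
                "type": "third_interaction_start",
                "plan": plan["name"],            # e.g. a learning, sports, or habit plan
                "character": plan["character"],  # target virtual character for this plan
            })
    return instructions
```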
Optionally, before monitoring the interaction task schedule, the method further includes:
and according to plan creation input data, determining the at least one interactive task plan, the target virtual role corresponding to each interactive task plan and the virtual attribute information corresponding to each target virtual role, and determining the interactive task plan table.
Optionally, before monitoring the interaction task schedule, the method further includes:
and determining the at least one interaction task plan, a target virtual role corresponding to each interaction task plan and virtual attribute information corresponding to each target virtual role according to the user attribute information of the target user and a preset interaction task plan database, and determining the interaction task plan table, wherein the interaction task plan is matched with the user attribute information, the target virtual role is matched with the task attribute information corresponding to the interaction task plan and the user attribute information, and the task attribute information comprises a task type and/or a task execution scene and/or task difficulty.
Optionally, the interaction task schedule further comprises a first visualized credential corresponding to each interaction task plan and/or a second visualized credential corresponding to the interaction task schedule, the first visualized credential and/or the second visualized credential being determined based on plan creation input data and/or based on user attribute information of the target user and/or task attribute information corresponding to the interaction task plan;
after the determination that the interaction task is completed, the method further comprises:
and issuing the first visualized credential and/or the second visualized credential corresponding to the interaction task according to a preset credential issuing rule.
Optionally, the first visualized credential and/or the second visualized credential are used to determine achievement information of the target user and/or redeem a virtual resource.
Optionally, the method further comprises:
in the process of outputting the target virtual character interaction information, if a fourth interaction starting instruction is received, acquiring a first priority corresponding to the current interactive information and a second priority corresponding to the fourth interaction starting instruction;
and when the second priority is higher than the first priority, responding to the fourth interaction starting instruction, and outputting interaction starting prompt information of the target virtual role corresponding to the fourth interaction starting instruction.
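For illustration, the following sketch assumes numeric priorities in which a larger value means a higher priority; the names are hypothetical.

```python
def handle_fourth_instruction(current_priority: int, new_priority: int, show_new_prompt) -> bool:
    # A fourth interaction start instruction interrupts the current output only when
    # its priority is higher than that of the current interactable information.
    if new_priority > current_priority:
        show_new_prompt()   # output the start prompt of the character tied to the new instruction
        return True         # caller should suspend the current interaction
    return False            # otherwise the current interaction continues
```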
Optionally, the method further comprises:
acquiring actual behavior information of the target user through a preset information acquisition device, wherein the actual behavior information comprises voice information of the target user acquired through a voice acquisition device and/or video information of the target user acquired through a video acquisition device;
inputting the actual behavior information into a pre-constructed behavior recognition model to obtain a behavior identifier of the actual behavior included in the actual behavior information;
inquiring a preset education resource database to obtain education information corresponding to the behavior identification;
outputting the education information based on the virtual character attribute of the target virtual character currently interacted with the target user or an education character attribute corresponding to an education virtual character corresponding to the education information.
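The behavior-to-education pipeline could be sketched as below; the recognizer, the resource table, and the persona-output callback are placeholders for the behavior recognition model, the education resource database, and the character output described above.

```python
# Illustrative mapping from behavior identifiers to education information.
EDUCATION_RESOURCES = {
    "littering": "Litter belongs in the bin. Shall we go find one together?",
    "sharing_toys": "Nice sharing! Friends who share have more fun together.",
}


def educate_from_behavior(behavior_sample, recognize_behavior, speak_as_character) -> None:
    behavior_id = recognize_behavior(behavior_sample)   # behavior identifier from the model
    message = EDUCATION_RESOURCES.get(behavior_id)      # education info for that behavior
    if message:
        speak_as_character(message)                     # voiced with the relevant character's persona
```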
According to another aspect of the present application, there is provided a virtual character-based interaction apparatus for a client, including:
the starting prompt information output module is used for responding to an interaction starting instruction between a target user and a target virtual role and outputting interaction starting prompt information corresponding to the target virtual role according to an interaction display rule corresponding to the target virtual role, wherein the target virtual role corresponds to the interaction starting instruction;
the interaction establishing module is used for establishing the interactive connection between the target user and the target virtual role based on the received feedback information of the interaction starting instruction;
and the interaction information output module is used for acquiring the interactive information between the target user and the target virtual character and outputting the target virtual character interaction information corresponding to the interactive information based on the virtual attribute information of the target virtual character.
Optionally, the virtual attribute information includes at least one of sound attribute information, character attribute information, skill attribute information, and growth attribute information corresponding to the target virtual character.
Optionally, the interaction initiating instruction includes a first interaction initiating instruction, and the apparatus further includes: the first interaction instruction receiving module is used for receiving the first interaction starting instruction input by the target user before the interaction starting prompt information corresponding to the target virtual character is output according to the interaction display rule corresponding to the target virtual character; and/or,
the interaction initiating instruction comprises a second interaction initiating instruction, and the device further comprises: a second interaction instruction receiving module, configured to receive a second interaction start instruction from a target real character terminal corresponding to the target user before outputting an interaction start prompt message corresponding to the target virtual character according to an interaction display rule corresponding to the target virtual character, where the second interaction start instruction includes identification information of the target virtual character; and/or,
the interaction initiating instruction comprises a third interaction initiating instruction, and the apparatus further comprises: and the third interactive instruction receiving module is used for acquiring a third interactive starting instruction generated after the trigger condition corresponding to the interactive information is achieved before the interactive starting prompt information corresponding to the target virtual character is output according to the interactive display rule corresponding to the target virtual character.
Optionally, the first interaction start instruction is generated by triggering an interaction function corresponding to the target virtual character in a preset contact list, where the contact list includes identification information of the target virtual character and identification information of the real character, and the interaction function at least includes a call function and/or a chat function.
Optionally, when the interaction start instruction includes the first interaction start instruction, the interaction information output module specifically includes:
the question matching unit is used for acquiring question information input by the target user and inquiring whether the question information matches a preset system question, wherein the interactable information comprises the question information input by the target user;
a first reply information output unit, configured to, if the question information matches the preset system question, obtain first reply information corresponding to the question information, perform expression processing on the first reply information based on the virtual attribute information to generate corresponding target virtual character interaction information, and output the target virtual character interaction information through the target virtual character.
Optionally, the first reply information is determined based on a preset question and answer pair database and/or is determined based on a network search result corresponding to the question information and/or is determined based on a sensing result of a sensing device corresponding to the client.
Optionally, the apparatus further comprises:
and a second reply information output unit, configured to, after querying whether the question information matches a preset system question, request, if the question information does not match the preset system question, second reply information corresponding to the question information from a target real role terminal corresponding to the target user, perform expression processing on the second reply information based on the virtual attribute information to generate corresponding target virtual role interaction information, and output the target virtual role interaction information through the target virtual role.
Optionally, the second reply information output unit is specifically configured to:
sending the question information to the target real role terminal, and receiving the second reply information fed back by the target real role terminal; or,
and sending the question information to a preset server, and receiving the second reply information fed back from the preset server, wherein the preset server is used for determining and feeding back the second reply information matched with the question information based on a server database, or for sending the question information to the target real character terminal and forwarding the second reply information fed back by the target real character terminal.
Optionally, when the interaction start instruction includes the second interaction start instruction, the interaction information output module specifically includes:
a to-be-forwarded information obtaining unit, configured to obtain to-be-forwarded information sent by the target real character terminal, where the interactable information includes the to-be-forwarded information;
and the to-be-forwarded information output unit is used for performing expression processing on the information to be forwarded based on the virtual attribute information of the target virtual character to generate corresponding target virtual character interaction information, and outputting the target virtual character interaction information through the target virtual character.
Optionally, when the interaction start instruction includes the third interaction start instruction, the interaction information output module specifically includes:
the interaction task obtaining unit is used for obtaining an interaction task between the target user and the target virtual role, wherein the interactable information comprises the interaction task;
the interaction task execution unit is used for outputting virtual execution information of the target virtual character on the interaction task based on the virtual attribute information, acquiring real execution information of the target user on the interaction task and correcting the virtual execution information in real time according to the real execution information, wherein the target virtual character interaction information comprises the virtual execution information;
and the interaction task termination unit is used for determining that the interaction task is completed when the real execution information indicates that the target user completes the interaction task and/or the virtual execution information indicates that the target virtual role completes the interaction task.
Optionally, the virtual execution information includes, but is not limited to, a task presentation of the interaction task by the target virtual character, a solution to a problem associated with the interaction task, and a guidance for execution of the interaction task.
Optionally, the interaction task execution unit specifically includes:
the real action acquisition subunit is used for acquiring real action information of the target user through a preset information acquisition device, wherein the real execution information comprises voice information of the target user acquired through a voice acquisition device, and/or video information of the target user acquired through a video acquisition device, and/or fingerprint information of the target user acquired through a biological identification device;
and the real execution information analysis subunit is used for analyzing the real action information to obtain the real execution information, wherein the real execution information is used for reflecting the execution condition of the target user on the interaction task.
Optionally, the interaction task execution unit specifically includes:
and a virtual execution information output subunit, configured to output virtual execution information of the target virtual character on the interaction task based on the virtual attribute information and user attribute information of the target user, where the user attribute information includes, but is not limited to, at least one of gender attribute information, age attribute information, grade attribute information, preference attribute information, personality attribute information, skill attribute information, and growth attribute information corresponding to the target user.
Optionally, the apparatus further comprises:
and the user attribute updating module is used for updating the user attribute information of the target user according to the real execution information after the interactive task is determined to be completed.
Optionally, the apparatus further comprises:
a schedule monitoring module, configured to monitor an interaction task schedule before obtaining a third interaction start instruction generated after a trigger condition corresponding to the interactable information is fulfilled, where the interaction task schedule includes at least one interaction task schedule and a target virtual role corresponding to each interaction task schedule, and the interaction task schedule includes, but is not limited to, a learning schedule, a sports schedule, and a lifestyle habit development schedule;
a third interaction instruction generating module, configured to generate a third interaction starting instruction corresponding to any one of the interaction task plans when a trigger condition corresponding to the any one of the interaction task plans in the interaction task plan table is achieved, where the third interaction starting instruction includes a target virtual role corresponding to the any one of the interaction task plans;
the interaction task obtaining unit is specifically configured to obtain the interaction task corresponding to the third interaction starting instruction.
Optionally, the apparatus further comprises:
the first schedule creation module is used for determining the at least one interaction task plan, the target virtual role corresponding to each interaction task plan and the virtual attribute information corresponding to each target virtual role according to plan creation input data before the interaction task plan is monitored, and determining the interaction task plan.
Optionally, the apparatus further comprises:
and a second schedule creation module, configured to determine, before the monitoring of the interactive task schedule, the at least one interactive task schedule, a target virtual role corresponding to each of the interactive task schedules, and virtual attribute information corresponding to each of the target virtual roles according to user attribute information of the target user and a preset interactive task schedule database, and determine the interactive task schedule, where the interactive task schedule is matched with the user attribute information, the target virtual role is matched with the task attribute information corresponding to the interactive task schedule and the user attribute information, and the task attribute information includes a task type and/or a task execution scenario and/or a task difficulty.
Optionally, the interaction task schedule further comprises a first visualized credential corresponding to each interaction task plan and/or a second visualized credential corresponding to the interaction task schedule, the first visualized credential and/or the second visualized credential being determined based on plan creation input data and/or based on user attribute information of the target user and/or task attribute information corresponding to the interaction task plan;
the device further comprises:
and the credential issuing module is used for issuing the first visualized credential and/or the second visualized credential corresponding to the interaction task according to a preset credential issuing rule after the interaction task is determined to be completed.
Optionally, the first visualized credential and/or the second visualized credential are used to determine achievement information of the target user and/or redeem a virtual resource.
Optionally, the apparatus further comprises:
a fourth interactive instruction receiving module, configured to, in an output process of the target virtual character interactive information, if a fourth interactive start instruction is received, obtain a first priority corresponding to the current interactive information and a second priority corresponding to the fourth interactive start instruction;
and the fourth interactive instruction prompt module is used for responding to the fourth interactive starting instruction and outputting interactive starting prompt information of the target virtual role corresponding to the fourth interactive starting instruction when the second priority is higher than the first priority.
Optionally, the apparatus further comprises:
the actual behavior acquisition module is used for acquiring actual behavior information of the target user through a preset information acquisition device, wherein the actual behavior information comprises voice information of the target user acquired through a voice acquisition device and/or video information of the target user acquired through a video acquisition device;
the behavior identifier determining module is used for inputting the actual behavior information into a pre-constructed behavior recognition model to obtain a behavior identifier of the actual behavior included in the actual behavior information;
the education information query module is used for querying a preset education resource database to obtain education information corresponding to the behavior identification;
an educational information output module for outputting the educational information based on the virtual character attribute of the target virtual character currently interacting with the target user or an educational character attribute corresponding to an educational virtual character corresponding to the educational information.
According to yet another aspect of the present application, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described virtual character-based interaction method.
According to still another aspect of the present application, there is provided a computer device, including a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor, the processor implementing the virtual character-based interaction method when executing the program.
By means of the above technical solution, the virtual character-based interaction method and apparatus, storage medium, and computer device respond to an interaction start instruction by outputting corresponding interaction start prompt information according to the interaction display rule of the target virtual character associated with that instruction, thereby prompting the target user to establish an interactive connection with the target virtual character. After feedback information for the interaction start instruction is received, the interactive connection between the target user and the target virtual character is established. The interactable information between the target user and the target virtual character is then acquired, and, based on the virtual attribute information of the target virtual character, target virtual character interaction information corresponding to the interactable information is generated and output with the target virtual character as the carrier, so that the target user and the target virtual character interact with each other. Because the interaction start prompt information is displayed according to the interaction display rule used for real characters, the prompt received by the target user is the same as, or similar to, the prompt produced for a real character; and because the target virtual character's interaction information is output according to its virtual attribute information, the information output with the target virtual character as the carrier matches the character's attributes. This makes it easier for the target user to accept the virtual character and to regard it as a real partner that exists independently of the device, allowing the virtual character to better accompany the target user.
The foregoing is merely an overview of the technical solution of the present application. In order that the technical means of the present application can be understood more clearly and implemented according to the content of the specification, and in order that the above and other objects, features, and advantages of the present application become more readily apparent, detailed embodiments of the present application are described below.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a flowchart illustrating an interaction method based on a virtual character according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating another interaction method based on virtual roles provided in the embodiment of the present application;
FIG. 3 is a flowchart illustrating another interaction method based on virtual roles provided in the embodiment of the present application;
FIG. 4 is a flowchart illustrating another interaction method based on virtual roles provided in the embodiment of the present application;
FIG. 5 is a schematic structural diagram of an interaction apparatus based on a virtual character according to an embodiment of the present application;
fig. 6 shows a schematic structural diagram of another virtual character-based interaction device provided in an embodiment of the present application.
Detailed Description
The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
In this embodiment, an interaction method based on a virtual character is provided, as shown in fig. 1, the method includes:
Step 101, responding to an interaction starting instruction between a target user and a target virtual character, and outputting interaction starting prompt information corresponding to the target virtual character according to an interaction display rule corresponding to the target virtual character, wherein the target virtual character corresponds to the interaction starting instruction;
the embodiment of the application is mainly applied to a client held by a target user, and the client provides a brand-new intelligent interaction mode, for example, a control user (such as a parent) controls a virtual partner through controlling the client to remotely control the virtual partner, so that the virtual partner (i.e. a virtual character) guides the target user to complete tasks, learn knowledge, develop habits, communicate various ideas and the like in the client held by the target user (such as a child) to meet the requirements of various control users and the target user, and achieve the goals of accompanying, guiding, helping, education, growth and the like of the target user; meanwhile, the target user learns various knowledge or acquires various good habits, ideas, value views and the like under the leading of the virtual partner. In the application of the brand-new intelligent interaction mode, the target user can agree that the virtual partner is a real partner existing outside the intelligent device, and the goals of happiness, health, growth and the like are achieved under the accompanying guidance of the virtual partner. This new intelligent interaction mode is applicable but not limited to: smart watches, smart robots, story tellers, touch and talk pens, computers, tablet computers, and the like. For convenience of understanding and description, the following description of function description and implementation is described by taking the use of a child smart watch as an example, wherein a child is a target user, characters such as "lightning", "rainbow" and the like of members of an animation film "universe guard team" are virtual partners, and parents are control users. And are intended to be broadly generic in the sense that they are not specifically illustrated.
Definition of the control user: the control user in this new intelligent interaction mode may be a different role depending on the application scenario. For example, in a parent-child companionship scenario the control user is the parent; in a teacher-student companionship scenario the control user is the teacher; in a scenario where a child accompanies an elderly parent, the control user is the adult child; and so on.
Definition of the target user: the target user in this new intelligent interaction mode may likewise be a different role depending on the application scenario. For example, in a parent-child companionship scenario the target user is the child; in a teacher-student companionship scenario the target user is the student; in a scenario where a child accompanies an elderly parent, the target user is the parent; and so on.
Definition of the virtual partner: the virtual partner in this new intelligent interaction mode may be a different virtual character depending on the application scenario. The virtual partner is recognized by both the target user and the control user and has an image and a voice that both of them enjoy; it is a virtual image or voice independent of the control user side and the target user side. Moreover, in the course of using this interaction mode, the target user comes to accept that the virtual partner is a real partner that exists independently outside the client, so that the target user's needs are fully met. For example, in a parent-child companionship scenario, an animated character liked by both the parent and the child may be chosen as the virtual partner, such as "Storm" or "Rainbow" of the "Universe Guard Team". With such a setting, the child is more willing to accept the interaction with the virtual partner, feels happier and less lonely in its company, obtains both interaction and emotional companionship, completes various tasks more actively, and the control user can guide and accompany the child more conveniently and effectively.
The embodiments of the present application may be implemented by means of specific software on the client; this may be a single piece of software with multiple functions that implements the various functions provided by the embodiments, or several pieces of software that each implement different functions. In the above embodiment, the client responds to an interaction start instruction, which is an instruction used to establish an interactive connection between the target user and the target virtual character. The interaction start instruction may be initiated actively by the target user: for example, when a child wants to chat or play a game with a virtual partner, the child can actively issue an interaction start instruction for a target virtual character, specifically by tapping the virtual partner's avatar in the contact list of the software or by voice wake-up, so as to start a voice or video call with the virtual partner. The interaction start instruction may also be generated by the software itself: for example, when the current time reaches the start time of a learning task preset in the software, the software outputs an interaction start instruction directed at the target user, where the preset learning task corresponds to a specific virtual partner. The interaction start instruction may also be generated based on a control instruction of the control user: for example, a parent sends a task instruction to the target user's client, and the task instruction may carry the virtual partner selected by the parent, such as a homework task in which the parent designates the virtual partner "Rainbow" to accompany the child while doing homework; alternatively, the virtual partner corresponding to the task instruction sent by the parent may be determined in a preset or randomly assigned manner, for example the parent's task instruction is completed together with the "Lightning" partner.
The client responds to the interaction start instruction, determines the interaction display rule of the target virtual character corresponding to the instruction, and outputs interaction start prompt information according to that rule. For example, if the interaction start instruction is a learning-task instruction generated by the software which specifies that the task is to be completed in the company of the virtual partner "Rainbow", the interaction display rule corresponding to "Rainbow" is obtained. The interaction display rule of the target virtual character is determined on the basis of the interaction display rule of a real character, supplemented with the characteristic information of the target virtual character. For example, the interaction start prompt information may be an incoming-call prompt from the virtual partner "Rainbow": the prompt is displayed in the same way as an incoming call from a real contact in the address book, except that the caller's avatar is replaced with Rainbow's avatar, and the ringtone may either be the same as that used for a real contact or be a ringtone specific to Rainbow. In this way the interaction-start presentation of the virtual character is the same as, or similar to, that of a real character (such as dad or mom), helping the target user to recognize the virtual character as a "real" partner that exists outside the device.
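By way of illustration only, the incoming-call example above could be assembled roughly as follows, reusing the template of a real contact and substituting only the identity fields of the virtual character; the keys and file names are invented for this sketch.

```python
# Hypothetical incoming-call template shared with real contacts.
REAL_CALL_TEMPLATE = {"layout": "fullscreen_incoming_call", "ringtone": "default.mp3"}


def build_call_prompt(character: dict) -> dict:
    prompt = dict(REAL_CALL_TEMPLATE)                                   # same rule as a real contact
    prompt["name"] = character.get("name", "")
    prompt["avatar"] = character.get("avatar", "unknown.png")           # replaced identity field
    prompt["ringtone"] = character.get("ringtone", prompt["ringtone"])  # optional special ringtone
    return prompt


print(build_call_prompt({"name": "Rainbow", "avatar": "rainbow.png", "ringtone": "rainbow.mp3"}))
```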
Step 102, establishing an interactive connection between the target user and the target virtual character based on the received feedback information of the interaction starting instruction;
In the above embodiment, the client may output the interaction start prompt information by means of sound, images, vibration, and the like, and receive feedback information for the interaction start instruction. For an interaction start instruction initiated by the software or generated based on parental control information, the interaction start prompt information may be an incoming-call prompt from the target virtual character, and the feedback information is the answering feedback for that call; for an interaction start instruction actively initiated by the target user, the interaction start prompt information may be the audio and visual cues of placing a call to the target virtual character, and the feedback information is the target virtual character answering the call. After confirmation feedback for the interaction start instruction is received, the target user can begin interacting with the target virtual character, for example chatting with it, completing a task issued by it, or completing a predetermined task together with it.
Step 103, obtaining the interactive information between the target user and the target virtual character, and outputting the target virtual character interactive information corresponding to the interactive information based on the virtual attribute information of the target virtual character.
In the above embodiment, after the interactive connection between the target user and the target virtual character is established, the interactable information between the target user and the target virtual character is further acquired. For an interaction start instruction initiated by the software or generated based on the parent's control information in the scenarios described above, the interactable information may be an interaction task preset in the software or a task arranged by the parent; for an interaction start instruction actively initiated by the target user, the interactable information may be text, voice, picture, or video information input by the target user. Further, according to the virtual attribute information of the target virtual character and the interactable information, the client outputs target virtual character interaction information corresponding to the interactable information with the target virtual character as the carrier, for example chatting with the child in the personality, tone, and manner of speaking of the virtual partner "Rainbow", or completing a rope-skipping task with the child in the image of the virtual partner "Lightning". In this way, the interaction information that the client outputs in the "identity" of the target virtual character stays close to that character's persona, helping the target user to recognize the virtual character as a "real" partner that is independent of the device.
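A small dispatch sketch follows, showing how the interactable information might be sourced depending on which kind of start instruction opened the session; the dictionary keys are assumptions made for this example.

```python
def get_interactable_info(instruction: dict):
    kind = instruction["kind"]
    if kind == "user_initiated":        # first instruction: the user's text/voice/picture/video input
        return instruction["user_input"]
    if kind == "parent_initiated":      # second instruction: information to be forwarded by the parent
        return instruction["forward_info"]
    if kind == "schedule_triggered":    # third instruction: the preset interaction task
        return instruction["task"]
    raise ValueError(f"unknown instruction kind: {kind}")
```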
By applying the technical solution of this embodiment, in response to an interaction start instruction, corresponding interaction start prompt information is output according to the interaction display rule of the target virtual character associated with that instruction, prompting the target user to establish an interactive connection with the target virtual character; the interactive connection between the target user and the target virtual character is established after confirmation feedback information for the interaction start instruction is received; the interactable information between the target user and the target virtual character is then acquired, and, based on the virtual attribute information of the target virtual character, target virtual character interaction information corresponding to the interactable information is generated and output with the target virtual character as the carrier, so that the target user and the target virtual character interact. Because the interaction start prompt information is displayed according to the interaction display rule used for real characters, the prompt received by the target user is the same as, or similar to, that of a real character; and because the target virtual character's interaction information is output according to its virtual attribute information, the information output with the target virtual character as the carrier matches the character's attributes. This makes it easier for the target user to accept the virtual character and to regard it as a real partner that exists independently of the device, allowing the virtual character to better accompany the target user.
Further, as a refinement and an extension of the specific implementation of the above embodiment, in order to fully illustrate the specific implementation process of the embodiment, the following several embodiments of the interaction method based on virtual roles are provided.
The first mode, as shown in fig. 2, includes:
Step 201, receiving a first interaction starting instruction input by a target user;
in the embodiment of the application, specifically, the interaction starting instruction includes a first interaction starting instruction, and the first interaction starting instruction is generated by triggering an interaction function corresponding to a target virtual character in a preset contact list, where the contact list includes identification information of the target virtual character and identification information of a real character, and the interaction function at least includes a call function and/or a chat function.
In the above embodiment, in a practical application scenario, the first interaction start instruction may be an instruction initiated by the target user to make a voice call, a video call, or a dialog-box chat with the target virtual character; for example, a child initiates a voice call by tapping the avatar of the virtual partner "rainbow" in the contact list, and this action serves as the first interaction start instruction. It should also be noted that, in the preset contact list, the target virtual character and the real characters exist side by side, and the display mode and the interaction-function triggering mode of the target virtual character and the real characters are the same, so that the target user has an immersive experience when initiating interaction with the virtual character and feels the "reality" of the virtual partner.
Step 202, responding to an interaction starting instruction between a target user and a target virtual character, and outputting interaction starting prompt information corresponding to the target virtual character according to an interaction display rule corresponding to the target virtual character, wherein the target virtual character corresponds to the interaction starting instruction;
In step 202, in response to the first interaction start instruction, interaction start prompt information corresponding to the target virtual character is output. This prompt information is determined based on the interaction display rule corresponding to the target virtual character, and that rule is in turn determined from the interaction display rule corresponding to a real character; in other words, the interaction start prompt information for the target virtual character follows the same display rule as that for a real character, with only the identity-related content in the display replaced by the identity information of the target virtual character, which may include avatar information, nickname information, phone number information, and the like. For example, in the scenario where a child initiates a voice call with the virtual partner "rainbow", the display interface for calling "rainbow" (or for initiating a voice call in communication software) is shown and a calling alert tone is output.
Step 203, establishing interactive connection between the target user and the target virtual character based on the received feedback information of the interactive starting instruction;
In step 203, a confirmation feedback message for the interaction start instruction (specifically, the first interaction start instruction in this embodiment) is received. For example, when the child places a call to the virtual partner "rainbow", whether "rainbow" answers is determined by querying "rainbow"'s profile settings: if "rainbow" is preset to be in a studying state from 8:00 to 10:00 in the morning, a call placed during that period is not answered, whereas a call placed outside that period yields confirmation feedback information. The query result serves as the feedback information for the interaction start instruction, and the interactive connection between the target user and the target virtual character is then established based on that feedback information; specifically, in the above example, the call between the child and "rainbow" is connected.
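By way of illustration only, the answering decision described above can be reduced to a simple schedule lookup. The sketch below assumes a hypothetical profile structure and helper names (busy_periods, can_answer_call) that are not defined in this application:

```python
from datetime import datetime, time
from typing import Optional

# Hypothetical profile data for the virtual partner "rainbow"; the field names
# are illustrative and not part of this application.
RAINBOW_PROFILE = {
    "busy_periods": [(time(8, 0), time(10, 0))],  # preset "studying" period
}

def can_answer_call(profile: dict, now: Optional[datetime] = None) -> bool:
    """Return True if the character's preset schedule allows the call to be answered."""
    current = (now or datetime.now()).time()
    return not any(start <= current <= end
                   for start, end in profile["busy_periods"])

# The query result serves as the feedback information for the first interaction
# start instruction: the call is connected only when the character "answers".
if can_answer_call(RAINBOW_PROFILE):
    print("rainbow answers the call; establish the interactive connection")
else:
    print("rainbow is studying; the call is not answered")
```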
Step 204, acquiring question information input by a target user, and inquiring whether the question information is matched with a preset system question, wherein the interactive information comprises the question information input by the target user;
In step 204, after the interactive connection between the target user and the target virtual character is established, the question information input by the target user is obtained; for example, in a scenario where a child calls the virtual partner "rainbow" to chat, the question information is the child's chat content. The child-watch terminal has a built-in speech recognition module, which converts the chat content collected by the voice acquisition device into a machine-readable form such as text or codes, and then queries whether the child's chat content matches a preset system question. The preset system questions include questions with preset answers, such as "how do you say Monday in English", as well as questions whose answers can be obtained through network search or other channels, such as "what is the weather like today".
Step 205, if the question information matches the preset system question, obtaining first reply information corresponding to the question information, performing expression processing on the first reply information based on the virtual attribute information to generate corresponding target virtual character interaction information, and outputting the target virtual character interaction information through the target virtual character.
In step 205, when the question information input by the target user matches a preset system question, the first reply information corresponding to the question information can be obtained directly in a preset manner. Specifically, the first reply information is determined based on a preset question-and-answer pair database, and/or based on a network search result corresponding to the question information, and/or based on a sensing result of a sensing device corresponding to the client. Taking the input "what is the weather like today" as an example, the first reply information may be obtained by searching the current day's weather forecast on the Internet, or sensing data may be obtained from a temperature sensing device, a humidity sensing device, and the like built into or associated with the child watch, and the first reply information determined based on that sensing data.
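As a non-limiting sketch of resolving the first reply information in the order just described (preset question-and-answer pairs, then network search, then local sensing devices), the following fragment uses hypothetical helper names (QA_DATABASE, answer_from_web_search, answer_from_sensors) and placeholder sensor readings that are not part of this application:

```python
from typing import Optional

# Hypothetical preset question-and-answer pairs built into the child watch.
QA_DATABASE = {
    "how do you say monday in english": "Monday",
}

def answer_from_web_search(question: str) -> Optional[str]:
    """Assumed network-search backend; returns None when offline or no result."""
    return None

def answer_from_sensors(question: str) -> Optional[str]:
    """Assumed wrapper around the watch's temperature/humidity sensing devices."""
    if "weather" in question:
        temperature_c = 23.5   # placeholder sensor reading
        humidity_pct = 40      # placeholder sensor reading
        return f"It is about {temperature_c} degrees with {humidity_pct}% humidity."
    return None

def get_first_reply(question: str) -> Optional[str]:
    """Resolve the first reply information: preset Q&A pairs, then web search, then sensors."""
    key = question.strip().lower()
    return (QA_DATABASE.get(key)
            or answer_from_web_search(key)
            or answer_from_sensors(key))
```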
Further, after the first reply information is obtained, in order to express a realistic interaction effect between the target user and the target virtual character, the first reply information is expression-processed based on the virtual attribute information corresponding to the target virtual character, and target virtual character interaction information matching that virtual attribute information is generated, so that the information is output with the target virtual character as its carrier, in the identity of the target virtual character. The virtual attribute information may include preset information of an acoustic model corresponding to the target virtual character, such as a preset speech speed, preset volume, preset pitch, preset timbre, preset intonation, and preset prosodic rhythm, as well as other basic speech-synthesis information.
The above expression processing may be implemented as follows: an acoustic model is obtained from a preset acoustic model library according to the identification information corresponding to the target virtual character, the preset information of the acoustic model including several of a preset speech speed, preset volume, preset pitch, preset timbre, preset intonation, and preset prosodic rhythm; and speech synthesis is performed on the first reply information through the acoustic model.
The acoustic model library may include a plurality of acoustic models, for example a general acoustic model and several personalized acoustic models corresponding to target virtual characters. The acoustic models may be neural network models trained in advance on different corpora. Each acoustic model corresponds to its own preset information, that is, each acoustic model is bound to specific preset information, which serves as basic input information for that model. For example, the preset information of the general acoustic model may include two or more of the model's preset speech speed, preset volume, preset pitch, preset timbre, preset intonation, and preset prosodic rhythm; the preset information of a personalized acoustic model may, in addition to two or more of these, include other personalized information, such as language-style features including a catchphrase, a response style for specific scenarios, an intelligence type, a personality type, mixed-in popular expressions or dialect, and forms of address for specific characters. It should be understood that the preset information of different acoustic models, such as preset speech speed, preset volume, preset pitch, preset timbre, preset intonation, and preset prosodic rhythm, differs between models; for example, the preset information of a personalized acoustic model may differ markedly from that of the general acoustic model. In the embodiment of the application, the acoustic model converts the reply text into reply speech.
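The following is a minimal, purely illustrative sketch of the acoustic model library and expression processing described above; the AcousticModel structure, its fields, and the synthesize placeholder are assumptions, and a real implementation would invoke a trained neural text-to-speech model at that point:

```python
from dataclasses import dataclass

@dataclass
class AcousticModel:
    """Hypothetical container binding an acoustic model to its preset information."""
    character_id: str
    speech_speed: float = 1.0   # preset speech speed
    volume: float = 1.0         # preset volume
    pitch: float = 1.0          # preset pitch
    timbre: str = "default"     # preset timbre
    catchphrase: str = ""       # personalized language-style feature

    def synthesize(self, text: str) -> bytes:
        # Placeholder: a real implementation would run a trained neural TTS model
        # here, applying the preset information above to the generated speech.
        styled = f"{self.catchphrase} {text}".strip()
        return styled.encode("utf-8")

# Hypothetical acoustic model library keyed by virtual-character identification.
ACOUSTIC_MODEL_LIBRARY = {
    "generic": AcousticModel(character_id="generic"),
    "rainbow": AcousticModel(character_id="rainbow", pitch=1.2,
                             speech_speed=0.95, catchphrase="Hey hey!"),
}

def express_reply(character_id: str, reply_text: str) -> bytes:
    """Expression processing: pick the character's acoustic model and synthesize speech."""
    model = ACOUSTIC_MODEL_LIBRARY.get(character_id,
                                       ACOUSTIC_MODEL_LIBRARY["generic"])
    return model.synthesize(reply_text)
```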
It should be noted that the step of expression-processing the first reply information to obtain the target virtual character interaction information may be completed in the child watch, or the child watch may send the first reply information to a preset server, use the server's stronger data-processing capability to perform the expression processing, receive the resulting target virtual character interaction information back from the server, and output it at the child-watch terminal. The execution subject of the expression-processing step may be determined by considering the processing capability of the child watch, the network status, and the privacy level of the information, which is not limited herein.
And step 206, if the question information is not matched with the preset system question, requesting second reply information corresponding to the question information from a target real role terminal corresponding to the target user, expressing the second reply information based on the virtual attribute information to generate corresponding target virtual role interaction information, and outputting the target virtual role interaction information through the target virtual role.
In step 206, when the question information input by the target user does not match any preset system question, a reply to the question information needs to be requested from a preset target real character terminal corresponding to the target user. In a practical application scenario, the target real character terminal may be a smart device held by a parent; for example, if the child inputs an open-ended question such as "why don't the other children like to play with me", the child-watch terminal may request a reply to that question from the smart device held by the parent.
Specifically, the question information is sent to the target real role terminal, and second reply information fed back by the target real role terminal is received; or, the question information is sent to a preset server, and second reply information fed back by the preset server is received, wherein the preset server is used for determining and feeding back the second reply information matched with the question information based on a server database, or sending the question information to the target real character terminal and forwarding the second reply information fed back by the target real character terminal.
In the above embodiment, the question information may be sent to the preset server, and the preset server may forward the question directly to the parent, or query the question database for an answer matching the question information so that the question information and the queried answer are sent to the parent together; after the parent determines a reply to the question information, the preset server forwards the reply content, or the parent sends the reply content directly to the child-watch terminal. In addition, in order to protect user privacy, the question information may instead be sent directly to the smart device held by the parent, and the second reply information fed back by the parent based on the question information received from it.
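A hedged sketch of requesting the second reply, either through the preset server or directly from the parent's terminal, is given below; the URLs, the JSON payload, and the reply field are hypothetical and only illustrate the two request paths described above:

```python
import json
from typing import Optional
from urllib import request

PARENT_TERMINAL_URL = "http://parent-terminal.example/reply"   # hypothetical
SERVER_URL = "http://preset-server.example/forward"            # hypothetical

def _post_json(url: str, payload: dict, timeout: float = 10.0) -> Optional[dict]:
    """Send a JSON request and return the parsed JSON reply, or None on failure."""
    data = json.dumps(payload).encode("utf-8")
    req = request.Request(url, data=data,
                          headers={"Content-Type": "application/json"})
    try:
        with request.urlopen(req, timeout=timeout) as resp:
            return json.loads(resp.read().decode("utf-8"))
    except (OSError, ValueError):
        return None

def request_second_reply(question: str, via_server: bool = True) -> Optional[str]:
    """Ask the parent's terminal (directly, or forwarded by the preset server)
    for the reply to a question that matched no preset system question."""
    url = SERVER_URL if via_server else PARENT_TERMINAL_URL
    response = _post_json(url, {"question": question})
    return response.get("reply") if response else None
```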
Further, after the second reply information is obtained, in order to express a realistic interaction effect between the target user and the target virtual character, the second reply information is expression-processed based on the virtual attribute information corresponding to the target virtual character, and target virtual character interaction information matching that virtual attribute information is generated, so that the information is output with the target virtual character as its carrier, in the identity of the target virtual character. For the specific expression processing of the second reply information, reference may be made to the expression processing of the first reply information described above, which is not repeated here.
In addition, in order to avoid the target user failing to obtain timely feedback from the target virtual character because the request process for the second reply information and its expression processing take a relatively long time, the embodiment of the present application further includes: if the question information does not match any preset system question, outputting waiting prompt information corresponding to the target virtual character. For example, the virtual partner "rainbow" says, in its personalized tone, "Don't worry, let me think for a moment".
The second mode, as shown in fig. 3, includes:
step 301, receiving a second interaction starting instruction from a target real role terminal corresponding to a target user, wherein the second interaction starting instruction includes identification information of a target virtual role;
In step 301, the interaction start instruction may be a second interaction start instruction sent by the target real character terminal, and the second interaction start instruction carries identification information characterizing a target virtual character. In a practical application scenario, the second interaction start instruction may be an instruction by which the target real character initiates interaction with the target user via the target virtual character; for example, a father calling his son may select the virtual partner "storm" as the caller identity, so that although it is really the father calling, the child-watch terminal shows an incoming call from "storm", and the father can say, in the personality and identity of the virtual partner, things he wants to tell his son but finds inconvenient to say in his own identity.
It should be noted that the second interaction start instruction may be a real-time instruction from the target real character terminal, or an instruction preset at the target real character terminal. For example, a parent who is busy at work and has no time to remind the child at home to eat lunch may set a reminder for 11:30 at noon, recording the reminder content and the target virtual character in advance; the reminder content may be, for example, "lunch is in the refrigerator and can be eaten after heating it in the microwave for 3 minutes", and the target virtual character is set to "rainbow". At 11:30 at noon, the child-watch terminal then outputs incoming-call information from "rainbow".
Of course, a preset timed reminder task may also be output directly in the form of a call from the parent, directly outputting the reminder content pre-recorded by the parent.
Step 302, responding to an interaction starting instruction between a target user and a target virtual character, and outputting interaction starting prompt information corresponding to the target virtual character according to an interaction display rule corresponding to the target virtual character, wherein the target virtual character corresponds to the interaction starting instruction;
In step 302, in response to the second interaction start instruction, interaction start prompt information corresponding to the target virtual character is output. This prompt information is determined based on the interaction display rule corresponding to the target virtual character, and that rule is in turn determined from the interaction display rule corresponding to a real character; in other words, the interaction start prompt information for the target virtual character follows the same display rule as that for a real character, with only the identity-related content in the display replaced by the identity information of the target virtual character, which may include avatar information, nickname information, phone number information, and the like. For example, in a scenario where a parent initiates a voice call with the child via the virtual partner "rainbow", the display interface of an incoming call from "rainbow" is shown and the incoming-call ring tone or vibration prompt is output.
Step 303, establishing an interactive connection between the target user and the target virtual character based on the received feedback information of the interactive start instruction;
in step 303, when feedback information of an interaction start instruction (specifically, the second interaction instruction in this embodiment) is received, for example, in an application scenario in which a virtual partner "rainbow" dials a call to a child, the feedback information may be an answering operation of the call by the child, and further, an interactive connection between the target user and the target virtual character is established based on the feedback information, specifically, a call between the child and the "rainbow" is opened in the above example.
Step 304, obtaining information to be forwarded sent by a target real role terminal, wherein the interactive information comprises the information to be forwarded;
In step 304, after the interactive connection between the target user and the target virtual character is established, the information to be forwarded from the target real character terminal is obtained. For example, the information to be forwarded may be determined based on voice information or text information input in real time at the target real character terminal, or based on the reminder content of a timed reminder task preset at the target real character terminal. In a scenario where the target real character interacts with the target user via the target virtual character, for example a parent calling the child via the virtual partner "rainbow", the information to be forwarded may be the parent's voice input content or text input content.
And step 305, expression-processing the information to be forwarded based on the virtual attribute information of the target virtual character to generate corresponding target virtual character interaction information, and outputting the target virtual character interaction information through the target virtual character.
In step 305, in order to express the "real" interaction effect between the target user and the target virtual character, the information to be forwarded is expressed based on the virtual attribute information corresponding to the target virtual character, and target virtual character interaction information matching the virtual attribute information of the target virtual character is generated, so that the information is output with the target virtual character as an output carrier of the information and with the identity of the target virtual character. For a specific expression processing manner of the information to be forwarded, reference may be made to the above expression processing manner of the first reply information, which is not described herein again.
The third mode, as shown in fig. 4, includes:
step 401, monitoring an interaction task schedule, wherein the interaction task schedule comprises at least one interaction task schedule and a target virtual role corresponding to each interaction task schedule, and the interaction task schedule comprises but is not limited to a learning schedule, a sports schedule and a living habit development schedule;
in step 401, an interaction task schedule may be preset in the child watch terminal, different types of interaction task schedules including a learning schedule, a sports schedule, a lifestyle habit development schedule, and the like may be preset in the interaction task schedule, and each task is provided with a target virtual role correspondingly, and the target virtual role reminds, assists or accompanies a target user to complete the tasks in the schedule, for example, a virtual partner "storm" accompanies a child to complete a sports task, and a virtual partner "rainbow" accompanies a child to complete a learning task.
In addition, the interaction task schedule may be created by a user, or may be automatically generated based on attributes of the target user. Specifically, before step 401, the embodiment of the application may further include step 401-1 or step 401-2.
Step 401-1, according to the plan creation input data, determining at least one interaction task plan, a target virtual role corresponding to each interaction task plan, and virtual attribute information corresponding to each target virtual role, and determining an interaction task plan table.
In the above embodiment, the user may enter a plan entry function of the child watch, and enter plan creation input data, so as to create an interaction task plan table, where the interaction task plan table includes at least one interaction task plan, and further each interaction task plan corresponds to a specific virtual role, and the virtual role corresponding to each interaction task plan may be specified by the user, or may be automatically assigned by the system based on task attributes of the task plan entered by the user and user attributes of the target user, for example, the virtual role corresponding to the sports task may be a virtual partner "storm" that is good at sports.
Step 401-2, determining at least one interaction task plan, a target virtual role corresponding to each interaction task plan, and virtual attribute information corresponding to each target virtual role according to the user attribute information of the target user and a preset interaction task plan database, and determining an interaction task plan table, wherein the interaction task plan is matched with the user attribute information, the target virtual role is matched with the task attribute information corresponding to the interaction task plan and the user attribute information, and the task attribute information comprises a task type and/or a task execution scene and/or task difficulty.
In the above embodiment, the user attribute information includes, but is not limited to, at least one of gender attribute information, age attribute information, grade attribute information, preference attribute information, personality attribute information, skill attribute information, and growth attribute information corresponding to the target user. The preset interaction task plan database is established from the user attribute information in combination with big data or an expert system; for example, big-data statistical analysis may show that users with attribute set 1 (such as first grade of primary school, girl, lively personality) tend to select schedule 1, while users with attribute set 2 (such as kindergarten middle class, introverted personality) tend to select schedule 2, so the schedule matching the target user's attribute information to the highest degree can be looked up in the preset interaction task plan database and recommended, or an interaction task schedule can be recommended to the target user according to the plan recommendation pre-stored in the database by the expert system for users with those attributes. In addition, the schedule recommended or generated by the system can be modified to obtain a task schedule better suited to the target user, so that the target user completes the overall tasks of the schedule accompanied by the virtual character and grows up healthily and happily. Furthermore, the choice of target virtual character can be determined based on task attributes and user attributes: for example, the virtual character "storm" has strong sports attributes, so "storm" is selected to accompany the child in sports; for a competitive child, such a companion can stimulate the child's athletic potential, whereas for a child who is weak at sports and tends to be withdrawn, it could easily cause the child to lose confidence.
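Purely as an illustration of matching user attribute information against a preset interaction task plan database, the following sketch scores hypothetical plan templates by how many attributes they share with the target user and recommends the best match; the template fields and attribute names are assumptions, not part of this application:

```python
# Hypothetical preset interaction task plan database: each template records
# the user attributes it suits and the virtual character bound to each task.
PLAN_DATABASE = [
    {
        "name": "schedule_1",
        "suits": {"grade": "primary-1", "personality": "lively"},
        "tasks": [
            {"task": "recite ancient poems", "character": "rainbow"},
            {"task": "rope skipping x200",   "character": "storm"},
        ],
    },
    {
        "name": "schedule_2",
        "suits": {"grade": "kindergarten", "personality": "introverted"},
        "tasks": [{"task": "picture-book reading", "character": "rainbow"}],
    },
]

def recommend_plan(user_attributes: dict) -> dict:
    """Pick the plan template whose attribute description matches the user best."""
    def score(template: dict) -> int:
        return sum(1 for key, value in template["suits"].items()
                   if user_attributes.get(key) == value)
    return max(PLAN_DATABASE, key=score)

# Example: a lively first-grade girl gets schedule_1 recommended.
plan = recommend_plan({"grade": "primary-1", "personality": "lively"})
print(plan["name"])
```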
Step 402, when a trigger condition corresponding to any interactive task plan in the interactive task plan table is achieved, generating a third interactive starting instruction corresponding to any interactive task plan, wherein the third interactive starting instruction comprises a target virtual role corresponding to any interactive task plan;
step 403, responding to an interaction start instruction between a target user and a target virtual character, and outputting interaction start prompt information corresponding to the target virtual character according to an interaction display rule corresponding to the target virtual character, wherein the target virtual character corresponds to the interaction start instruction, and the interaction display rule corresponding to the target virtual character is determined based on the interaction display rule corresponding to the real character;
In steps 402 to 403, each interaction task plan in the interaction task schedule further corresponds to a trigger condition. For example, the interaction task schedule includes a rope-skipping task plan whose trigger condition is 4:00 in the afternoon; when the time is detected to have reached 4:00 pm, a third interaction start instruction corresponding to the rope-skipping task plan is generated, the third interaction start instruction including the virtual partner "storm" corresponding to that plan, and the child-watch terminal responds to the third interaction start instruction when it obtains it. Specifically, the corresponding interaction start prompt information is output according to the interaction display rule corresponding to the target virtual character, for example displaying a "storm" incoming-call interface and outputting the "storm" ring tone; the prompt may also be output according to both the interaction display rule of the target virtual character and the specific task type, for example directly outputting a prompt message from the virtual partner "storm" inviting the child to complete the rope-skipping task together.
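A minimal sketch of the trigger-condition monitoring in step 402 is shown below; the schedule contents, the instruction fields, and the one-shot polling function are assumptions used only to illustrate how a third interaction start instruction could be generated when a trigger time is reached:

```python
from datetime import datetime
from typing import Optional

# Hypothetical interaction task schedule: trigger time -> (task, character).
TASK_SCHEDULE = {
    "16:00": {"task": "rope skipping x200",      "character": "storm"},
    "20:00": {"task": "recite 2 ancient poems",  "character": "rainbow"},
}

def poll_schedule_once(now: datetime) -> Optional[dict]:
    """Return a third interaction start instruction when a trigger condition is reached."""
    plan = TASK_SCHEDULE.get(now.strftime("%H:%M"))
    if plan is None:
        return None
    return {
        "type": "third_interaction_start",
        "task": plan["task"],
        "target_virtual_character": plan["character"],
    }

# Example: at 16:00 the rope-skipping instruction carrying "storm" is generated.
instruction = poll_schedule_once(datetime(2020, 9, 30, 16, 0))
```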
Step 404, establishing interactive connection between the target user and the target virtual role based on the received feedback information of the interactive starting instruction;
in step 404, in the above exemplary scenario, the feedback information may be a signal to turn on an incoming call to a virtual buddy, or a signal to accept a rope skipping task input by the target user.
Step 405, acquiring an interaction task corresponding to the third interaction starting instruction, wherein the interactive information comprises the interaction task;
step 406, outputting virtual execution information of the target virtual character on the interaction task based on the virtual attribute information, acquiring real execution information of the target user on the interaction task, and correcting the virtual execution information in real time according to the real execution information, wherein the target virtual character interaction information comprises the virtual execution information;
in the above embodiment, after the interactive connection between the target user and the target virtual character is established, the interactive task corresponding to the third interactive start instruction is obtained, the task corresponding to the target virtual character in the interactive task is analyzed, and the virtual execution information corresponding to the task is output. For example, in the rope skipping task, the virtual partner "storm" outputs voice prompt information "ready to finish the rope skipping task bar together with me, and after the countdown is finished," storm "jumps with the child, video information is collected by a camera or rope skipping data is sensed by a sensor, for example, 200 rope skipping tasks are performed, the speed of the child is slowed down or stopped when the child jumps to 150," storm "can output voice information" refuels and then "keep fast", or the child can finish 200 rope skipping, "storm" can accelerate rope skipping, and the child is stimulated to jump more by "action".
In the embodiment of the application, the output mode of the virtual execution information is not limited to voice output, video output, and the like; the virtual execution information may also be output through a smart wearable device, for example VR glasses, so that the child can see the virtual execution information of the target virtual character after putting them on.
In addition, in the embodiment of the present application, in order to better achieve the purpose of accompanying the growth of the child, and in order to make the virtual character more realistic, in particular, the method may further include:
and step 406-1, outputting virtual execution information of the target virtual character on the interaction task based on the virtual attribute information and the user attribute information of the target user.
In the above embodiment, the virtual execution information may be determined based on both the virtual attribute information of the virtual character and the user attribute information of the target user. For example, if the target user is a first-grade primary school student and the virtual partner "rainbow" is accompanying the child in learning ancient poems, the ancient poems in "rainbow"'s knowledge base that match the first grade of primary school may be queried and read aloud. The user attribute information may further include a user knowledge base, so that when "rainbow" accompanies the child in learning ancient poems, the poems the child has already mastered are not learned again.
Further, the real execution information in step 406 may be obtained by:
step 406-2, acquiring real action information of the target user through a preset information acquisition device, wherein the real action information comprises voice information of the target user acquired through a voice acquisition device, and/or video information of the target user acquired through a video acquisition device, and/or fingerprint information of the target user acquired through a biometric identification device;
and 406-3, analyzing the real action information to obtain real execution information, wherein the real execution information is used for reflecting the execution condition of the target user on the interaction task.
In the above embodiment, during the target user's execution of the interaction task, data such as the target user's voice information, video information, and fingerprint information may be acquired through the voice acquisition device, video acquisition device, biometric identification device, and so on. For example, in an ancient-poetry learning task the target user's read-aloud information is collected through the voice acquisition device; in a dance task the target user's dance video information is collected through the video acquisition device; and in the question-answering stage of a mathematics learning task the target user's touch information on the display screen is collected through the biometric identification device. The preset information acquisition device may also be other devices not listed above, which those skilled in the art may choose according to the actual application scenario, and this is not limited here. After the real action information is collected, it is further analyzed to obtain real execution information reflecting how the target user is executing the interaction task. For example, the read-aloud information is converted from speech to text, the video acquisition data is analyzed to obtain the target user's body movement information, and the touch information is analyzed to determine which content on the display interface the target user selected.
Taking video acquisition data as an example, analyzing the real action information to obtain the real execution information may specifically include: step one, determining an action picture of the target user based on the video acquisition data; step two, generating bone point information corresponding to the action from the action picture, the bone point information including coordinate information, index information, and depth information; and step three, comparing the bone point information with the standard bone point information of the corresponding action in a unified coordinate system, and outputting the comparison result.
In a dance teaching task, a 3D camera is preferably used in step one to acquire images of the current user, and this 3D camera may be the camera carried in a smart device such as the child watch. Through the 3D camera, the acquired action picture of the target user preferably includes a color image and a depth image of the target user. After the action picture of the current user is obtained, bone point information corresponding to the action is preferably generated from the action picture in step two. Specifically, when generating the bone point information, bone point identification is preferably performed on the acquired color image of the current user so as to identify each bone point of the current user in the color image and obtain the coordinate information of each bone point. Human skeleton key points are important for describing human posture and predicting human behavior; human skeleton key point detection, i.e. pose estimation, mainly detects key points of the human body such as joints and facial features, and describes the human skeleton through these key points, so that different human actions can be represented by the skeleton key points. In this embodiment, step two preferably uses the ground-truth construction idea of Heatmap + Offsets to identify each bone point of the current user in the color image, and preferably distinguishes different bone points in different forms, for example by using different colors. After each bone point in the current user's color image has been identified, bone point information is extracted from the depth image, so that the depth information of each bone point is obtained based on the current user's depth image. After the bone point information of the current user is obtained, the bone point information obtained in step two is compared in step three with the standard bone point information of the corresponding action in a unified coordinate system, the comparison result is output, and the comparison result serves as the real execution information.
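As an illustration of step three only, the following sketch normalizes the acquired and standard bone points into a unified coordinate system (taking a hypothetical hip joint as the origin) and reports per-joint deviations as the comparison result; the data layout, joint names, and tolerance are assumptions, not the claimed implementation:

```python
import math

# A skeleton is represented here as {bone_name: (x, y, depth)}; the standard
# pose for the current dance move would come from the teaching resource.
def normalize(skeleton: dict) -> dict:
    """Translate the skeleton so the hip joint is the origin, putting the
    acquired pose and the standard pose in a unified coordinate system."""
    ox, oy, oz = skeleton["hip"]
    return {name: (x - ox, y - oy, z - oz)
            for name, (x, y, z) in skeleton.items()}

def compare_pose(acquired: dict, standard: dict, tolerance: float = 0.15) -> dict:
    """Compare acquired bone points with the standard bone points; the result
    (pass flag plus per-joint deviation) serves as the real execution information."""
    acquired, standard = normalize(acquired), normalize(standard)
    deviations = {}
    for name, (x, y, z) in standard.items():
        ax, ay, az = acquired.get(name, (x, y, z))
        deviations[name] = math.dist((ax, ay, az), (x, y, z))
    return {"pass": all(d <= tolerance for d in deviations.values()),
            "deviations": deviations}
```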
In any of the embodiments of the present application, the virtual execution information includes, but is not limited to, task presentation of the interactive task by the target virtual character, solution of a problem related to the interactive task, and guidance for execution of the interactive task.
In this embodiment, the virtual character can serve multiple roles in different interaction tasks; in a specific task the virtual character can give task demonstrations, answer questions, and provide task-execution guidance. For example, in an AI programming course learning task, the virtual execution information may be the target virtual character's programming demonstration or its answers to course questions; in a quilt-folding life-skill training task, the virtual execution information may be execution guidance for folding the quilt, for example outputting the voice prompt "first fold the quilt in half …".
Step 407, when the real execution information indicates that the target user completes the interaction task and/or the virtual execution information indicates that the target virtual character completes the interaction task, determining that the interaction task is completed;
In step 407, whether the interaction task is completed may be determined based on the real execution information and/or the virtual execution information. For example, if the rope-skipping task plan is to complete 200 skips, the task-completion signal may be that the target user completes 200 skips, so the rope-skipping task is judged complete once the child has completed 200 skips; alternatively, the task may be judged complete only when both the child and the virtual partner have completed 200 skips; or, after both have completed 200 skips, the virtual partner outputs virtual execution information summarizing how the rope-skipping task went and the task is then judged complete. The specific determination may be made according to the actual scenario and is not specifically limited here.
In addition, in some application scenarios, in order to provide positive guidance and motivation for the child, the virtual partner's ability may be set equal to or slightly lower than the child's ability, and the virtual execution information of the virtual partner determined accordingly. For example, for a rope-skipping task where the child's ability is about 100 skips, the virtual partner's ability may be set to about 95 skips, preventing the virtual partner's execution from differing too much from the child's actual execution, which would cause the child to lose confidence in and interest in the task.
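A minimal sketch of the completion judgment and of scaling the virtual partner's ability slightly below the child's, under assumed parameter names, might look like this:

```python
def partner_ability(child_ability: int, handicap: float = 0.05) -> int:
    """Set the virtual partner's ability equal to or slightly below the child's,
    e.g. a child who can do about 100 skips gets a partner set to about 95."""
    return int(child_ability * (1 - handicap))

def task_completed(child_count: int, partner_count: int,
                   target: int = 200, require_both: bool = False) -> bool:
    """Decide completion from the real and/or virtual execution information."""
    if require_both:
        return child_count >= target and partner_count >= target
    return child_count >= target

# Example: child ability 100 -> partner set to 95; 200 skips completes the task.
assert partner_ability(100) == 95
assert task_completed(200, 180)
```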
Step 408, issuing a first visualization certificate and/or a second visualization certificate corresponding to the interaction task according to a preset certificate issuing rule;
In any embodiment of the present application, specifically, the interaction task schedule further includes a first visualization credential corresponding to each interaction task plan and/or a second visualization credential corresponding to the interaction task schedule, and the first visualization credential and/or the second visualization credential are determined based on the plan creation input data and/or based on the user attribute information of the target user and/or the task attribute information corresponding to the interaction task plan; the first visualization credential and/or the second visualization credential are used to determine achievement information of the target user and/or to redeem virtual resources.
In the above embodiment, the completion progress of the task plans may be visualized: each task plan in the interaction task schedule corresponds to a first visualization credential, which is issued when that plan is completed, and the schedule as a whole corresponds to a second visualization credential, which is issued when all tasks in the schedule are completed. The first and second visualization credentials, which visually display task completion, may be determined based on the target user's attribute information and the task attribute information of the interaction task plan; for example, if the target user is a girl who likes jigsaw puzzles, a pink puzzle may be chosen as the visualization credential, and if the interaction task is a learning task, since learning is a process of gradually accumulating knowledge, a building may be chosen as the visualization credential, abstracting the learning process into the construction of a building. Besides visually displaying task progress, the first and second visualization credentials obtained by the target user can be used for achievement evaluation, and they can be redeemed for virtual resources, for example accumulating a certain number of visualization credentials to unlock game functions in the device.
For example, the interaction task schedule may be given a meaningful name and concrete image, such as the name and style of a skyscraper, so that completing tasks builds the skyscraper. The virtual partner can execute, through a background program, the plan customized by the user according to the times in the schedule: after the schedule is set, it is sent to the server; according to the task requirements, the server calls the accompanied child's terminal at the specified times via the virtual partner and leads the child through the tasks; each completed task builds one floor, and when all the tasks are completed the skyscraper is finished, giving the accompanied child a sense of achievement, while the controlling user terminal can visually see the progress of the tasks. For example, if today's first task is to recite 2 ancient poems with "rainbow" between 8:00 and 9:00 in the morning, then at 8:00 the background server triggers the call from "rainbow" in the schedule, the child at the accompanied terminal receives the incoming call from "rainbow", and after answering, "rainbow" can say to the child: "Baby, let's recite ancient poems together now", and the child then starts the task together with "rainbow". The answered call may be a voice call or a video call. The name of the schedule can be chosen according to the child's interests and the task types, for example virtual building, raising, or planting (a skyscraper, a wisdom tree, a pet dog, and so on). The visual representation of the schedule can be a picture recommended in the child watch or user-defined content; for example, a typical recommended skyscraper style is provided in the child watch, which the user can select directly, or the user can search according to their own preferences, choose a favorite picture, and the system then generates the task-visualization skyscraper style from that picture's style. In a practical application scenario, a larger goal (for example, improving by 10 places) may be treated as a skyscraper task: basic data (historical data, the child's current data, and the like) are input, the server sets a target value according to data analysis, and that target forms a large task composed of small daily tasks such as study, exercise, and housework, with one floor built for each completed task. Similarly, for a wisdom tree, the tree sprouts and grows taller following the growth process of a tree, growing a little each time a task is completed; for raising a small animal, the start of the tasks is the beginning of its life, and the animal grows with each completed task. In this way children can intuitively see the results of their efforts to complete tasks, which stimulates their desire to keep completing them.
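Only to make the credential bookkeeping concrete, the following sketch models the skyscraper example: each completed task plan adds a floor (a first visualization credential), and completing all plans finishes the skyscraper (the second visualization credential). The class and field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SkyscraperSchedule:
    """Hypothetical visualization of an interaction task schedule: every finished
    task plan adds one floor, and finishing all plans completes the skyscraper."""
    total_tasks: int
    floors_built: int = 0        # first visualization credentials issued so far
    finished: bool = False       # second visualization credential issued

    def complete_task(self) -> None:
        self.floors_built = min(self.floors_built + 1, self.total_tasks)
        if self.floors_built == self.total_tasks:
            self.finished = True  # issue the second visualization credential

# Example: a ten-task schedule; one finished task builds one floor.
schedule = SkyscraperSchedule(total_tasks=10)
schedule.complete_task()
print(schedule.floors_built, schedule.finished)
```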
And step 409, updating the user attribute information of the target user according to the real execution information.
In step 409, as the target user completes tasks, their knowledge keeps accumulating and their skills keep improving, so the user attributes corresponding to the target user also keep changing. For example, in the course of the rope-skipping task the child's skipping skill improves, say from 100 skips per minute originally to 120 skips per minute now, so the user's athletic-ability attribute can be updated. In addition, after the user attribute information is updated, the interaction tasks in the interaction task schedule can also be updated, for example changing the rope-skipping goal to 140 skips per minute, thereby helping the target user to keep making progress.
In any embodiment of the present application, specifically, the method further includes:
step 501, in the process of outputting the target virtual character interaction information, if a fourth interaction starting instruction is received, acquiring a first priority corresponding to the current interactive information and a second priority corresponding to the fourth interaction starting instruction;
and 502, when the second priority is higher than the first priority, responding to a fourth interactive starting instruction, and outputting interactive starting prompt information of the target virtual role corresponding to the fourth interactive starting instruction.
In the above embodiment, if a fourth interaction start instruction is received while the target virtual character interaction information is being output, the fourth interaction start instruction may be any of the interaction start instructions described above; the first priority corresponding to the current interactable information is compared with the second priority corresponding to the fourth interaction start instruction, and when the second priority is higher than the first priority, a response is made to the fourth interaction start instruction, prompting the user that a new interactive connection can be established. For example, if the trigger time of an ancient-poetry learning task is reached while the child is chatting with the virtual partner, and the priority of the ancient-poetry learning task is higher than that of the chat, then the interaction start prompt information corresponding to the ancient-poetry learning start instruction is output, prompting the child that it is time to learn ancient poetry.
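The priority comparison in steps 501 to 502 can be sketched, with an assumed priority table, as follows:

```python
# Hypothetical priority table; a larger number means a higher priority.
PRIORITIES = {"chat": 1, "timed_reminder": 2, "learning_task": 3}

def should_interrupt(current_interaction: str, new_instruction: str) -> bool:
    """Interrupt the current interaction only when the fourth interaction start
    instruction outranks the current interactable information."""
    first_priority = PRIORITIES.get(current_interaction, 0)
    second_priority = PRIORITIES.get(new_instruction, 0)
    return second_priority > first_priority

# Example: the ancient-poetry learning task interrupts an ongoing chat.
assert should_interrupt("chat", "learning_task")
```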
In any embodiment of the present application, specifically, the method further includes:
601, acquiring actual behavior information of a target user through a preset information acquisition device, wherein the actual behavior information comprises voice information of the target user acquired through a voice acquisition device and/or video information of the target user acquired through a video acquisition device;
step 602, inputting actual behavior information into a pre-constructed behavior recognition model to obtain a behavior identifier of an actual behavior included in the actual behavior information;
step 603, querying a preset education resource database to obtain education information corresponding to the behavior identifier;
step 604, outputting the educational information based on the virtual character attribute of the target virtual character currently interacting with the target user or the educational character attribute corresponding to the educational virtual character corresponding to the educational information.
In the above embodiment, the preset information acquisition device may also be used to collect the actual behavior information of the target user, so as to analyze the target user's actual behaviors such as speech and movement and judge whether they constitute a bad habit that needs correcting, and then output educational resources with the virtual character as the carrier according to the target user's bad habit. The actual behavior information may be the target user's voice information, video information, and so on; the voice information is converted to text, and the video information is analyzed for actual behavior (see the corresponding description of steps 406-2 and 406-3 above). A behavior recognition model (for example a semantic matching model) then matches the text conversion result to determine the corresponding identifier, or the identifier corresponding to the behavior comparison result is used (for example, if the comparison result is that the actual behavior information matches bad sitting posture, the identifier corresponding to bad sitting posture is determined), so that the corresponding education information is queried and output based on a preset educational virtual character or the target virtual character currently interacting with the target user. In a specific application scenario, if the education information is voice information, it should first be expression-processed so that the output education information matches the attribute information of the virtual character; if the education information is video information, it can be played directly. For the manner of expression-processing the education information, reference may be made to the expression processing of the first reply information described above, which is not repeated here.
In a practical application scenario, when the child needs it, or the parent considers that the child needs it, a segment of an animation with educational significance is played; for example, when the child is being picky about food, a story segment of an animation in which a character's feathers turn white because of picky eating can be played, guiding the child toward good eating habits. The triggering modes are as follows: the watch terminal may collect environmental parameters (such as pictures, video, and speech) for analysis and triggering, for example when the child says "I don't want to eat …" the watch terminal selects the corresponding animation to play according to the analysis result; playback may also be triggered based on the parent's input data, for example a voice input of "no picky eating", or the parent may search for the corresponding animation by text or voice input at the control terminal to trigger playback of the corresponding scene. In the preset animation education resource library, whole animations may be segmented in advance, a query index set for each segment, and specific words, speech, actions, and the like used as tags, so that animations in the resource library can be matched for playback based on the input behavior identifier.
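As a rough, assumption-laden sketch of steps 601 to 604, the fragment below stands in for the behavior recognition model with a simple keyword match, queries a hypothetical education resource database by behavior identifier, and outputs the education information either as expression-processed voice or as an animation segment; all names and resources are illustrative:

```python
from typing import Optional

# Hypothetical education resource database: behavior identifier -> resource.
EDUCATION_RESOURCES = {
    "picky_eating":        {"type": "animation", "clip": "clip_picky_eating.mp4"},
    "bad_sitting_posture": {"type": "voice", "text": "Sit up straight, baby!"},
}

def recognize_behavior(actual_behavior_text: str) -> Optional[str]:
    """Stand-in for the pre-constructed behavior recognition model: map the
    converted text of the actual behavior to a behavior identifier."""
    lowered = actual_behavior_text.lower()
    if "don't want to eat" in lowered or "picky" in lowered:
        return "picky_eating"
    return None

def output_education_info(actual_behavior_text: str, character_id: str) -> None:
    behavior_id = recognize_behavior(actual_behavior_text)
    if behavior_id is None:
        return
    resource = EDUCATION_RESOURCES[behavior_id]
    if resource["type"] == "voice":
        # Voice resources would be expression-processed with the character's
        # acoustic model before output; printing stands in for that step here.
        print(f"[{character_id}] says: {resource['text']}")
    else:
        print(f"Playing animation segment: {resource['clip']}")

output_education_info("I don't want to eat vegetables", "rainbow")
```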
Further, as a specific implementation of the method in fig. 1, an embodiment of the present application provides an interaction apparatus based on a virtual character, and as shown in fig. 5, the apparatus includes:
a starting prompt information output module 701, configured to respond to an interaction starting instruction between a target user and a target virtual character, and output interaction starting prompt information corresponding to the target virtual character according to an interaction display rule corresponding to the target virtual character, where the target virtual character corresponds to the interaction starting instruction;
an interaction establishing module 702, configured to establish an interaction connection between a target user and a target virtual character based on received feedback information for an interaction starting instruction;
the interaction information output module 703 is configured to obtain interactable information between the target user and the target virtual character, and output target virtual character interaction information corresponding to the interactable information based on the virtual attribute information of the target virtual character.
Optionally, the virtual attribute information includes at least one of sound attribute information, character attribute information, skill attribute information, and growth attribute information corresponding to the target virtual character.
Optionally, as shown in fig. 6, the interaction start instruction includes a first interaction start instruction, and the apparatus further includes: a first interaction instruction receiving module 704, configured to receive a first interaction start instruction input by the target user before the interaction start prompt information corresponding to the target virtual character is output according to the interaction display rule corresponding to the target virtual character; and/or,
the interaction start instruction includes a second interaction start instruction, and the apparatus further includes: a second interaction instruction receiving module 705, configured to receive a second interaction start instruction from the target real character terminal corresponding to the target user before the interaction start prompt information corresponding to the target virtual character is output according to the interaction display rule corresponding to the target virtual character, where the second interaction start instruction includes identification information of the target virtual character; and/or,
the interaction initiating instruction comprises a third interaction initiating instruction, and the device further comprises: the third interaction instruction receiving module 706 is configured to obtain a third interaction start instruction generated after the trigger condition corresponding to the interactive information is achieved before the interaction start prompt information corresponding to the target virtual character is output according to the interaction display rule corresponding to the target virtual character.
Optionally, the first interaction start instruction is generated by triggering an interaction function corresponding to a target virtual character in a preset contact list, where the contact list includes identification information of the target virtual character and identification information of a real character, and the interaction function at least includes a call function and/or a chat function.
Optionally, as shown in fig. 6, when the interaction starting instruction includes a first interaction starting instruction, the interaction information output module 703 specifically includes:
the question matching unit 7031 is configured to obtain question information input by the target user, and query whether the question information matches a preset system question, where the interactive information includes the question information input by the target user;
a first reply information output unit 7032, configured to, if the question information matches the preset-system question, obtain first reply information corresponding to the question information, perform expression processing on the first reply information based on the virtual attribute information to generate corresponding target virtual character interaction information, and output the target virtual character interaction information through the target virtual character.
Optionally, the first reply information is determined based on a preset question and answer pair database and/or based on a network search result corresponding to the question information and/or based on a sensing result of a sensing device corresponding to the client.
Optionally, as shown in fig. 6, the interactive information output module 703 specifically further includes:
a second reply information output unit 7033, configured to, after querying whether the question information matches the preset system question, if the question information does not match the preset system question, request second reply information corresponding to the question information from a target real character terminal corresponding to the target user, perform expression processing on the second reply information based on the virtual attribute information to generate corresponding target virtual character interaction information, and output the target virtual character interaction information through the target virtual character.
Optionally, as shown in fig. 6, the second reply information output unit 7033 is specifically configured to: send the question information to the target real character terminal, and receive the second reply information fed back by the target real character terminal; or, send the question information to a preset server, and receive the second reply information fed back by the preset server, where the preset server is configured to determine and feed back the second reply information matched with the question information based on a server database, or to send the question information to the target real character terminal and forward the second reply information fed back by the target real character terminal.
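The routing performed by the second reply information output unit might look roughly like this sketch: the question either goes directly to the target real character terminal or to a preset server that answers from its own database or forwards the request. Both transport functions are stubs and purely hypothetical.

```python
def ask_real_character_terminal(question: str) -> str:
    # Stub: in practice this would be a network call to the real character's device.
    return f"(real character) Let me answer that myself: {question!r}"

def ask_preset_server(question: str, server_db: dict) -> str:
    # The preset server answers from its own database, or forwards the question
    # to the real character terminal and relays the second reply information.
    return server_db.get(question) or ask_real_character_terminal(question)

SERVER_DB = {"why is the sky blue": "Because air scatters blue light more than red light."}

print(ask_preset_server("why is the sky blue", SERVER_DB))
print(ask_preset_server("can I stay up late tonight", SERVER_DB))
```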
Optionally, as shown in fig. 6, when the interaction starting instruction includes a second interaction starting instruction, the interaction information output module 703 specifically includes:
a to-be-forwarded information obtaining unit 7034, configured to obtain to-be-forwarded information sent by the target real character terminal, where the interactable information includes the to-be-forwarded information;
and a to-be-forwarded information output unit 7035, configured to perform expression processing on the to-be-forwarded information based on the virtual attribute information of the target virtual character to generate corresponding target virtual character interaction information, and output the target virtual character interaction information through the target virtual character.
Optionally, as shown in fig. 6, when the interaction starting instruction includes a third interaction starting instruction, the interaction information output module 703 specifically includes:
an interaction task obtaining unit 7036, configured to obtain an interaction task between a target user and a target virtual character, where the interactable information includes the interaction task;
the interaction task execution unit 7037 is configured to output virtual execution information of the target virtual character on the interaction task based on the virtual attribute information, obtain real execution information of the target user on the interaction task, and modify the virtual execution information in real time according to the real execution information, where the target virtual character interaction information includes the virtual execution information;
an interaction task terminating unit 7038, configured to determine that the interaction task is completed when the real execution information indicates that the target user completes the interaction task and/or the virtual execution information indicates that the target virtual character completes the interaction task.
Optionally, the virtual execution information includes, but is not limited to, task demonstration of the interaction task by the target virtual character, resolution of questions related to the interaction task, and guidance for execution of the interaction task.
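A minimal sketch of the execution-and-correction loop described by units 7036 to 7038, assuming the user's real execution information arrives as a simple stream: the virtual character outputs demonstration and guidance, the real execution information is read back, and the virtual execution information is corrected step by step until completion is detected. The step counts and generator-style feed are illustrative assumptions.

```python
def real_execution_feed():
    """Stub stream of the target user's real execution information."""
    yield {"step": 1, "done": False, "pace": "slow"}
    yield {"step": 2, "done": False, "pace": "ok"}
    yield {"step": 3, "done": True, "pace": "ok"}

def run_interaction_task(task: str) -> None:
    virtual_state = {"demo_speed": "normal"}  # part of the virtual execution information
    for real_info in real_execution_feed():
        # correct the virtual execution information in real time from the real execution information
        if real_info["pace"] == "slow":
            virtual_state["demo_speed"] = "slower"
        print(f"[{task}] virtual demo at {virtual_state['demo_speed']} speed, user at step {real_info['step']}")
        if real_info["done"]:
            print(f"[{task}] interaction task completed")
            break

run_interaction_task("morning stretching")
```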
Optionally, not shown in the figure, the interaction task execution unit 7037 specifically includes:
a real action collecting subunit 70371, configured to collect real action information of the target user through a preset information collecting device, where the real execution information includes voice information of the target user collected through the voice collecting device, and/or video information of the target user collected through the video collecting device, and/or fingerprint information of the target user collected through the biometric device;
and the real execution information analyzing subunit 70372 is configured to analyze the real action information to obtain real execution information, where the real execution information is used to reflect an execution situation of the interaction task by the target user.
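A small sketch, assuming the voice, video and biometric collection devices can be abstracted as a single stub, of how collected real action information might be parsed into real execution information reflecting the user's progress. The parsing rule shown is deliberately simplistic and hypothetical.

```python
def collect_real_actions() -> dict:
    """Stand-in for the voice, video and biometric collection devices."""
    return {
        "voice": "three four five",  # the user counting out loud while exercising
        "video_frames": 42,          # number of frames captured of the user
        "fingerprint_ok": True,      # identity confirmation from the biometric device
    }

def parse_real_execution(raw: dict) -> dict:
    """Parse real action information into real execution information."""
    estimated_reps = len(raw["voice"].split())
    return {
        "verified_user": raw["fingerprint_ok"],
        "estimated_reps": estimated_reps,
        "on_camera": raw["video_frames"] > 0,
    }

print(parse_real_execution(collect_real_actions()))
```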
Optionally, not shown in the figure, the interaction task execution unit 7037 specifically includes:
a virtual execution information output subunit 70373, configured to output virtual execution information of the target virtual character for the interaction task based on the virtual attribute information and the user attribute information of the target user, where the user attribute information includes, but is not limited to, at least one of gender attribute information, age attribute information, grade attribute information, preference attribute information, personality attribute information, skill attribute information, and growth attribute information corresponding to the target user.
Optionally, as shown in fig. 6, the apparatus further includes:
and the user attribute updating module 707 is configured to update the user attribute information of the target user according to the real execution information after determining that the interaction task is completed.
Optionally, as shown in fig. 6, the apparatus further includes:
the schedule monitoring module 708 is configured to monitor an interaction task schedule before obtaining a third interaction start instruction generated after the trigger condition corresponding to the interactable information is fulfilled, where the interaction task schedule includes at least one interaction task plan and a target virtual character corresponding to each interaction task plan, and the interaction task plan includes, but is not limited to, a learning plan, a sports plan, and a living habit cultivation plan;
a third interaction instruction generating module 709, configured to generate a third interaction start instruction corresponding to any interaction task plan when the trigger condition corresponding to that interaction task plan in the interaction task schedule is fulfilled, where the third interaction start instruction includes the target virtual character corresponding to that interaction task plan;
and the interaction task obtaining unit is specifically configured to obtain the interaction task corresponding to the third interaction start instruction.
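The schedule monitoring and third-instruction generation just described could be approximated as below: each plan in the interaction task schedule carries a trigger condition (here simply a due time) and a target virtual character, and a third interaction starting instruction is generated when the condition is fulfilled. The schedule format and the time-based trigger are assumptions.

```python
from datetime import datetime, time

# Illustrative interaction task schedule: each plan names its trigger time,
# its target virtual character and the task content (all values hypothetical).
SCHEDULE = [
    {"plan": "learning plan", "trigger_at": time(19, 0), "character": "owl_tutor", "task": "spelling practice"},
    {"plan": "sports plan", "trigger_at": time(7, 30), "character": "panda_coach", "task": "rope skipping"},
]

def monitor_schedule(now: datetime) -> list:
    """Generate a third interaction starting instruction for each plan whose
    trigger condition has just been fulfilled (time match, illustrative only)."""
    instructions = []
    for plan in SCHEDULE:
        if now.hour == plan["trigger_at"].hour and now.minute == plan["trigger_at"].minute:
            instructions.append({"type": "third", "character": plan["character"], "task": plan["task"]})
    return instructions

print(monitor_schedule(datetime(2020, 9, 30, 7, 30)))
```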
Optionally, as shown in fig. 6, the apparatus further includes:
the first schedule creating module 710 is configured to, before the interaction task schedule is monitored, determine at least one interaction task plan, the target virtual character corresponding to each interaction task plan, and the virtual attribute information corresponding to each target virtual character according to plan creation input data, and determine the interaction task schedule.
Optionally, as shown in fig. 6, the apparatus further includes:
the second schedule creation module 711 is configured to, before the interaction task schedule is monitored, determine at least one interaction task plan, the target virtual character corresponding to each interaction task plan, and the virtual attribute information corresponding to each target virtual character according to the user attribute information of the target user and a preset interaction task plan database, and determine the interaction task schedule, where the interaction task plan is matched with the user attribute information, the target virtual character is matched with the task attribute information corresponding to the interaction task plan and the user attribute information, and the task attribute information includes a task type and/or a task execution scenario and/or a task difficulty.
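One plausible reading of the second schedule creation module, sketched under the assumption of a tiny in-memory plan database: plans are selected by matching their task attribute information (type, difficulty) against the user attribute information, and each selected plan already names a virtual character that fits the task. The matching rule is an illustrative simplification.

```python
# Illustrative preset interaction task plan database; each plan carries task
# attribute information and a virtual character suited to it (all hypothetical).
PLAN_DATABASE = [
    {"plan": "reading plan", "task_type": "learning", "difficulty": 1, "character": "owl_tutor"},
    {"plan": "jogging plan", "task_type": "sports", "difficulty": 2, "character": "panda_coach"},
    {"plan": "piano plan", "task_type": "learning", "difficulty": 3, "character": "fox_musician"},
]

def build_schedule(user_attrs: dict) -> list:
    """Pick plans whose task attribute information matches the user attribute
    information (simplistic matching rule, for illustration only)."""
    return [
        plan for plan in PLAN_DATABASE
        if plan["task_type"] in user_attrs["preferences"] and plan["difficulty"] <= user_attrs["grade"]
    ]

user = {"age": 7, "grade": 2, "preferences": ["learning", "sports"]}
print(build_schedule(user))  # reading + jogging; the piano plan is still too difficult
```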
Optionally, the interaction task schedule further includes a first materialized credential corresponding to each interaction task plan and/or a second materialized credential corresponding to the interaction task schedule, where the first materialized credential and/or the second materialized credential are determined based on the plan creation input data and/or based on the user attribute information of the target user and/or the task attribute information corresponding to the interaction task plan;
as shown in fig. 6, the apparatus further includes:
and the credential issuing module 712 is configured to, after determining that the interaction task is completed, issue the first materialized credential and/or the second materialized credential corresponding to the interaction task according to a preset credential issuing rule.
Optionally, the first materialized credential and/or the second materialized credential are used to determine achievement information of the target user and/or to redeem virtual resources.
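The credential issuing module could behave roughly like this sketch: when an interaction task is determined to be completed, a first materialized credential for that plan is issued, and once every plan in the schedule is completed the second materialized credential for the whole schedule is issued as well. The rule shown is only one possible preset credential issuing rule.

```python
def issue_credentials(completed_plans: set, all_plans: set) -> list:
    """Apply a simple preset credential issuing rule (illustrative only)."""
    credentials = [f"first materialized credential: {plan}" for plan in sorted(completed_plans)]
    if completed_plans == all_plans:
        credentials.append("second materialized credential: whole schedule completed")
    return credentials

plans = {"reading plan", "jogging plan"}
print(issue_credentials({"reading plan"}, plans))
print(issue_credentials({"reading plan", "jogging plan"}, plans))
```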
Optionally, as shown in fig. 6, the apparatus further includes:
a fourth interaction instruction receiving module 713, configured to, if a fourth interaction start instruction is received in the process of outputting the target virtual character interaction information, obtain a first priority corresponding to the current interactable information and a second priority corresponding to the fourth interaction start instruction;
and the fourth interaction instruction prompt module 714 is configured to, when the second priority is higher than the first priority, respond to the fourth interaction start instruction and output interaction start prompt information of the target virtual character corresponding to the fourth interaction start instruction.
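A minimal sketch of the priority check performed by modules 713 and 714, assuming priorities are plain integers: an incoming fourth interaction starting instruction preempts the current interaction only when its priority is strictly higher.

```python
def handle_fourth_instruction(current_priority: int, incoming_priority: int) -> str:
    """Decide whether a fourth interaction starting instruction preempts the
    interaction whose information is currently being output (illustrative)."""
    if incoming_priority > current_priority:
        return "output the start prompt for the new target virtual character"
    return "keep outputting the current target virtual character interaction information"

print(handle_fourth_instruction(current_priority=1, incoming_priority=3))
print(handle_fourth_instruction(current_priority=3, incoming_priority=1))
```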
Optionally, as shown in fig. 6, the apparatus further includes:
the actual behavior acquisition module 715 is configured to acquire actual behavior information of the target user through a preset information acquisition device, where the actual behavior information includes voice information of the target user acquired through a voice acquisition device and/or video information of the target user acquired through a video acquisition device;
a behavior identifier determining module 716, configured to input actual behavior information into a pre-constructed behavior recognition model, so as to obtain a behavior identifier of an actual behavior included in the actual behavior information;
the education information query module 717 is used for querying a preset education resource database to obtain education information corresponding to the behavior identifier;
an education information output module 718 configured to output education information based on a virtual character attribute of a target virtual character currently interacting with the target user or an education character attribute corresponding to an education virtual character corresponding to the education information.
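These last four modules form a small pipeline: collect actual behavior, recognize it, look up education information for the recognized behavior identifier, and voice that information through a virtual character. The sketch below mocks the pre-constructed behavior recognition model with a keyword lookup; the education resource database and all identifiers are hypothetical.

```python
# Hypothetical education resource database keyed by behavior identifier.
EDUCATION_DB = {
    "throwing_litter": "Let's pick that up and put it in the bin together.",
    "sharing_toys": "Sharing with friends is wonderful, well done!",
}

def recognize_behavior(actual_behavior: dict) -> str:
    """Stand-in for the pre-constructed behavior recognition model."""
    summary = actual_behavior.get("video_summary", "")
    return "throwing_litter" if "litter" in summary else "sharing_toys"

def output_education_info(actual_behavior: dict, character: str) -> str:
    behavior_id = recognize_behavior(actual_behavior)
    education_info = EDUCATION_DB[behavior_id]
    # expression processing would adapt the wording to the character's attributes here
    return f"[{character}] {education_info}"

print(output_education_info({"video_summary": "child drops litter on the floor"}, "panda_coach"))
```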
It should be noted that other corresponding descriptions of the functional units related to the virtual character-based interaction apparatus provided in the embodiment of the present application may refer to corresponding descriptions in the methods in fig. 1 to fig. 4, and are not described herein again.
Based on the method shown in fig. 1 to 4, correspondingly, the present application further provides a storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the virtual character-based interaction method shown in fig. 1 to 4.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, or the like) and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the implementation scenarios of the present application.
Based on the method shown in fig. 1 to 4 and the virtual device embodiment shown in fig. 5 to 6, in order to achieve the above object, the present application further provides a computer device, which may specifically be a personal computer, a server, a network device, and the like, where the computer device includes a storage medium and a processor; a storage medium for storing a computer program; a processor for executing a computer program to implement the virtual character-based interaction method as described above with reference to fig. 1 to 4.
Optionally, the computer device may also include a user interface, a network interface, a camera, Radio Frequency (RF) circuitry, sensors, audio circuitry, a WI-FI module, and so forth. The user interface may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface may also include a USB interface, a card reader interface, and the like. The network interface may optionally include a standard wired interface or a wireless interface (e.g., a Bluetooth interface or a WI-FI interface).
Those skilled in the art will appreciate that the computer device structure provided in this embodiment does not limit the computer device, which may include more or fewer components, combine certain components, or adopt a different arrangement of components.
The storage medium may further include an operating system and a network communication module. The operating system is a program that manages and maintains the hardware and software resources of the computer device, and supports the operation of the information processing program as well as other software and/or programs. The network communication module is used to implement communication among the components in the storage medium, as well as communication with other hardware and software in the physical device.
Through the above description of the embodiments, those skilled in the art can clearly understand that the present application may be implemented by software plus a necessary general-purpose hardware platform, or by hardware. In either implementation, corresponding interaction starting prompt information is output according to the interaction display rule of the target virtual character corresponding to the interaction starting instruction, so as to prompt the target user to establish an interactive connection with the target virtual character; the interactive connection between the two is established after feedback information on the interaction starting instruction is received; interactable information between the target user and the target virtual character is then obtained, target virtual character interaction information which takes the target virtual character as a carrier and corresponds to the interactable information is generated based on the virtual attribute information of the target virtual character, and the target virtual character interaction information is output, thereby enabling the target user to interact with the target virtual character. Because the interaction starting prompt information is displayed according to an interaction display rule corresponding to a real character, the prompt received by the target user is the same as, or similar to, the prompt a real character would give; and because the interaction information of the target virtual character is output based on the virtual attribute information of the target virtual character, the interaction information carried by the target virtual character matches that character's attributes. This makes it easier for the target user to accept the virtual character and to regard it as a real companion independent of the device, so that the virtual character can better accompany the target user.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present application. Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above sequence numbers of the implementation scenarios of the present application are for description purposes only and do not represent their superiority or inferiority. What is disclosed above is only a few specific implementation scenarios of the present application; however, the present application is not limited thereto, and any variation conceivable to a person skilled in the art shall fall within the protection scope of the present application.

Claims (20)

1. An interaction method based on a virtual role, applied to a client, characterized by comprising the following steps:
responding to an interaction starting instruction between a target user and a target virtual role, and outputting interaction starting prompt information corresponding to the target virtual role according to an interaction display rule corresponding to the target virtual role, wherein the target virtual role corresponds to the interaction starting instruction, the interaction starting instruction comprises a third interaction starting instruction, the third interaction starting instruction is generated based on the achievement of a trigger condition corresponding to an interaction task, and the interaction task comprises a task of accompanying the target user through the target virtual role;
establishing interactive connection between the target user and the target virtual role based on the received feedback information of the interactive starting instruction;
acquiring interactable information between the target user and the target virtual character, processing the interactable information based on virtual attribute information of the target virtual character to generate target virtual character interaction information, and outputting the target virtual character interaction information by taking the target virtual character as a carrier, wherein when the interaction starting instruction is the third interaction starting instruction, the target virtual character interaction information comprises virtual execution information of executing the interaction task by the target virtual character corresponding to the virtual attribute information and the user attribute information of the target user, the virtual execution information is corrected in real time according to real execution information of the interaction task by the target user, the real execution information is obtained by analyzing real action information of the target user acquired by a preset information acquisition device, and the user attribute information includes, but is not limited to, at least one of gender attribute information, age attribute information, grade attribute information, preference attribute information, personality attribute information, skill attribute information, and growth attribute information corresponding to the target user.
2. The method according to claim 1, wherein the virtual attribute information includes at least one of sound attribute information, character attribute information, skill attribute information, and growth attribute information corresponding to the target virtual character.
3. The method according to claim 1, wherein the interaction starting instruction comprises a first interaction starting instruction, and before the interaction starting prompt information corresponding to the target virtual character is output according to the interaction display rule corresponding to the target virtual character, the method further comprises: receiving the first interaction starting instruction input by the target user; and/or,
the interaction starting instruction comprises a second interaction starting instruction, and before the interaction starting prompt information corresponding to the target virtual character is output according to the interaction display rule corresponding to the target virtual character, the method further comprises: receiving the second interaction starting instruction from a target real role terminal corresponding to the target user, wherein the second interaction starting instruction comprises identification information of the target virtual role; and/or,
the interaction starting instruction comprises the third interaction starting instruction, and before the interaction starting prompt information corresponding to the target virtual character is output according to the interaction display rule corresponding to the target virtual character, the method further comprises: acquiring the third interaction starting instruction generated after the trigger condition corresponding to the interactable information is achieved.
4. The method according to claim 3, wherein the first interaction initiation instruction is generated by triggering an interaction function corresponding to the target virtual character in a preset contact list, wherein the contact list includes identification information of the target virtual character and identification information of the real character, and the interaction function at least includes a call function and/or a chat function.
5. The method according to claim 3, wherein when the interaction initiation instruction includes the first interaction initiation instruction, the obtaining of interactable information between the target user and the target virtual character, and outputting target virtual character interaction information corresponding to the interactable information based on virtual attribute information of the target virtual character specifically includes:
acquiring problem information input by the target user, and inquiring whether the problem information is matched with a preset system problem, wherein the interactive information comprises the problem information input by the target user;
if the question information is matched with the preset system question, first reply information corresponding to the question information is obtained, the first reply information is expressed and processed based on the virtual attribute information to generate corresponding target virtual character interaction information, and the target virtual character interaction information is output through the target virtual character, wherein the first reply information is determined based on a preset question and answer database and/or is determined based on a network search result corresponding to the question information and/or is determined based on a sensing result of a sensing device corresponding to the client.
6. The method as claimed in claim 5, wherein after the querying whether the question information matches the preset system question, the method further comprises:
and if the question information is not matched with the preset system question, requesting second reply information corresponding to the question information from a target real role terminal corresponding to the target user, expressing the second reply information based on the virtual attribute information to generate corresponding target virtual role interaction information, and outputting the target virtual role interaction information through the target virtual role.
7. The method according to claim 6, wherein the requesting, from the target real character terminal corresponding to the target user, the second reply information corresponding to the question information specifically includes:
sending the question information to the target real role terminal, and receiving the second reply information fed back by the target real role terminal; or,
sending the question information to a preset server, and receiving the second reply information fed back by the preset server, wherein the preset server is configured to determine and feed back the second reply information matched with the question information based on a server database, or to send the question information to the target real character terminal and forward the second reply information fed back by the target real character terminal.
8. The method according to claim 3, wherein when the interaction initiation instruction includes the second interaction initiation instruction, the obtaining of interactable information between the target user and the target virtual character, and outputting target virtual character interaction information corresponding to the interactable information based on virtual attribute information of the target virtual character specifically includes:
acquiring information to be forwarded sent by the target real role terminal, wherein the interactive information comprises the information to be forwarded;
and performing expression processing on the information to be forwarded based on the virtual attribute information of the target virtual character to generate corresponding target virtual character interaction information, and outputting the target virtual character interaction information through the target virtual character.
9. The method according to claim 3, wherein when the interaction initiation instruction includes the third interaction initiation instruction, the obtaining of interactable information between the target user and the target virtual character, and outputting target virtual character interaction information corresponding to the interactable information based on virtual attribute information of the target virtual character specifically includes:
acquiring an interaction task between the target user and the target virtual character, wherein the interactive information comprises the interaction task;
outputting virtual execution information of the target virtual character for executing the interaction task based on the virtual attribute information, acquiring real execution information of the target user on the interaction task, and modifying the virtual execution information in real time according to the real execution information, wherein the target virtual character interaction information comprises the virtual execution information, and the virtual execution information comprises but is not limited to task demonstration of the interaction task by the target virtual character, solution of relevant problems of the interaction task, and execution guidance of the interaction task;
and when the real execution information indicates that the target user completes the interaction task and/or the virtual execution information indicates that the target virtual character completes the interaction task, determining that the interaction task is completed.
10. The method according to claim 9, wherein the obtaining of the actual execution information of the interaction task by the target user specifically includes:
acquiring real action information of the target user through a preset information acquisition device, wherein the real execution information comprises voice information of the target user acquired through a voice acquisition device, and/or video information of the target user acquired through a video acquisition device, and/or fingerprint information of the target user acquired through a biological identification device;
and analyzing the real action information to obtain the real execution information, wherein the real execution information is used for reflecting the execution condition of the target user on the interaction task.
11. The method according to claim 9, wherein the outputting the virtual execution information of the target virtual character on the interaction task based on the virtual attribute information specifically includes:
outputting virtual execution information of the target virtual role on the interaction task based on the virtual attribute information and the user attribute information of the target user;
after the determination that the interaction task is completed, the method further comprises:
and updating the user attribute information of the target user according to the real execution information.
12. The method according to claim 9, wherein before the obtaining of the third interaction initiation instruction generated after the trigger condition corresponding to the interactable information is fulfilled, the method further comprises:
monitoring an interaction task schedule, wherein the interaction task schedule comprises at least one interaction task plan and a target virtual character corresponding to each interaction task plan, and the interaction task plan comprises but is not limited to a learning plan, a sports plan and a living habit development plan;
when a trigger condition corresponding to any interactive task plan in the interactive task plan table is achieved, generating a third interactive starting instruction corresponding to that interactive task plan, wherein the third interactive starting instruction comprises a target virtual role corresponding to that interactive task plan;
the acquiring of the interaction task between the target user and the target virtual character specifically includes:
and acquiring the interaction task corresponding to the third interaction starting instruction.
13. The method of claim 12, wherein prior to monitoring the interaction task schedule, the method further comprises:
and according to plan creation input data, determining the at least one interactive task plan, the target virtual role corresponding to each interactive task plan and the virtual attribute information corresponding to each target virtual role, and determining the interactive task plan table.
14. The method of claim 12, wherein prior to monitoring the interaction task schedule, the method further comprises:
and determining the at least one interaction task plan, a target virtual role corresponding to each interaction task plan and virtual attribute information corresponding to each target virtual role according to the user attribute information of the target user and a preset interaction task plan database, and determining the interaction task plan table, wherein the interaction task plan is matched with the user attribute information, the target virtual role is matched with the task attribute information corresponding to the interaction task plan and the user attribute information, and the task attribute information comprises a task type and/or a task execution scene and/or task difficulty.
15. The method of claim 12, wherein the interaction task schedule further comprises a first materialized credential corresponding to each interaction task plan and/or a second materialized credential corresponding to the interaction task schedule, the first materialized credential and/or the second materialized credential being determined based on plan creation input data and/or based on the user attribute information of the target user and/or the task attribute information corresponding to the interaction task plan;
after the determination that the interaction task is completed, the method further comprises:
and issuing the first materialized credential and/or the second materialized credential corresponding to the interaction task according to a preset credential issuing rule, wherein the first materialized credential and/or the second materialized credential are used for determining achievement information of the target user and/or redeeming virtual resources.
16. The method of claim 1, further comprising:
in the process of outputting the target virtual character interaction information, if a fourth interaction starting instruction is received, acquiring a first priority corresponding to the current interactive information and a second priority corresponding to the fourth interaction starting instruction;
and when the second priority is higher than the first priority, responding to the fourth interaction starting instruction, and outputting interaction starting prompt information of the target virtual role corresponding to the fourth interaction starting instruction.
17. The method of claim 1, further comprising:
acquiring actual behavior information of the target user through a preset information acquisition device, wherein the actual behavior information comprises voice information of the target user acquired through a voice acquisition device and/or video information of the target user acquired through a video acquisition device;
inputting the actual behavior information into a pre-constructed behavior recognition model to obtain a behavior identifier of the actual behavior included in the actual behavior information;
inquiring a preset education resource database to obtain education information corresponding to the behavior identification;
outputting the education information based on the virtual character attribute of the target virtual character currently interacted with the target user or an education character attribute corresponding to an education virtual character corresponding to the education information.
18. An interaction device based on virtual roles, which is used for a client and is characterized by comprising:
the starting prompt information output module is used for responding to an interaction starting instruction between a target user and a target virtual role and outputting interaction starting prompt information corresponding to the target virtual role according to an interaction display rule corresponding to the target virtual role, wherein the target virtual role corresponds to the interaction starting instruction, the interaction starting instruction comprises a third interaction starting instruction, the third interaction starting instruction is generated based on the achievement of a trigger condition corresponding to an interaction task, and the interaction task comprises a task of accompanying the target user through the target virtual role;
the interaction establishing module is used for establishing the interactive connection between the target user and the target virtual role based on the received feedback information of the interaction starting instruction;
an interaction information output module, configured to obtain interactable information between the target user and the target virtual character, process the interactable information based on virtual attribute information of the target virtual character to generate target virtual character interaction information, and output the target virtual character interaction information with the target virtual character as a carrier, wherein, when the interaction starting instruction is the third interaction starting instruction, the target virtual character interaction information includes virtual execution information of the target virtual character executing the interaction task corresponding to the virtual attribute information and the user attribute information of the target user, the virtual execution information is modified in real time according to real execution information of the interaction task by the target user, the real execution information is obtained by analyzing real action information of the target user acquired by a preset information acquisition device, and the user attribute information includes, but is not limited to, at least one of gender attribute information, age attribute information, grade attribute information, preference attribute information, personality attribute information, skill attribute information, and growth attribute information corresponding to the target user.
19. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the virtual character-based interaction method of any one of claims 1 to 17.
20. A computer device comprising a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor, wherein the processor implements the avatar-based interaction method of any of claims 1 to 17 when executing the computer program.
CN202011056375.1A 2020-09-30 2020-09-30 Interaction method and device based on virtual role, storage medium and computer equipment Active CN112199002B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111042523.9A CN113760142A (en) 2020-09-30 2020-09-30 Interaction method and device based on virtual role, storage medium and computer equipment
CN202011056375.1A CN112199002B (en) 2020-09-30 2020-09-30 Interaction method and device based on virtual role, storage medium and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011056375.1A CN112199002B (en) 2020-09-30 2020-09-30 Interaction method and device based on virtual role, storage medium and computer equipment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202111042523.9A Division CN113760142A (en) 2020-09-30 2020-09-30 Interaction method and device based on virtual role, storage medium and computer equipment

Publications (2)

Publication Number Publication Date
CN112199002A CN112199002A (en) 2021-01-08
CN112199002B true CN112199002B (en) 2021-09-28

Family

ID=74007096

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202111042523.9A Pending CN113760142A (en) 2020-09-30 2020-09-30 Interaction method and device based on virtual role, storage medium and computer equipment
CN202011056375.1A Active CN112199002B (en) 2020-09-30 2020-09-30 Interaction method and device based on virtual role, storage medium and computer equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202111042523.9A Pending CN113760142A (en) 2020-09-30 2020-09-30 Interaction method and device based on virtual role, storage medium and computer equipment

Country Status (1)

Country Link
CN (2) CN113760142A (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113035315A (en) * 2021-01-22 2021-06-25 浙江工业大学 Children's intelligence rope skipping interactive system based on many terminals
CN113012300A (en) * 2021-04-02 2021-06-22 北京隐虚等贤科技有限公司 Immersive interactive content creation method and device and storage medium
CN113377200B (en) * 2021-06-22 2023-02-24 平安科技(深圳)有限公司 Interactive training method and device based on VR technology and storage medium
CN113362671B (en) * 2021-07-13 2022-07-01 中国人民解放军海军工程大学 Marine nuclear emergency drilling simulation system and drilling method
CN113658467A (en) * 2021-08-11 2021-11-16 岳阳天赋文化旅游有限公司 Interactive system and method for optimizing user behavior
CN113658213B (en) * 2021-08-16 2023-08-18 百度在线网络技术(北京)有限公司 Image presentation method, related device and computer program product
CN114247141B (en) * 2021-11-09 2023-07-25 腾讯科技(深圳)有限公司 Method, device, equipment, medium and program product for guiding tasks in virtual scene
CN114911381B (en) * 2022-04-15 2023-06-16 青岛海尔科技有限公司 Interactive feedback method and device, storage medium and electronic device
CN116027946B (en) * 2023-03-28 2023-07-18 深圳市人马互动科技有限公司 Picture information processing method and device in interactive novel

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101594577A (en) * 2008-05-30 2009-12-02 吴迪 Phone chat pet, system and implementation method
CN102169642A (en) * 2011-04-06 2011-08-31 李一波 Interactive virtual teacher system having intelligent error correction function
CN103500244A (en) * 2013-09-06 2014-01-08 雷路德 Virtual friend conversational system and method thereof
CN103869945A (en) * 2012-12-14 2014-06-18 联想(北京)有限公司 Information interaction method, information interaction device and electronic device
CN105808694A (en) * 2016-03-04 2016-07-27 上海携程商务有限公司 Online customer service response system and method
CN107053191A (en) * 2016-12-31 2017-08-18 华为技术有限公司 A kind of robot, server and man-machine interaction method
CN107621919A (en) * 2017-09-12 2018-01-23 广东小天才科技有限公司 A kind of interactive approach and user terminal for cultivating behavioural habits
CN110392446A (en) * 2019-08-22 2019-10-29 珠海格力电器股份有限公司 A kind of terminal and virtual assistant's server interact method
CN111290568A (en) * 2018-12-06 2020-06-16 阿里巴巴集团控股有限公司 Interaction method and device and computer equipment
CN111309886A (en) * 2020-02-18 2020-06-19 腾讯科技(深圳)有限公司 Information interaction method and device and computer readable storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2002225160A1 (en) * 2001-01-22 2002-07-30 Digital Animations Group Plc Interactive virtual assistant
WO2018111886A1 (en) * 2016-12-12 2018-06-21 Blue Goji Llc Targeted neurogenesis stimulated by aerobic exercise with brain function-specific tasks
CN108549481B (en) * 2018-03-29 2021-06-22 东方梦幻虚拟现实科技有限公司 Interaction method and system
CN111176537B (en) * 2019-11-01 2021-03-30 广东小天才科技有限公司 Man-machine interaction method in answering process and sound box
CN111078005B (en) * 2019-11-29 2024-02-20 恒信东方文化股份有限公司 Virtual partner creation method and virtual partner system


Also Published As

Publication number Publication date
CN112199002A (en) 2021-01-08
CN113760142A (en) 2021-12-07

Similar Documents

Publication Publication Date Title
CN112199002B (en) Interaction method and device based on virtual role, storage medium and computer equipment
JP6816925B2 (en) Data processing method and equipment for childcare robots
US11148296B2 (en) Engaging in human-based social interaction for performing tasks using a persistent companion device
CN109176535B (en) Interaction method and system based on intelligent robot
US11922934B2 (en) Generating response in conversation
CN109637207B (en) Preschool education interactive teaching device and teaching method
US20170221484A1 (en) Electronic personal interactive device
CN109710748B (en) Intelligent robot-oriented picture book reading interaction method and system
JP2019523714A (en) Multi-interaction personality robot
CN106557996A (en) second language teaching system and method
CN107480766B (en) Method and system for content generation for multi-modal virtual robots
CN107038197A (en) The content transmission and interaction of situation and activity-driven
CN111290568A (en) Interaction method and device and computer equipment
KR20180123037A (en) Information processing system, information processing apparatus, information processing method, and recording medium
US11393357B2 (en) Systems and methods to measure and enhance human engagement and cognition
Snodgrass Ethnography of online cultures
US20050288820A1 (en) Novel method to enhance the computer using and online surfing/shopping experience and methods to implement it
CN105388786B (en) A kind of intelligent marionette idol control method
JP2018186326A (en) Robot apparatus and program
CN111949773A (en) Reading equipment, server and data processing method
JP2006109966A (en) Sound game machine and cellular phone
Lim Emotions, behaviour and belief regulation in an intelligent guide with attitude
CN112138410B (en) Interaction method of virtual objects and related device
Strandbech Humanoid robots for health and welfare: on humanoid robots as a welfare technology used in interaction with persons with dementia
Schmitz Tangible interaction with anthropomorphic smart objects in instrumented environments

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant