CN117742167A - Control method and device of intelligent home system based on virtual image


Info

Publication number
CN117742167A
CN117742167A
Authority
CN
China
Prior art keywords
information
interaction
target object
intelligent home
condition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311799494.XA
Other languages
Chinese (zh)
Inventor
马月
唐杰
何文剑
冼海鹰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai, Zhuhai Lianyun Technology Co Ltd filed Critical Gree Electric Appliances Inc of Zhuhai
Priority claimed from CN202311799494.XA
Publication of CN117742167A
Legal status: Pending

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a control method and device of an intelligent home system based on an avatar. The method comprises the following steps: when generation of a wake-up condition is detected, triggering a target smart home device corresponding to the wake-up condition in the smart home system to start an interaction mode; acquiring first interaction information generated by the target object that triggered the wake-up condition; acquiring interaction form information of a virtual object corresponding to the target smart home device; and controlling the virtual object to be presented on the display component of the target smart home device according to the interaction form information, generating second interaction information based on the first interaction information, and sending the second interaction information to the virtual object so that the virtual object interacts with the target object based on the second interaction information. The invention solves the technical problems in the related art that interaction between the smart home system and the user is single, the user can only make simple use of the system, and it is difficult to meet the user's personalized needs.

Description

Control method and device of intelligent home system based on virtual image
Technical Field
The invention relates to the field of intelligent home system control, and in particular to a control method and device of an intelligent home system based on a virtual image.
Background
The intelligent home system is an important component of the Internet of Things and has become a trend in modern households. With the continuous development of technologies such as artificial intelligence, voice recognition, image processing and natural language processing, intelligent home systems can achieve more intelligent and humanized control and interaction, thereby improving the user experience. However, conventional intelligent home systems have certain limitations: interaction between the user and the system lacks an emotional connection, so users often merely operate the system without establishing any emotional engagement. In this case, it is often difficult for the intelligent home system to meet the personalized needs of the user, and the user experience suffers.
Aiming at the problems in the related art that interaction between the intelligent home system and the user is single, the user can only make simple use of the system, and it is difficult to meet the user's personalized needs, no effective solution has yet been proposed.
Disclosure of Invention
The embodiment of the invention provides a control method and a control device of an intelligent home system based on an avatar, which at least solve the technical problems that in the related art, interaction between the intelligent home system and a user is single, the user can only simply use the intelligent home system, and personalized requirements of the user are difficult to meet.
According to an aspect of the embodiments of the present invention, there is provided a control method of an avatar-based smart home system, including: when generation of a wake-up condition is detected, triggering a target smart home device corresponding to the wake-up condition in the smart home system to start an interaction mode, wherein the wake-up condition is obtained through machine learning training based on historical interaction data and is used for triggering a predetermined smart home device in the smart home system to enter the interaction mode; acquiring first interaction information generated by the target object that triggered the wake-up condition, wherein the first interaction information is information generated based on voice information issued by the target object or current state information of the target object; obtaining interaction form information of a virtual object corresponding to the target smart home device, wherein the interaction form information at least includes: sound characteristic information of the virtual object, face information of the virtual object, and equipment information of the virtual object; and controlling the virtual object to be presented on the display component of the target smart home device according to the interaction form information, generating second interaction information based on the first interaction information, and sending the second interaction information to the virtual object so that the virtual object interacts with the target object based on the second interaction information.
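As a rough illustration only, the steps above might be sketched as follows in Python; all device names, the `FORM_REGISTRY` table, and the message schema are invented for this sketch and are not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class InteractionForm:
    voice_profile: str  # sound characteristic information of the virtual object
    face: str           # face information of the virtual object
    outfit: str         # equipment (costume) information of the virtual object

# Hypothetical registry mapping each smart home device to its avatar's form.
FORM_REGISTRY = {
    "living_room_screen": InteractionForm("young_boy", "boy_face_v1", "casual"),
}

def derive_second_interaction(first: dict) -> str:
    """Placeholder mapping of first interaction information to a reply."""
    return {"leave": "Goodbye!", "return": "Welcome home!"}.get(first.get("event"), "Hello!")

def handle_wake_up(device_id: str, first: dict) -> dict:
    """Wake the device's avatar, fetch its interaction form, and build the
    render-and-speak command sent to the virtual object."""
    form = FORM_REGISTRY[device_id]           # obtain interaction form information
    reply = derive_second_interaction(first)  # second interaction information
    return {
        "device": device_id,
        "render": {"face": form.face, "outfit": form.outfit, "voice": form.voice_profile},
        "say": reply,
    }
```

The returned dictionary stands in for the command that renders the avatar on the display component and has it speak.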
Optionally, before the interaction form information of the virtual object corresponding to the target smart home device is acquired, the control method of the avatar-based smart home system further includes: acquiring a virtual form setting instruction sent by the target object, wherein the virtual form setting instruction carries avatar setting information of the virtual object for each smart home device in the smart home system, including the target smart home device; parsing the virtual form setting instruction to obtain the avatar setting information of the virtual object for each smart home device, wherein the avatar setting information represents the presentation mode of the virtual object when it interacts with the target object; generating a smart home device-avatar mapping relationship between each smart home device and the avatar setting information set for that device; and storing the smart home device-avatar mapping relationship.
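One way to realize the device-avatar mapping described above is a simple keyed store; the JSON encoding of the setting instruction and all field names below are assumptions for illustration, not details from the patent:

```python
import json

def parse_setting_instruction(instruction: str) -> dict:
    """Parse a (hypothetically JSON-encoded) virtual form setting instruction
    into a {device_id: avatar_settings} mapping."""
    return dict(json.loads(instruction))

def store_mapping(mapping: dict, store: dict) -> None:
    """Persist the smart-home-device -> avatar mapping (here: an in-memory store)."""
    store.update(mapping)

def lookup_form(store: dict, device_id: str) -> dict:
    """Use the target device as an index into the stored mapping."""
    return store[device_id]
```

A real system would persist the mapping rather than hold it in memory; the shape of the data is the point here.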
Optionally, the wake-up condition is generated in the following scenario: generating the wake-up condition when the target object is monitored to leave a control area of the intelligent home system; generating the wake-up condition when the target object is monitored to return to the control area; generating the wake-up condition when a voice control instruction of the target object is received; generating the wake-up condition when a predetermined change of the face of the target object is monitored; and generating the awakening condition when the hand of the target object is monitored to send out a preset action.
Optionally, acquiring the first interaction information generated based on the target object triggering the wake-up condition includes: generating the first interaction information based on a leaving action of the target object when the wake-up condition is generated in a situation that the target object leaves the control area of the smart home system; generating the first interaction information based on a return action of the target object when the wake-up condition is generated in a situation that the target object returns to the control area of the intelligent home system; generating the first interaction information based on the predetermined change when the wake-up condition is generated in a scenario in which the predetermined change occurs in the face of the target object; the first interaction information is generated based on the predetermined action when the wake-up condition is generated in a scenario in which the hand of the target object issues the predetermined action.
Optionally, generating second interaction information based on the first interaction information includes: generating the second interaction information for bidding farewell to the target object when the first interaction information indicates that the target object leaves the control area; generating the second interaction information for welcoming the target object back to the control area when the first interaction information indicates that the target object returns to the control area; generating the second interaction information for playing audio or video corresponding to the predetermined change when the first interaction information indicates that the predetermined change occurs to the face of the target object; and generating the second interaction information of the voice or action corresponding to the predetermined action when the first interaction information indicates that the predetermined action occurs on the hand of the target object.
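The four cases above amount to a small dispatch on the event type. A hedged sketch (event names and response payloads are invented for illustration):

```python
def second_interaction(first: dict) -> dict:
    """Derive second interaction information from first interaction information."""
    event = first["event"]
    if event == "leave":        # target object leaves the control area
        return {"type": "speech", "text": "See you later!"}
    if event == "return":       # target object returns to the control area
        return {"type": "speech", "text": "Welcome back!"}
    if event == "face_change":  # predetermined change of the target object's face
        return {"type": "media", "action": "play_soothing_audio"}
    if event == "gesture":      # predetermined action of the target object's hand
        return {"type": "motion", "action": first.get("gesture", "unspecified")}
    raise ValueError(f"unknown event: {event!r}")
```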
Optionally, obtaining interaction form information of the virtual object corresponding to the target smart home device includes: and taking the target intelligent home equipment as an index, and searching the interaction form information corresponding to the target intelligent home equipment in the intelligent home equipment-virtual image mapping relation.
Optionally, controlling the virtual object to be presented in the display component of the target smart home device according to the interaction form information includes: determining equipment of the virtual object based on the interactive form information; and controlling the virtual object to be presented in the display component in the facial expression corresponding to the equipment and the second interaction information.
Optionally, the control method of the avatar-based smart home system further includes: and in the process of controlling the virtual object to be presented on the display component in the facial expression corresponding to the equipment and the second interaction information, controlling the virtual object to interact with the target object according to the sound characteristic information corresponding to the second interaction information.
According to another aspect of the embodiments of the present invention, there is also provided a control device for an avatar-based smart home system, including: a triggering unit, configured to trigger, when generation of a wake-up condition is detected, a target smart home device corresponding to the wake-up condition in the smart home system to start an interaction mode, wherein the wake-up condition is obtained through machine learning training based on historical interaction data and is used for triggering a predetermined smart home device in the smart home system to enter the interaction mode; a first acquisition unit, configured to acquire first interaction information generated by the target object that triggered the wake-up condition, wherein the first interaction information is information generated based on voice information issued by the target object or current state information of the target object; a second obtaining unit, configured to obtain interaction form information of a virtual object corresponding to the target smart home device, where the interaction form information at least includes: sound characteristic information of the virtual object, face information of the virtual object, and equipment information of the virtual object; and a first control unit, configured to control the virtual object to be presented on the display component of the target smart home device according to the interaction form information, generate second interaction information based on the first interaction information, and send the second interaction information to the virtual object so that the virtual object interacts with the target object based on the second interaction information.
Optionally, the control device of the avatar-based smart home system further includes: a third obtaining unit, configured to obtain, before obtaining interaction form information of a virtual object corresponding to the target smart home device, a virtual form setting instruction sent by the target object, where the virtual form setting instruction carries virtual image setting information of each smart home device of the target smart home device that the virtual object includes in the smart home system; the analysis unit is used for analyzing the virtual form setting instruction to obtain the virtual image setting information of the virtual object in each intelligent household device, wherein the virtual image setting information represents the presentation mode of the virtual object when the virtual object interacts with the target object; a first generation unit for generating an intelligent home device-avatar mapping relationship between each of the intelligent home devices and the avatar setting information set for the intelligent home device; and the storage unit is used for storing the intelligent home equipment-virtual image mapping relation.
Optionally, the control device further includes: a second generation unit, configured to generate the wake-up condition when the target object is detected leaving the control area of the smart home system; a third generation unit, configured to generate the wake-up condition when the target object is detected returning to the control area; a fourth generation unit, configured to generate the wake-up condition when a voice control instruction of the target object is received; a fifth generation unit, configured to generate the wake-up condition when a predetermined change of the face of the target object is detected; and a sixth generation unit, configured to generate the wake-up condition when the hand of the target object is detected making a predetermined action.
Optionally, the first acquisition unit includes: the first generation module is used for generating the first interaction information based on the leaving action of the target object when the wake-up condition is generated under the condition that the target object leaves the control area of the intelligent home system; the second generation module is used for generating the first interaction information based on the return action of the target object when the wake-up condition is generated under the condition that the target object returns to the control area of the intelligent home system; a third generation module configured to generate the first interaction information based on the predetermined change when the wake-up condition is generated in a scenario in which the predetermined change occurs in the face of the target object; and the fourth generation module is used for generating the first interaction information based on the preset action when the wake-up condition is generated under the condition that the hand of the target object emits the preset action.
Optionally, the first control unit includes: a fifth generation module, configured to generate the second interaction information for bidding farewell to the target object when the first interaction information indicates that the target object leaves the control area; a sixth generation module, configured to generate the second interaction information for welcoming the target object back to the control area when the first interaction information indicates that the target object returns to the control area; a seventh generation module, configured to generate, when the first interaction information indicates that the predetermined change occurs in the face of the target object, the second interaction information for playing audio or video corresponding to the predetermined change; and an eighth generation module, configured to generate, when the first interaction information indicates that the predetermined action occurs on the hand of the target object, the second interaction information of the voice or action corresponding to the predetermined action.
Optionally, the second obtaining unit includes: and the searching module is used for searching the interaction form information corresponding to the target intelligent home equipment in the intelligent home equipment-virtual image mapping relation by taking the target intelligent home equipment as an index.
Optionally, the first control unit includes: a determining module for determining equipment of the virtual object based on the interactive form information; and the control module is used for controlling the virtual object to be presented in the display component in the facial expression corresponding to the equipment and the second interaction information.
Optionally, the control device of the avatar-based smart home system further includes: and the second control unit is used for controlling the virtual object to interact with the target object according to the sound characteristic information corresponding to the second interaction information in the process of controlling the virtual object to be presented on the display component in the facial expression corresponding to the equipment and the second interaction information.
According to another aspect of the embodiments of the present invention, there is also provided a control system of an avatar-based smart home system using any one of the above-described control methods of an avatar-based smart home system.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium including a stored program, wherein the program performs any one of the above-described avatar-based smart home system control methods.
According to another aspect of the embodiment of the present invention, there is further provided a processor, configured to execute a program, where the program executes any one of the above-mentioned control methods of the avatar-based smart home system.
In an embodiment of the invention, when generation of a wake-up condition is detected, a target smart home device corresponding to the wake-up condition in the smart home system is triggered to start an interaction mode, wherein the wake-up condition is obtained through machine learning training based on historical interaction data and is used for triggering a predetermined smart home device in the smart home system to enter the interaction mode; first interaction information generated by the target object that triggered the wake-up condition is acquired, wherein the first interaction information is information generated based on voice information issued by the target object or current state information of the target object; interaction form information of a virtual object corresponding to the target smart home device is obtained, wherein the interaction form information at least includes: sound characteristic information of the virtual object, face information of the virtual object, and equipment information of the virtual object; and the virtual object is controlled to be presented on the display component of the target smart home device according to the interaction form information, second interaction information is generated based on the first interaction information, and the second interaction information is sent to the virtual object so that the virtual object interacts with the target object based on the second interaction information.
Through the above technical solution, the virtual person corresponding to the smart home device is woken up according to a voice command issued by the user or the user's current state, and the virtual person interacts with the user through an image and tone the user has personally customized. This achieves the technical effect of diversified interaction between the smart home system and the user and meets the user's personalized needs, thereby solving the technical problems in the related art that interaction between the smart home system and the user is single, the user can only make simple use of the system, and it is difficult to meet the user's personalized needs.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
fig. 1 is a hardware block diagram of a mobile terminal of a control method of an avatar-based smart home system according to an embodiment of the present invention;
fig. 2 is a flowchart of a control method of an avatar-based smart home system according to an embodiment of the present invention;
fig. 3 is a flowchart of an alternative avatar-based smart home system control method according to an embodiment of the present invention;
fig. 4 is a schematic view of a control device of an avatar-based smart home system according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
As described in the background art, in the related art, the interaction between the smart home system and the user is single, and the user can only simply use the smart home system, so that the personalized requirement of the user is difficult to meet. In view of the above drawbacks, the embodiments of the present invention provide a method and apparatus for controlling an avatar-based smart home system.
To this end, a virtual human technology is provided that communicates through multiple senses such as voice, vision and hearing, endowing the home-brain avatar with a persona, character and emotions, so that interaction with the user carries a degree of emotional connection. This scheme can improve the user experience of the smart home system, enhance the emotional interaction between the user and the system, and better meet the user's needs. With people's pursuit of quality of life and the continuous development of science and technology, smart home systems will become increasingly widespread. The virtual human technology provided by this patent can not only improve the user experience of the smart home system, but can also play a broader role in future social life, for example in fields such as medical care, education and entertainment, where it can enable more intelligent and humanized interaction.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
The method embodiments provided in the embodiments of the present invention may be performed in a mobile terminal, a computer terminal or similar computing device. Taking the operation on the mobile terminal as an example, fig. 1 is a hardware structure block diagram of the mobile terminal of a control method of an intelligent home system based on an avatar according to an embodiment of the present invention. As shown in fig. 1, a mobile terminal may include one or more (only one is shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a microprocessor MCU or a processing device such as a programmable logic device FPGA) and a memory 104 for storing data, wherein the mobile terminal may also include a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and not limiting of the structure of the mobile terminal described above. For example, the mobile terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be used to store computer programs, such as software programs and modules of application software, for example a computer program corresponding to the control method of the avatar-based smart home system in the embodiment of the present invention; the processor 102 executes the computer programs stored in the memory 104 to perform various functional applications and data processing, i.e., to implement the above-mentioned method. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the mobile terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by the communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network Interface Controller (NIC) that can connect to other network devices through a base station so as to communicate with the internet. In another example, the transmission device 106 may be a Radio Frequency (RF) module for communicating with the internet wirelessly.
According to an embodiment of the present invention, there is provided a method embodiment of a control method of an avatar-based smart home system, it should be noted that the steps shown in the flowchart of the drawings may be performed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowchart, in some cases, the steps shown or described may be performed in an order different from that herein.
Fig. 2 is a flowchart of a control method of an avatar-based smart home system according to an embodiment of the present invention, as shown in fig. 2, the method including the steps of:
step S202, when the generation of a wake-up condition is detected, triggering a target intelligent home device corresponding to the wake-up condition in the intelligent home system to start an interaction mode, wherein the wake-up condition is obtained through machine learning training based on historical interaction data and is used for triggering a preset intelligent home device in the intelligent home system to enter the interaction mode.
Optionally, the wake-up condition is used for triggering the corresponding smart home device in the smart home system to enter the interaction mode and executing the corresponding action.
The above-described embodiments of the present invention will be described in detail with reference to fig. 3, which is a flowchart of an alternative avatar-based smart home system control method according to an embodiment of the present invention. As shown in fig. 3, a user can use the smart home APP to customize the virtual human's image, persona, character, emotion and other characteristics according to his or her own preferences and needs, so as to meet personalized requirements. For example, the user can design a smart home virtual human figure named "Small A" based on the voice and face of a young boy, then perform operations such as skin changing, voice customization and intonation adjustment on it, and can control the virtual human to execute corresponding actions by issuing voice instructions.
According to the above embodiment of the present invention, the wake-up condition may be generated in the following scenario: generating a wake-up condition when the target object is monitored to leave a control area of the intelligent home system; generating a wake-up condition when the target object is monitored to return to the control area; generating a wake-up condition when a voice control instruction of a target object is received; generating a wake-up condition when a predetermined change of the face of the target object is monitored; and generating a wake-up condition when the hand of the target object is monitored to send out a preset action.
It should be noted that the above-mentioned predetermined change and predetermined action may be regarded as a change in emotion of the user, and the specific change is not limited thereto.
For example, when it is monitored that the user leaves the control area of the smart home system, a wake-up condition may be generated to control the corresponding smart devices to perform actions, such as turning off the smart bulb, the air conditioner, etc., so as to avoid wasting resources. When it is monitored that the user enters or returns to the control area, the corresponding smart devices are controlled to perform actions, such as turning on the smart bulb, the air conditioner, etc., to provide a comfortable experience for the user. When a change in the user's facial expression is monitored, for example a tired expression, a wake-up condition may be generated to control the Bluetooth speaker to play soothing music, dim the smart bulb, and so on, creating a restful atmosphere for the user. When the user's hand is monitored making an indicating gesture, for example when the user wants to put down a cup or fetch a snack placed some distance away, a wake-up condition may be generated to control the smart robot to help the user complete the corresponding action, providing a convenience service.
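The example scenarios above can be summarized as an event-to-actions table; the device names and command strings below are illustrative assumptions, not details from the patent:

```python
# Monitored wake-up events mapped to (device, command) pairs,
# mirroring the scenarios described above.
WAKE_ACTIONS = {
    "user_left":     [("smart_bulb", "off"), ("air_conditioner", "off")],
    "user_returned": [("smart_bulb", "on"), ("air_conditioner", "on")],
    "tired_face":    [("bt_speaker", "play_soothing"), ("smart_bulb", "dim")],
    "hand_gesture":  [("smart_robot", "fetch_item")],
}

def actions_for(event: str) -> list:
    """Return the device commands to issue when the given wake-up event fires."""
    return WAKE_ACTIONS.get(event, [])
```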
Step S204, first interaction information generated based on the target object triggering the wake-up condition is obtained, wherein the first interaction information is information generated based on voice information sent by the target object or current state information of the target object.
Optionally, the first interaction information is information generated based on a voice control instruction issued by the user, a change in the user's facial expression, a hand motion, or the like.
According to the above embodiment of the present invention, in the step S204, acquiring the first interaction information generated by the target object triggering the wake-up condition includes: generating first interaction information based on the leaving action of the target object when the wake-up condition is generated because the target object leaves a control area of the smart home system; generating first interaction information based on the returning action of the target object when the wake-up condition is generated because the target object returns to the control area of the smart home system; generating first interaction information based on a predetermined change when the wake-up condition is generated because the predetermined change occurs in the face of the target object; and generating first interaction information based on a predetermined action when the wake-up condition is generated because the hand of the target object makes the predetermined action.
For example, when the user leaves or returns to the control area of the smart home system, when the user's facial expression changes (such as a tired expression), or when the user's hand makes an indicating gesture (such as pointing at a snack placed out of reach), interaction information is generated based on the user's change or instruction to direct the corresponding smart device to perform an action.
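The first interaction information is essentially a record of which trigger fired and any detail captured with it. A minimal sketch, assuming a simple dictionary representation (the event names and "kind" labels are illustrative assumptions):

```python
def build_first_interaction_info(event: str, detail=None) -> dict:
    """Wrap the triggering event (leaving/returning the control area, a voice
    instruction, a facial change, or a hand gesture) as first interaction
    information."""
    kinds = {
        "leave_area": "leaving_action",
        "enter_area": "return_action",
        "voice_command": "voice",
        "tired_face": "facial_change",
        "hand_gesture": "gesture",
    }
    # "detail" carries the raw observation, e.g. the recognized speech text
    # or the gesture target; it may be absent for simple triggers.
    return {"kind": kinds.get(event, "unknown"), "detail": detail}
```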
Step S206, obtaining interaction form information of a virtual object corresponding to the target intelligent home equipment, wherein the interaction form information at least comprises: sound characteristic information of the virtual object, face information of the virtual object, and equipment information of the virtual object.
Optionally, the interaction form information is used to display the image of the virtual person that interacts with the user. According to the above embodiment of the present invention, before the step S206, that is, before the interaction form information of the virtual object corresponding to the target smart home device is acquired, the control method of the avatar-based smart home system further includes: acquiring a virtual form setting instruction sent by the target object, where the virtual form setting instruction carries avatar setting information of the virtual object for each smart home device in the smart home system, including the target smart home device; parsing the virtual form setting instruction to obtain the avatar setting information of the virtual object for each smart home device, where the avatar setting information represents the presentation mode of the virtual object when interacting with the target object; generating a smart home device-avatar mapping relationship between each smart home device and the avatar setting information set for it; and storing the smart home device-avatar mapping relationship.
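The setting, parsing, and storing steps above can be sketched as follows; the JSON instruction format and field names are hypothetical assumptions, not a format defined by the patent:

```python
import json

def store_avatar_mapping(instruction_json: str) -> dict:
    """Parse a virtual form setting instruction and build the smart home
    device -> avatar-setting mapping relationship."""
    # Assumed shape: {"device name": {avatar settings for that device}, ...}
    settings = json.loads(instruction_json)
    # Each entry records how the avatar is presented on that device.
    return {device: avatar for device, avatar in settings.items()}
```

In practice the resulting mapping would be persisted (for example in a local database) so that it can later be queried with a device name as the index.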
As shown in fig. 3, voice recognition technology can be used to configure voices for a mobile phone, an air conditioner, and a speaker, and the avatar "Xiao A" can be rendered on a smart central control screen and a refrigerator screen, so as to realize interaction between the user and the virtual person, where the user can communicate with the virtual person through multiple senses such as speech, vision, and hearing.
In addition, an emotional connection with the user can be established through characteristics such as Xiao A's appearance, persona, personality, and emotions, together with the multi-sensory interaction mode; the virtual person may also exhibit different emotional states (e.g., happy, angry, sad) for emotional interaction with the user.
According to the above embodiment of the present invention, in the step S206, the obtaining the interaction form information of the virtual object corresponding to the target smart home device includes: and searching interaction form information corresponding to the target intelligent home equipment in the intelligent home equipment-virtual image mapping relation by taking the target intelligent home equipment as an index.
The mapping relation between each smart home device and its corresponding avatar may be stored in advance. Then, when the interaction form information of the avatar corresponding to a given smart home device is to be acquired, that smart home device is used as an index, and the corresponding interaction form information is looked up in the stored smart home device-avatar mapping relationship.
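The lookup step reduces to indexing the stored mapping by device name. A minimal sketch (the device names and avatar fields are illustrative assumptions):

```python
# Pre-stored smart-home-device -> avatar mapping (illustrative contents).
DEVICE_AVATAR_MAP = {
    "air_conditioner": {"voice": "soft", "face": "xiao_a", "equipment": "scarf"},
    "fridge_screen": {"voice": "bright", "face": "xiao_a", "equipment": "apron"},
}

def lookup_interaction_form(target_device: str):
    """Use the target device as the index into the stored mapping relationship;
    returns None if no avatar has been configured for that device."""
    return DEVICE_AVATAR_MAP.get(target_device)
```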
Step S208, the virtual object is controlled to be presented in the display component of the target intelligent home equipment according to the interaction form information, second interaction information is generated based on the first interaction information, and the second interaction information is sent to the virtual object, so that the virtual object interacts with the target object based on the second interaction information.
Optionally, the display component may include, but is not limited to, electronic screens, display panels, and other components capable of displaying content.
According to the above embodiment of the present invention, in the step S208, controlling the virtual object to be presented in the display component of the target smart home device according to the interaction form information includes: determining the equipment of the virtual object based on the interaction form information; and controlling the virtual object to be presented on the display component with the equipment and the facial expression corresponding to the second interaction information.
For example, the equipment information of the virtual person may be determined from the acquired interaction form information of the virtual object corresponding to the smart home device that is to be controlled to perform an action, and the virtual person may then be presented on the display screen of that smart home device with this equipment and the corresponding facial expression.
According to the above embodiment of the present invention, in the above step S208, generating the second interaction information based on the first interaction information includes: generating second interaction information for seeing the target object off when the first interaction information indicates that the target object leaves the control area; generating second interaction information for welcoming the target object back to the control area when the first interaction information indicates that the target object returns to the control area; generating second interaction information for playing audio or video corresponding to a predetermined change when the first interaction information indicates that the predetermined change occurs in the face of the target object; and generating second interaction information of voice or action corresponding to a predetermined action when the first interaction information indicates that the hand of the target object makes the predetermined action.
For example, when the user leaves the control area of the smart system, second interaction information is generated so that, based on it, the virtual person corresponding to the relevant smart device sees the user off; when the user enters or returns to the control area of the smart system, second interaction information is generated so that the virtual person greets the user; when the user's facial expression changes (for example, the user looks tired), second interaction information is generated so that the virtual person plays soothing music, a sleep-aiding story video, or the like for the user; and when the user's hand makes an indicating gesture (for example, raising a hand toward a snack placed out of reach), second interaction information is generated so that the virtual person helps the user complete the corresponding action, thereby providing a convenient service and improving the user experience.
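The four cases above can be read as a table from the kind of first interaction information to the avatar's response. A hedged sketch, assuming the first interaction information is a dictionary with a "kind" field (all names and responses below are illustrative assumptions):

```python
# Illustrative response table; a real system's mapping would be richer.
RESPONSES = {
    "leaving_action": {"action": "see_off", "speech": "Goodbye, see you soon!"},
    "return_action": {"action": "welcome", "speech": "Welcome back!"},
    "facial_change": {"action": "play_media", "media": "soothing_music"},
    "gesture": {"action": "fetch_item"},
}

def build_second_interaction_info(first_info: dict) -> dict:
    """Derive the second interaction information to send to the virtual
    object, falling back to an idle response for unrecognized kinds."""
    return RESPONSES.get(first_info.get("kind"), {"action": "idle"})
```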
According to the above embodiment of the present invention, in the above step S208, the control method of the avatar-based smart home system may further include: in the process of controlling the virtual object to be presented on the display component with the equipment and facial expression corresponding to the second interaction information, controlling the virtual object to interact with the target object according to the sound characteristic information corresponding to the second interaction information.
The virtual person may be controlled to interact with the user in a tone, intonation, or the like corresponding to the second interaction information.
According to the method, when the generation of a wake-up condition is detected, the target smart home device corresponding to the wake-up condition in the smart home system is triggered to start an interaction mode, where the wake-up condition is obtained through machine learning training based on historical interaction data and is used to trigger a predetermined smart home device in the smart home system to enter the interaction mode; first interaction information generated by the target object triggering the wake-up condition is acquired, where the first interaction information is generated based on voice information sent by the target object or the current state information of the target object; interaction form information of the virtual object corresponding to the target smart home device is acquired, where the interaction form information at least includes: sound characteristic information of the virtual object, face information of the virtual object, and equipment information of the virtual object; and the virtual object is controlled to be presented on the display component of the target smart home device according to the interaction form information, second interaction information is generated based on the first interaction information, and the second interaction information is sent to the virtual object so that the virtual object interacts with the target object based on the second interaction information. In this way, the virtual person corresponding to the smart home device is woken up according to a voice instruction sent by the user or the current state of the user, and interacts with the user in the user's personalized, customized image and tone, thereby achieving the technical effect of diversified interaction between the smart home system and the user and meeting the personalized requirements of the user.
Therefore, the technical solution provided by the embodiments of the present invention solves the technical problems in the related art that interaction between a smart home system and a user is monotonous, that the user can only use the smart home system in a simple manner, and that the personalized requirements of the user are difficult to meet.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
From the description of the above embodiments, it will be clear to those skilled in the art that the methods according to the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or by means of hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application, in essence or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
According to an embodiment of the present invention, there is also provided a control device of an avatar-based smart home system for implementing the control method of an avatar-based smart home system, and fig. 4 is a schematic diagram of the control device of an avatar-based smart home system according to an embodiment of the present invention, as shown in fig. 4, the device including: a triggering unit 41, a first acquisition unit 43, a second acquisition unit 45 and a first control unit 47. The control device of the avatar-based smart home system will be described in detail.
The triggering unit 41 is configured to trigger, when detecting that a wake-up condition is generated, a target smart home device in the smart home system corresponding to the wake-up condition to start an interaction mode, where the wake-up condition is obtained through machine learning training based on historical interaction data, and is configured to trigger a predetermined smart home device in the smart home system to enter the interaction mode.
The first obtaining unit 43 is configured to obtain first interaction information generated based on the target object triggering the wake-up condition, where the first interaction information is information generated based on voice information sent by the target object or current state information of the target object.
The second obtaining unit 45 is configured to obtain interaction form information of a virtual object corresponding to the target smart home device, where the interaction form information at least includes: sound characteristic information of the virtual object, face information of the virtual object, and equipment information of the virtual object.
The first control unit 47 is configured to control the virtual object to be presented in the display component of the target smart home device according to the interaction form information, generate second interaction information based on the first interaction information, and send the second interaction information to the virtual object, so that the virtual object interacts with the target object based on the second interaction information.
Here, the triggering unit 41, the first acquiring unit 43, the second acquiring unit 45, and the first control unit 47 correspond to steps S202 to S208 in the above embodiment, and the four units are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the above embodiment.
As can be seen from the above, in the solution described in the foregoing embodiment of the present invention, when the triggering unit detects that a wake-up condition is generated, it triggers the target smart home device corresponding to the wake-up condition in the smart home system to start an interaction mode, where the wake-up condition is obtained through machine learning training based on historical interaction data and is used to trigger a predetermined smart home device in the smart home system to enter the interaction mode; the first acquisition unit then acquires first interaction information generated by the target object triggering the wake-up condition, where the first interaction information is generated based on voice information sent by the target object or the current state information of the target object; the second acquisition unit then acquires interaction form information of the virtual object corresponding to the target smart home device, where the interaction form information at least includes: sound characteristic information of the virtual object, face information of the virtual object, and equipment information of the virtual object; finally, the first control unit controls the virtual object to be presented on the display component of the target smart home device according to the interaction form information, generates second interaction information based on the first interaction information, and sends the second interaction information to the virtual object so that the virtual object interacts with the target object based on the second interaction information. In this way, the virtual person corresponding to the smart home device is woken up according to a voice instruction sent by the user or the current state of the user, and interacts with the user in the user's customized image and tone, thereby achieving the technical effect of diversified interaction between the smart home system and the user and meeting the personalized requirements of the user.
Therefore, the technical solution provided by the embodiments of the present invention solves the technical problems in the related art that interaction between a smart home system and a user is monotonous, that the user can only use the smart home system in a simple manner, and that the personalized requirements of the user are difficult to meet.
Optionally, the control device of the avatar-based smart home system further includes: a third acquisition unit, configured to acquire, before the interaction form information of the virtual object corresponding to the target smart home device is acquired, a virtual form setting instruction sent by the target object, where the virtual form setting instruction carries avatar setting information of the virtual object for each smart home device in the smart home system, including the target smart home device; a parsing unit, configured to parse the virtual form setting instruction to obtain the avatar setting information of the virtual object for each smart home device, where the avatar setting information represents the presentation mode of the virtual object when interacting with the target object; a first generation unit, configured to generate a smart home device-avatar mapping relationship between each smart home device and the avatar setting information set for it; and a storage unit, configured to store the smart home device-avatar mapping relationship.
Optionally, the control device further includes units configured to generate the wake-up condition in the following scenarios: a second generation unit, configured to generate a wake-up condition when it is monitored that the target object leaves a control area of the smart home system; a third generation unit, configured to generate a wake-up condition when it is monitored that the target object returns to the control area; a fourth generation unit, configured to generate a wake-up condition when a voice control instruction of the target object is received; a fifth generation unit, configured to generate a wake-up condition when a predetermined change of the face of the target object is monitored; and a sixth generation unit, configured to generate a wake-up condition when it is monitored that the hand of the target object makes a predetermined action.
Optionally, the first acquisition unit includes: a first generation module, configured to generate first interaction information based on the leaving action of the target object when the wake-up condition is generated because the target object leaves a control area of the smart home system; a second generation module, configured to generate first interaction information based on the returning action of the target object when the wake-up condition is generated because the target object returns to the control area of the smart home system; a third generation module, configured to generate first interaction information based on a predetermined change when the wake-up condition is generated because the predetermined change occurs in the face of the target object; and a fourth generation module, configured to generate first interaction information based on a predetermined action when the wake-up condition is generated because the hand of the target object makes the predetermined action.
Optionally, the first control unit includes: a fifth generation module, configured to generate second interaction information for seeing the target object off when the first interaction information indicates that the target object leaves the control area; a sixth generation module, configured to generate second interaction information for welcoming the target object back to the control area when the first interaction information indicates that the target object returns to the control area; a seventh generation module, configured to generate second interaction information for playing audio or video corresponding to a predetermined change when the first interaction information indicates that the predetermined change occurs in the face of the target object; and an eighth generation module, configured to generate second interaction information of voice or action corresponding to a predetermined action when the first interaction information indicates that the hand of the target object makes the predetermined action.
Optionally, the second acquisition unit includes: and the searching module is used for searching interaction form information corresponding to the target intelligent home equipment in the intelligent home equipment-virtual image mapping relation by taking the target intelligent home equipment as an index.
Optionally, the first control unit includes: a determining module, configured to determine the equipment of the virtual object based on the interaction form information; and a control module, configured to control the virtual object to be presented on the display component with the equipment and the facial expression corresponding to the second interaction information.
Optionally, the control device of the avatar-based smart home system further includes: and the second control unit is used for controlling the virtual object to interact with the target object according to the sound characteristic information corresponding to the second interaction information in the process of controlling the virtual object to be presented on the display component according to the equipment and the facial expression corresponding to the second interaction information.
According to another aspect of the embodiments of the present invention, there is also provided a control system of an avatar-based smart home system, which uses any one of the above-described control methods of an avatar-based smart home system.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium including a stored program, wherein the program performs any one of the above-described avatar-based smart home system control methods.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may be located in any one of a group of computer terminals in a computer network, or in any one of a group of communication devices.
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the steps of: when the generation of a wake-up condition is detected, triggering a target intelligent home device corresponding to the wake-up condition in the intelligent home system to start an interaction mode, wherein the wake-up condition is obtained through machine learning training based on historical interaction data and is used for triggering a preset intelligent home device in the intelligent home system to enter the interaction mode; acquiring first interaction information generated by a target object based on a trigger wake-up condition, wherein the first interaction information is information generated based on voice information sent by the target object or current state information of the target object; the method comprises the steps of obtaining interaction form information of a virtual object corresponding to target intelligent home equipment, wherein the interaction form information at least comprises the following steps: sound characteristic information of the virtual object, face information of the virtual object, equipment information of the virtual object; and controlling the virtual object to be presented in the display part of the target intelligent home equipment according to the interaction form information, generating second interaction information based on the first interaction information, and sending the second interaction information to the virtual object so that the virtual object interacts with the target object based on the second interaction information.
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the following steps: acquiring a virtual form setting instruction sent by the target object, where the virtual form setting instruction carries avatar setting information of the virtual object for each smart home device in the smart home system, including the target smart home device; parsing the virtual form setting instruction to obtain the avatar setting information of the virtual object for each smart home device, where the avatar setting information represents the presentation mode of the virtual object when interacting with the target object; generating a smart home device-avatar mapping relationship between each smart home device and the avatar setting information set for it; and storing the smart home device-avatar mapping relationship.
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the following steps: generating a wake-up condition when it is monitored that the target object leaves a control area of the smart home system; generating a wake-up condition when it is monitored that the target object returns to the control area; generating a wake-up condition when a voice control instruction of the target object is received; generating a wake-up condition when a predetermined change of the face of the target object is monitored; and generating a wake-up condition when it is monitored that the hand of the target object makes a predetermined action.
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the following steps: generating first interaction information based on the leaving action of the target object when the wake-up condition is generated because the target object leaves a control area of the smart home system; generating first interaction information based on the returning action of the target object when the wake-up condition is generated because the target object returns to the control area of the smart home system; generating first interaction information based on a predetermined change when the wake-up condition is generated because the predetermined change occurs in the face of the target object; and generating first interaction information based on a predetermined action when the wake-up condition is generated because the hand of the target object makes the predetermined action.
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the following steps: generating second interaction information for seeing the target object off when the first interaction information indicates that the target object leaves the control area; generating second interaction information for welcoming the target object back to the control area when the first interaction information indicates that the target object returns to the control area; generating second interaction information for playing audio or video corresponding to a predetermined change when the first interaction information indicates that the predetermined change occurs in the face of the target object; and generating second interaction information of voice or action corresponding to a predetermined action when the first interaction information indicates that the hand of the target object makes the predetermined action.
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the steps of: and searching interaction form information corresponding to the target intelligent home equipment in the intelligent home equipment-virtual image mapping relation by taking the target intelligent home equipment as an index.
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the following steps: determining the equipment of the virtual object based on the interaction form information; and controlling the virtual object to be presented on the display component with the equipment and the facial expression corresponding to the second interaction information.
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the following step: in the process of controlling the virtual object to be presented on the display component with the equipment and facial expression corresponding to the second interaction information, controlling the virtual object to interact with the target object according to the sound characteristic information corresponding to the second interaction information.
According to another aspect of the embodiments of the present invention, there is also provided a processor for running a program, where the program, when run, executes any one of the above-described control methods of the avatar-based smart home system.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present invention, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology content may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of the units, for example, may be a logic function division, and may be implemented in another manner, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art may make several improvements and modifications without departing from the principles of the present invention, and such improvements and modifications shall also fall within the protection scope of the present invention.

Claims (10)

1. A control method of an avatar-based smart home system, comprising:
when the generation of a wake-up condition is detected, triggering a target smart home device corresponding to the wake-up condition in the smart home system to start an interaction mode, wherein the wake-up condition is obtained through machine-learning training based on historical interaction data and is used for triggering a preset smart home device in the smart home system to enter the interaction mode;
acquiring first interaction information generated based on a target object triggering the wake-up condition, wherein the first interaction information is generated based on voice information uttered by the target object or current state information of the target object;
acquiring interaction form information of a virtual object corresponding to the target smart home device, wherein the interaction form information at least comprises: sound characteristic information of the virtual object, face information of the virtual object, and equipment information of the virtual object;
and controlling the virtual object to be presented in the display component of the target smart home device according to the interaction form information, generating second interaction information based on the first interaction information, and sending the second interaction information to the virtual object, so that the virtual object interacts with the target object based on the second interaction information.
2. The control method of the avatar-based smart home system according to claim 1, further comprising, before acquiring the interaction form information of the virtual object corresponding to the target smart home device:
acquiring a virtual form setting instruction sent by the target object, wherein the virtual form setting instruction carries avatar setting information of the virtual object for each smart home device in the smart home system, including the target smart home device;
parsing the virtual form setting instruction to obtain the avatar setting information of the virtual object for each smart home device, wherein the avatar setting information represents a presentation mode of the virtual object when the virtual object interacts with the target object;
generating a smart home device-avatar mapping relationship between each smart home device and the avatar setting information set for that smart home device;
and storing the smart home device-avatar mapping relationship.
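The device-avatar mapping of claims 2 and 6 can be pictured as a simple keyed store: parsed avatar settings are recorded per device, and the target device later serves as the lookup index. The following Python is purely an illustrative sketch; every class, field, and identifier name (AvatarSettings, AvatarMappingStore, "living_room_ac", and so on) is an assumption and not part of the disclosure.

```python
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class AvatarSettings:
    """Illustrative avatar setting information for one smart home device."""
    voice_profile: str  # sound characteristic information
    face: str           # face information
    equipment: str      # equipment (outfit) information


class AvatarMappingStore:
    """Sketch of the claimed smart home device-avatar mapping relationship."""

    def __init__(self) -> None:
        self._mapping: dict[str, AvatarSettings] = {}

    def set_avatar(self, device_id: str, settings: AvatarSettings) -> None:
        # Store the parsed result of a "virtual form setting instruction" per device.
        self._mapping[device_id] = settings

    def lookup(self, device_id: str) -> AvatarSettings | None:
        # Claim 6: use the target device as an index into the stored mapping.
        return self._mapping.get(device_id)


store = AvatarMappingStore()
store.set_avatar("living_room_ac",
                 AvatarSettings("warm_female", "smiling", "casual"))
print(store.lookup("living_room_ac").face)  # smiling
```

A device with no stored settings simply yields `None`, so a caller could fall back to a default presentation mode.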
3. The control method of the avatar-based smart home system according to claim 1, wherein the wake-up condition is generated in any one of the following scenarios:
generating the wake-up condition when the target object is monitored to leave a control area of the smart home system;
generating the wake-up condition when the target object is monitored to return to the control area;
generating the wake-up condition when a voice control instruction of the target object is received;
generating the wake-up condition when a predetermined change of the face of the target object is monitored;
and generating the wake-up condition when the hand of the target object is monitored to make a predetermined action.
4. The control method of the avatar-based smart home system according to claim 3, wherein acquiring the first interaction information generated based on the target object triggering the wake-up condition comprises:
generating the first interaction information based on a leaving action of the target object when the wake-up condition is generated in the scenario in which the target object leaves the control area of the smart home system;
generating the first interaction information based on a return action of the target object when the wake-up condition is generated in the scenario in which the target object returns to the control area of the smart home system;
generating the first interaction information based on the predetermined change when the wake-up condition is generated in the scenario in which the predetermined change occurs in the face of the target object;
and generating the first interaction information based on the predetermined action when the wake-up condition is generated in the scenario in which the hand of the target object makes the predetermined action.
5. The control method of the avatar-based smart home system according to claim 4, wherein generating the second interaction information based on the first interaction information comprises:
generating the second interaction information bidding farewell to the target object when the first interaction information indicates that the target object leaves the control area;
generating the second interaction information welcoming the target object back when the first interaction information indicates that the target object returns to the control area;
generating the second interaction information for playing audio or video corresponding to the predetermined change when the first interaction information indicates that the predetermined change occurs in the face of the target object;
and generating the second interaction information of the voice or action corresponding to the predetermined action when the first interaction information indicates that the hand of the target object makes the predetermined action.
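Claim 5 is essentially a dispatch from the scenario carried by the first interaction information to a response type. The sketch below illustrates one way this could look in Python; the dictionary keys, scenario strings, and response contents are illustrative assumptions only.

```python
def generate_second_interaction(first: dict) -> dict:
    """Sketch of claim 5: derive second interaction info from the first."""
    scenario = first["scenario"]
    if scenario == "left":
        # Target object left the control area: say farewell.
        return {"action": "speak", "content": "farewell"}
    if scenario == "returned":
        # Target object returned: welcome them back.
        return {"action": "speak", "content": "welcome_back"}
    if scenario == "face_change":
        # Play audio or video matched to the observed facial change.
        return {"action": "play_media", "content": first["change"]}
    if scenario == "gesture":
        # Respond with voice or motion matched to the hand gesture.
        return {"action": "voice_or_motion", "content": first["gesture"]}
    raise ValueError(f"unknown scenario: {scenario}")
```

The resulting dictionary would then be sent to the virtual object, which performs the described interaction with the target object.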
6. The control method of the avatar-based smart home system according to claim 2, wherein acquiring the interaction form information of the virtual object corresponding to the target smart home device comprises:
searching, with the target smart home device as an index, the smart home device-avatar mapping relationship for the interaction form information corresponding to the target smart home device.
7. The control method of the avatar-based smart home system according to claim 1, wherein controlling the virtual object to be presented in the display component of the target smart home device according to the interaction form information comprises:
determining the equipment of the virtual object based on the interaction form information;
and controlling the virtual object to be presented in the display component with the equipment and the facial expression corresponding to the second interaction information.
8. The control method of the avatar-based smart home system according to claim 7, further comprising:
in the process of controlling the virtual object to be presented on the display component with the equipment and the facial expression corresponding to the second interaction information, controlling the virtual object to interact with the target object according to the sound characteristic information corresponding to the second interaction information.
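Claims 7 and 8 combine three pieces of the interaction form information at presentation time: the equipment, a facial expression matched to the second interaction information, and the voice characteristic for speech. A minimal Python sketch is given below; the `Display` class, the expression table, and all parameter names are stand-ins invented for illustration.

```python
def expression_for(second: dict) -> str:
    """Hypothetical mapping from second interaction content to an expression."""
    return {"farewell": "waving", "welcome_back": "smiling"}.get(
        second["content"], "neutral")


class Display:
    """Stand-in for the display component of the target smart home device."""

    def __init__(self) -> None:
        self.frames = []  # records of everything rendered or spoken

    def render(self, equipment: str, expression: str) -> None:
        self.frames.append((equipment, expression))

    def speak(self, text: str, voice: str) -> None:
        self.frames.append(("speak", text, voice))


def present_virtual_object(display: Display, equipment: str,
                           voice: str, second: dict) -> None:
    # Claim 7: present with the equipment and an expression matching
    # the second interaction information.
    display.render(equipment=equipment, expression=expression_for(second))
    # Claim 8: during presentation, interact using the sound
    # characteristic information.
    display.speak(text=second["content"], voice=voice)
```

In this sketch the rendering and the voice interaction are sequential calls; a real device would drive them concurrently so the expression is shown while the speech plays.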
9. A control device of an avatar-based smart home system, comprising:
a triggering unit, configured to trigger, when the generation of a wake-up condition is detected, a target smart home device corresponding to the wake-up condition in the smart home system to start an interaction mode, wherein the wake-up condition is obtained through machine-learning training based on historical interaction data and is used for triggering a preset smart home device in the smart home system to enter the interaction mode;
a first acquisition unit, configured to acquire first interaction information generated based on a target object triggering the wake-up condition, wherein the first interaction information is generated based on voice information uttered by the target object or current state information of the target object;
a second acquisition unit, configured to acquire interaction form information of a virtual object corresponding to the target smart home device, wherein the interaction form information at least comprises: sound characteristic information of the virtual object, face information of the virtual object, and equipment information of the virtual object;
and a first control unit, configured to control the virtual object to be presented in the display component of the target smart home device according to the interaction form information, generate second interaction information based on the first interaction information, and send the second interaction information to the virtual object, so that the virtual object interacts with the target object based on the second interaction information.
10. A processor, wherein the processor is configured to run a program, and the program, when run, performs the control method of the avatar-based smart home system according to any one of claims 1 to 8.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311799494.XA CN117742167A (en) 2023-12-25 2023-12-25 Control method and device of intelligent home system based on virtual image


Publications (1)

Publication Number Publication Date
CN117742167A true CN117742167A (en) 2024-03-22

Family

ID=90279394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311799494.XA Pending CN117742167A (en) 2023-12-25 2023-12-25 Control method and device of intelligent home system based on virtual image

Country Status (1)

Country Link
CN (1) CN117742167A (en)

Similar Documents

Publication Publication Date Title
US11327556B2 (en) Information processing system, client terminal, information processing method, and recording medium
CN107894833B (en) Multi-modal interaction processing method and system based on virtual human
CN111788621B (en) Personal virtual digital assistant
US11271765B2 (en) Device and method for adaptively providing meeting
KR102558437B1 (en) Method For Processing of Question and answer and electronic device supporting the same
US11922934B2 (en) Generating response in conversation
CN108886532A (en) Device and method for operating personal agent
CN107632706B (en) Application data processing method and system of multi-modal virtual human
CN107340865A (en) Multi-modal virtual robot exchange method and system
CN105141587B (en) A kind of virtual puppet interactive approach and device
CN107704169B (en) Virtual human state management method and system
US11610092B2 (en) Information processing system, information processing apparatus, information processing method, and recording medium
CN109327737A (en) TV programme suggesting method, terminal, system and storage medium
WO2019133689A1 (en) System and method for selective animatronic peripheral response for human machine dialogue
US20150004576A1 (en) Apparatus and method for personalized sensory media play based on the inferred relationship between sensory effects and user's emotional responses
US20190248001A1 (en) Conversation output system, conversation output method, and non-transitory recording medium
EP3756188A1 (en) System and method for dynamic robot configuration for enhanced digital experiences
WO2019133680A1 (en) System and method for detecting physical proximity between devices
CN109377979B (en) Method and system for updating welcome language
CN112558911B (en) Voice interaction method and device for massage chair
WO2019160612A1 (en) System and method for dynamic robot profile configurations based on user interactions
CN112596405A (en) Control method, device and equipment of household appliance and computer readable storage medium
WO2019160613A1 (en) System and method for dynamic program configuration
CN115146048A (en) Multi-NPC dialogue text generation and display method, equipment and medium
CN110154048A (en) Control method, control device and the robot of robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination