CN108733209A - Man-machine interaction method, device, robot and storage medium - Google Patents

Man-machine interaction method, device, robot and storage medium

Info

Publication number
CN108733209A
CN108733209A (application CN201810237056.7A)
Authority
CN
China
Prior art keywords
target user
emotion category
facial image
emotion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810237056.7A
Other languages
Chinese (zh)
Inventor
周子傲
王雪松
马健
高宝岚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Orion Star Technology Co Ltd
Original Assignee
Beijing Orion Star Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Orion Star Technology Co Ltd filed Critical Beijing Orion Star Technology Co Ltd
Priority to CN201810237056.7A priority Critical patent/CN108733209A/en
Publication of CN108733209A publication Critical patent/CN108733209A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition

Abstract

The present invention provides a human-machine interaction method and apparatus, a robot, and a storage medium. The method includes: detecting a facial image of a target user; identifying the emotion category of the target user according to the detected facial image; and, according to the emotion category of the target user, performing for the target user an interactive operation corresponding to that emotion category. Embodiments of the present invention make a different interactive operation for each emotion category, so that the robot's emotional expression is consistent with the target user's, and the robot actively varies the feedback it gives according to the recognized emotion category. The robot is thus more human-like and more attentive, and the interactive experience is better.

Description

Man-machine interaction method, device, robot and storage medium
Technical field
The present invention relates to the field of artificial intelligence, and in particular to a human-machine interaction method and apparatus, a robot, and a storage medium.
Background technology
With the development of science and technology, intelligent robots are used ever more widely, for example in medical care, health care, home, entertainment, and service industries. With advances in speech recognition technology, the capabilities of intelligent robots have also grown, and they can converse with users.
Good human-computer interaction is a key factor in evaluating the quality of an intelligent robot. When interacting with a user, an intelligent robot generally does so passively; the interaction style is rather rigid, and the user experience is poor.
Summary of the invention
The present invention provides a human-machine interaction method and apparatus, a robot, and a storage medium, so as to improve the user experience during human-computer interaction.
In a first aspect, the present invention provides a human-machine interaction method, including:
detecting a facial image of a target user;
identifying the emotion category of the target user according to the detected facial image;
according to the emotion category of the target user, performing, for the target user, an interactive operation corresponding to the emotion category.
Optionally, identifying the emotion category of the target user according to the detected facial image includes:
identifying, according to the facial image and using a deep learning algorithm model, the emotion category of the target user corresponding to the facial image.
Optionally, the deep learning algorithm model is generated as follows:
obtaining a number of sample facial images, each sample facial image carrying annotation information of its emotion category;
training the deep learning algorithm model according to the sample facial images.
Optionally, the emotion category includes one or more of: a calm emotion category, an angry emotion category, a sad emotion category, and a happy emotion category.
Optionally, performing, for the target user, an interactive operation corresponding to the emotion category according to the emotion category of the target user includes:
outputting corresponding voice prompt information to the target user according to the emotion category of the target user.
Optionally, if the emotion category is the calm emotion category, the voice prompt information is used to ask the target user what they need;
if the emotion category is the angry emotion category, the voice prompt information is used to ask the target user whether to play music;
if the emotion category is the sad emotion category, the voice prompt information is used to ask the target user whether they would like to be comforted;
if the emotion category is the happy emotion category, the voice prompt information is used to ask the target user whether to take a photo.
Optionally, after the corresponding voice prompt information is output to the target user, the method further includes:
receiving response information that the target user inputs to the robot for the voice prompt information;
if the response information is confirmation of the voice prompt information, performing the operation corresponding to the voice prompt information;
if the response information is a refusal, outputting reply voice information to the target user.
Optionally, the method further includes:
if the response information is not received within a first preset duration, prompting the target user that the interaction has ended.
Optionally, the method further includes:
if no facial image of the target user is detected within a second preset duration, stopping the detection of facial images.
Optionally, the method further includes:
if another operation instruction issued by the target user is received while the interactive operation corresponding to the emotion category is being performed for the target user, performing the operation corresponding to that operation instruction.
In a second aspect, the present invention provides a human-machine interaction apparatus, including:
a detection module, configured to detect a facial image of a target user;
an identification module, configured to identify the emotion category of the target user according to the detected facial image;
a processing module, configured to perform, for the target user, an interactive operation corresponding to the emotion category according to the emotion category of the target user.
Optionally, the identification module is specifically configured to:
identify, according to the facial image and using a deep learning algorithm model, the emotion category of the target user corresponding to the facial image.
Optionally, the apparatus further includes:
a training module, configured to obtain a number of sample facial images, each carrying annotation information of its emotion category,
and to train the deep learning algorithm model according to the sample facial images.
Optionally, the emotion category includes one or more of: a calm emotion category, an angry emotion category, a sad emotion category, and a happy emotion category.
Optionally, the processing module is specifically configured to:
output corresponding voice prompt information to the target user according to the emotion category of the target user.
Optionally, if the emotion category is the calm emotion category, the voice prompt information is used to ask the target user what they need;
if the emotion category is the angry emotion category, the voice prompt information is used to ask the target user whether to play music;
if the emotion category is the sad emotion category, the voice prompt information is used to ask the target user whether they would like to be comforted;
if the emotion category is the happy emotion category, the voice prompt information is used to ask the target user whether to take a photo.
Optionally, the apparatus further includes:
a receiving module, configured to receive, after the corresponding voice prompt information is output to the target user, the response information that the target user inputs to the robot for the voice prompt information.
The processing module is configured to perform the operation corresponding to the voice prompt information if the response information is confirmation of the voice prompt information;
and to output reply voice information to the target user if the response information is a refusal.
Optionally, the processing module is further configured to:
prompt the target user that the interaction has ended if the response information is not received within a first preset duration.
Optionally, the detection module is further configured to:
stop detecting facial images if no facial image of the target user is detected within a second preset duration.
Optionally, the processing module is further configured to:
perform, if another operation instruction issued by the target user is received while the interactive operation corresponding to the emotion category is being performed for the target user, the operation corresponding to that operation instruction.
In a third aspect, the present invention provides a robot, including:
a processor; and a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the steps of any method of the first aspect by executing the executable instructions.
In a fourth aspect, the present invention provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of any method provided in the first aspect are implemented.
In the human-machine interaction method and apparatus, robot, and storage medium provided by embodiments of the present invention, a facial image of a target user is detected; the emotion category of the target user is identified according to the detected facial image; and, according to the emotion category of the target user, an interactive operation corresponding to the emotion category is performed for the target user. A different interactive operation can thus be made for each emotion category, so that the robot's emotional expression is consistent with the target user's, and the robot actively varies the feedback it gives according to the recognized emotion category; the robot is therefore more human-like and more attentive, and the interactive experience is better.
Description of the drawings
The accompanying drawings are incorporated into and form part of this specification; they show embodiments consistent with the disclosure and, together with the specification, serve to explain the principles of the disclosure.
Fig. 1 is a flow diagram of an embodiment of the human-machine interaction method provided by the present invention;
Fig. 2 is a structural diagram of an embodiment of the human-machine interaction apparatus provided by the present invention;
Fig. 3 is a structural diagram of an embodiment of the robot provided by the present invention.
The drawings above show specific embodiments of the disclosure, described in more detail below. The drawings and the written description are not intended to limit the scope of the disclosed concept in any way, but rather to illustrate the concept of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed description of the embodiments
Example embodiments are described in detail here, with examples illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following example embodiments do not represent all implementations consistent with this disclosure; rather, they are merely examples of methods and apparatus consistent with some aspects of the disclosure, as detailed in the appended claims.
The terms "comprising" and "having" in the specification, claims, and drawings, and any variants of them, are intended to cover a non-exclusive inclusion. For example, a process, method, system, product, or device that contains a series of steps or units is not limited to the listed steps or units; it optionally further includes steps or units that are not listed, or other steps or units inherent to the process, method, product, or device.
A scenario to which the present invention applies is described first:
Intelligent robots are becoming increasingly capable: they can converse with users and carry out corresponding functional operations. For example, a user can issue a voice instruction to a robot, instructing it to play music. However, the user generally has to wake the robot actively and interact with it before the robot will execute the corresponding instruction; this interaction style is rigid and inflexible, and the user experience is poor.
In the human-machine interaction method of this embodiment, the robot can actively detect the facial image of a target user and then, according to the target user's emotion category, perform for the target user an interactive operation corresponding to that emotion category, which improves the user experience.
Fig. 1 is a flow diagram of an embodiment of the human-machine interaction method provided by the present invention. As shown in Fig. 1, the method provided by this embodiment includes:
Step 101: detect a facial image of a target user.
Step 102: identify the emotion category of the target user according to the detected facial image.
Specifically, the robot actively detects the facial image of the target user; once a facial image is detected, the robot recognizes it and identifies the emotion category of the target user.
Optionally, if no facial image of the target user is detected within a second preset duration, the robot stops detecting facial images. The second preset duration is, for example, 2 seconds.
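For illustration only, here is a minimal sketch of such a detection loop with a timeout. It assumes an OpenCV Haar-cascade face detector; the disclosure does not name a particular detector, and `detect_face_with_timeout`, the camera index, and the cascade choice are all hypothetical:

```python
import time

import cv2  # assumed detector; the disclosure does not specify one

SECOND_PRESET_DURATION = 2.0  # seconds, per the example value above


def detect_face_with_timeout(camera_index=0, timeout=SECOND_PRESET_DURATION):
    """Return a cropped face image, or None if no face appears in time."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(camera_index)
    deadline = time.monotonic() + timeout
    try:
        while time.monotonic() < deadline:
            ok, frame = cap.read()
            if not ok:
                continue
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                             minNeighbors=5)
            if len(faces) > 0:
                x, y, w, h = faces[0]
                return frame[y:y + h, x:x + w]
        return None  # second preset duration elapsed: stop detecting
    finally:
        cap.release()
```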
Optionally, the emotion category may include one or more of: a calm emotion category, an angry emotion category, a sad emotion category, and a happy emotion category.
Optionally, step 102 may specifically be implemented as follows:
according to the facial image, the emotion category of the target user corresponding to the facial image is identified using a deep learning algorithm model.
That is, the target user's emotion category can be judged by the deep learning algorithm model.
To improve recognition accuracy and efficiency, the deep learning algorithm model is generated, before recognition is performed, as follows:
a number of sample facial images are obtained, each sample facial image carrying annotation information of its emotion category;
the deep learning algorithm model is trained according to the sample facial images.
The model is trained until it can, by itself, identify the target user's current emotion category from a facial image.
In practical applications, the generation and the use of the deep learning algorithm model may be carried out by the same entity or by different entities; the present invention places no restriction on this.
Training relies mainly on a large number of sample facial images that carry annotation information of the emotion category, such as a smile-degree value. The annotation information can be obtained by taking the positions of key feature points in the facial image as feature regions and extracting the feature information of those key points; the key feature points include, for example, the eyebrows, eyelids, lips, and chin.
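As a sketch of this training procedure only: the code below trains a small convolutional network on sample facial images labelled with the four emotion categories. The disclosure fixes neither architecture nor framework nor data layout, so PyTorch, the network shape, the 48x48 grayscale input, and the one-folder-per-label `sample_faces/` layout are all assumptions:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

EMOTIONS = ["calm", "angry", "sad", "happy"]  # the four categories above

# A deliberately small CNN; the disclosure claims a "deep learning
# algorithm model" without fixing an architecture.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 12 * 12, 64), nn.ReLU(),
    nn.Linear(64, len(EMOTIONS)),
)

# Sample facial images arranged one folder per emotion label (the
# annotation information required above); the path is hypothetical.
train_data = datasets.ImageFolder(
    "sample_faces/",
    transform=transforms.Compose([
        transforms.Grayscale(),
        transforms.Resize((48, 48)),
        transforms.ToTensor(),
    ]))
loader = DataLoader(train_data, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```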
Step 103: according to the emotion category of the target user, perform, for the target user, an interactive operation corresponding to the emotion category.
Specifically, according to the emotion category identified above, the interactive operation corresponding to that emotion category is performed. For example, if the emotion category is the calm emotion category, the interactive operation corresponding to the calm emotion category is performed, such as an ordinary greeting: a voice prompt is output asking the target user what they need.
Optionally, step 103 may specifically be implemented as follows:
according to the emotion category of the target user, corresponding voice prompt information is output to the target user.
Thanks to the current state of speech recognition technology, voice prompt information matching the identified emotion category can be output to the target user, so that the interaction with the target user takes place by voice; this is convenient to operate and gives a better user experience.
Further, if the emotion category is the calm emotion category, the voice prompt information is used to ask the target user what they need;
if the emotion category is the angry emotion category, the voice prompt information is used to ask the target user whether to play music;
if the emotion category is the sad emotion category, the voice prompt information is used to ask the target user whether they would like to be comforted;
if the emotion category is the happy emotion category, the voice prompt information is used to ask the target user whether to take a photo.
If the emotion category is the calm emotion category, an ordinary greeting interaction is performed: a voice prompt can be output that greets and asks the target user, for example: "Hello, how may I help you?"
If the emotion category is the angry emotion category, a soothing interaction is performed, with a voice prompt that actively asks the target user, for example: "Hi, you seem a little upset. Shall I play some gentle music to help you relax?"
If the emotion category is the sad emotion category, a comforting interaction is performed, with a voice prompt that actively asks the user, for example: "Hi, what's wrong? Would you like to hear a joke?"
If the emotion category is the happy emotion category, the interaction corresponding to the happy category is performed, with a voice prompt that actively asks the user, for example: "That smile looks great. Shall I take a photo for you?"
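The mapping from emotion category to voice prompt described above amounts to a lookup table. The sketch below is illustrative only: the prompt wording follows the examples in the text, and `speak` stands in for whatever text-to-speech output the robot provides:

```python
# Hypothetical prompt table; the wording follows the examples above.
VOICE_PROMPTS = {
    "calm":  "Hello, how may I help you?",
    "angry": "Hi, you seem a little upset. Shall I play some gentle music?",
    "sad":   "Hi, what's wrong? Would you like to hear a joke?",
    "happy": "That smile looks great. Shall I take a photo for you?",
}


def interact(emotion: str, speak) -> None:
    """Output the voice prompt matching the recognized emotion category."""
    speak(VOICE_PROMPTS.get(emotion, VOICE_PROMPTS["calm"]))
```

For example, `interact("happy", print)` prints the photo prompt; in the robot, `speak` would drive the loudspeaker instead.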
Further, the detected facial image, or the photo once it has been taken, can also be shown on a display screen. The user can also be asked whether to save it, for example by displaying a save prompt on the interactive interface for the user to choose from; after the user's click or voice operation, the robot can store or delete the photo according to the user's operation.
Optionally, after the corresponding voice prompt information is output, the following may also be performed:
receive the response information that the target user inputs to the robot for the voice prompt information;
if the response information is confirmation of the voice prompt information, perform the operation corresponding to the voice prompt information;
if the response information is a refusal, output reply voice information to the target user.
Further, if the response information is not received within a first preset duration, the target user is prompted that the interaction has ended.
Specifically, if a response from the target user is received, for example the target user answers by voice "good / sure / OK", the operation corresponding to the voice prompt information is performed. For instance, if the emotion category is the angry emotion category and the voice prompt asked the target user whether to play music, then after the target user's confirmation is received, the music-playing operation is performed.
If the response received from the target user is instead a voice answer such as "no / no need / better not", reply voice information can be output to the target user, for example the announcement: "All right, call me again if you need anything." Detection of facial images can then continue, so as to identify the target user's emotion category again. If no facial image is detected within a preset duration, detection stops. Because the robot's camera stays aligned with the target user's face throughout detection, the robot may by then have rotated by some angle relative to its original position; after detection stops, the robot can therefore return to its original position.
If no response from the target user is received for a long time, the target user is prompted that the interaction has ended, for example by playing an end tone. The first preset duration is, for example, 5 seconds.
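A minimal sketch of this response-handling step follows, assuming hypothetical `listen`, `speak`, and `perform` helpers for speech input, speech output, and the confirmed operation; the confirmation and refusal phrases are placeholders for whatever the speech recognizer returns:

```python
FIRST_PRESET_DURATION = 5.0  # seconds, per the example value above


def handle_response(listen, speak, perform):
    """listen(timeout) returns the recognized reply, or None on timeout."""
    reply = listen(timeout=FIRST_PRESET_DURATION)
    if reply is None:
        speak("Goodbye for now.")          # prompt that the interaction ends
    elif reply in ("ok", "sure", "good"):  # confirmation of the voice prompt
        perform()                          # e.g. start playing music
    else:                                  # refusal (simplified: anything else)
        speak("All right, call me again if you need anything.")
```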
Optionally, the method for the present embodiment further includes:
if another operation instruction issued by the target user is received while the interactive operation corresponding to the emotion category is being performed for the target user, performing the operation corresponding to that operation instruction.
That is, if during the interaction between the robot and the target user the robot receives another operation instruction issued by the target user, such as playing a radio programme, the operation corresponding to that operation instruction is performed immediately.
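One way to give such explicit user commands priority over the running emotion-driven interaction is to poll a command queue between sub-steps of the current operation, as sketched below; the queue, the step sequence, and `execute_command` are all hypothetical:

```python
import queue

# Filled by the speech front end as the user issues instructions.
command_queue: "queue.Queue[str]" = queue.Queue()


def run_interactive_operation(steps, execute_command):
    """Run the emotion-driven operation step by step, yielding to any
    explicit operation instruction the target user issues meanwhile."""
    for step in steps:
        try:
            command = command_queue.get_nowait()
        except queue.Empty:
            step()                     # continue the current interaction
        else:
            execute_command(command)   # the user's command takes priority
            break
```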
In the human-machine interaction method of this embodiment, a facial image of a target user is detected; the emotion category of the target user is identified according to the detected facial image; and, according to the emotion category of the target user, an interactive operation corresponding to the emotion category is performed for the target user. A different interactive operation can be made for each emotion category, so that the robot's emotional expression is consistent with the target user's, and the robot actively varies the feedback it gives according to the recognized emotion category; the robot is thus more human-like and more attentive, and the interactive experience is better.
Fig. 2 is a structural diagram of an embodiment of the human-machine interaction apparatus provided by the present invention. As shown in Fig. 2, the apparatus of this embodiment includes:
a detection module 201, configured to detect a facial image of a target user;
an identification module 202, configured to identify the emotion category of the target user according to the detected facial image;
a processing module 203, configured to perform, for the target user, an interactive operation corresponding to the emotion category according to the emotion category of the target user.
Optionally, the identification module 202 is specifically configured to:
identify, according to the facial image and using a deep learning algorithm model, the emotion category of the target user corresponding to the facial image.
Optionally, the apparatus further includes:
a training module, configured to obtain a number of sample facial images, each carrying annotation information of its emotion category,
and to train the deep learning algorithm model according to the sample facial images.
Optionally, the emotion category includes one or more of: a calm emotion category, an angry emotion category, a sad emotion category, and a happy emotion category.
Optionally, the processing module 203 is specifically configured to:
output corresponding voice prompt information to the target user according to the emotion category of the target user.
Optionally, if the emotion category is the calm emotion category, the voice prompt information is used to ask the target user what they need;
if the emotion category is the angry emotion category, the voice prompt information is used to ask the target user whether to play music;
if the emotion category is the sad emotion category, the voice prompt information is used to ask the target user whether they would like to be comforted;
if the emotion category is the happy emotion category, the voice prompt information is used to ask the target user whether to take a photo.
Optionally, the apparatus further includes:
a receiving module, configured to receive, after the corresponding voice prompt information is output to the target user, the response information that the target user inputs to the robot for the voice prompt information.
The processing module 203 is configured to perform the operation corresponding to the voice prompt information if the response information is confirmation of the voice prompt information;
and to output reply voice information to the target user if the response information is a refusal.
Optionally, the processing module 203 is further configured to:
prompt the target user that the interaction has ended if the response information is not received within a first preset duration.
Optionally, the detection module 201 is further configured to:
stop detecting facial images if no facial image of the target user is detected within a second preset duration.
Optionally, the processing module 203 is further configured to:
perform, if another operation instruction issued by the target user is received while the interactive operation corresponding to the emotion category is being performed for the target user, the operation corresponding to that operation instruction.
The apparatus of this embodiment can be used to carry out the technical solution of the method embodiment above; its implementation principle and technical effect are similar and are not repeated here.
In the human-machine interaction apparatus of this embodiment, the detection module detects a facial image of a target user; the identification module identifies the emotion category of the target user according to the detected facial image; and the processing module performs, for the target user, an interactive operation corresponding to the emotion category according to the emotion category of the target user. The processing module can make a different interactive operation for each emotion category, so that the robot's emotional expression is consistent with the target user's, and the robot actively varies the feedback it gives according to the recognized emotion category; the robot is thus more human-like and more attentive, and the interactive experience is better.
Fig. 3 is a structural diagram of the robot embodiment provided by the present invention. As shown in Fig. 3, the robot includes:
a processor 301, and a memory 302 for storing instructions executable by the processor 301;
the processor 301 is configured to perform, by executing the executable instructions, the corresponding method in the method embodiments above; for the specific implementation process, refer to the method embodiments above, which are not repeated here.
Optionally, the robot in the embodiment of the present invention may further include:
a camera 303, configured to detect facial images.
Optionally, it may further include an audio component (not shown), including a loudspeaker and a microphone.
Optionally, it may further include a display screen (not shown).
The components above may be connected by a bus.
In the robot of this embodiment, the processor can make a different interactive operation for each emotion category, so that the robot's emotional expression is consistent with the target user's, and the robot actively varies the feedback it gives according to the recognized emotion category; the robot is thus more human-like and more attentive, and the interactive experience is better.
An embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the corresponding method in the method embodiments above is implemented; for the specific implementation process, refer to the method embodiments above. The implementation principle and technical effect are similar and are not repeated here.
Those skilled in the art will readily conceive of other embodiments of the disclosure after considering the specification and practising the invention disclosed here. The present invention is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles and that include common knowledge or customary technical means in the art not disclosed herein. The specification and examples are to be regarded as illustrative only; the true scope and spirit of the disclosure are indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the disclosure is limited only by the appended claims.

Claims (10)

1. A human-machine interaction method, characterized by comprising:
detecting a facial image of a target user;
identifying the emotion category of the target user according to the detected facial image;
according to the emotion category of the target user, performing, for the target user, an interactive operation corresponding to the emotion category.
2. The method according to claim 1, characterized in that identifying the emotion category of the target user according to the detected facial image comprises:
identifying, according to the facial image and using a deep learning algorithm model, the emotion category of the target user corresponding to the facial image.
3. The method according to claim 2, characterized in that the deep learning algorithm model is generated as follows:
obtaining a number of sample facial images, each sample facial image comprising annotation information of its emotion category;
training the deep learning algorithm model according to the sample facial images.
4. The method according to any one of claims 1 to 3, characterized in that the emotion category comprises one or more of: a calm emotion category, an angry emotion category, a sad emotion category, and a happy emotion category.
5. The method according to claim 4, characterized in that performing, for the target user, an interactive operation corresponding to the emotion category according to the emotion category of the target user comprises:
outputting corresponding voice prompt information to the target user according to the emotion category of the target user.
6. The method according to claim 5, characterized in that:
if the emotion category is the calm emotion category, the voice prompt information is used to ask the target user what they need;
if the emotion category is the angry emotion category, the voice prompt information is used to ask the target user whether to play music;
if the emotion category is the sad emotion category, the voice prompt information is used to ask the target user whether they would like to be comforted;
if the emotion category is the happy emotion category, the voice prompt information is used to ask the target user whether to take a photo.
7. The method according to claim 5 or 6, characterized in that, after outputting the corresponding voice prompt information to the target user, the method further comprises:
receiving response information that the target user inputs to the robot for the voice prompt information;
if the response information is confirmation of the voice prompt information, performing the operation corresponding to the voice prompt information;
if the response information is a refusal, outputting reply voice information to the target user.
8. A human-machine interaction apparatus, characterized by comprising:
a detection module, configured to detect a facial image of a target user;
an identification module, configured to identify the emotion category of the target user according to the detected facial image;
a processing module, configured to perform, for the target user, an interactive operation corresponding to the emotion category according to the emotion category of the target user.
9. A robot, characterized by comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the steps of the method of any one of claims 1 to 7 by executing the executable instructions.
10. A computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the steps of the method of any one of claims 1 to 7 are implemented.
CN201810237056.7A 2018-03-21 2018-03-21 Man-machine interaction method, device, robot and storage medium Pending CN108733209A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810237056.7A CN108733209A (en) 2018-03-21 2018-03-21 Man-machine interaction method, device, robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810237056.7A CN108733209A (en) 2018-03-21 2018-03-21 Man-machine interaction method, device, robot and storage medium

Publications (1)

Publication Number Publication Date
CN108733209A true CN108733209A (en) 2018-11-02

Family

ID=63940872

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810237056.7A Pending CN108733209A (en) 2018-03-21 2018-03-21 Man-machine interaction method, device, robot and storage medium

Country Status (1)

Country Link
CN (1) CN108733209A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109189953A (en) * 2018-08-27 2019-01-11 维沃移动通信有限公司 A kind of selection method and device of multimedia file
CN109512441A (en) * 2018-12-29 2019-03-26 中山大学南方学院 Emotion identification method and device based on multiple information
CN109683709A (en) * 2018-12-17 2019-04-26 苏州思必驰信息科技有限公司 Man-machine interaction method and system based on Emotion identification
CN110228073A (en) * 2019-06-26 2019-09-13 郑州中业科技股份有限公司 Active response formula intelligent robot
CN110379234A (en) * 2019-07-23 2019-10-25 广东小天才科技有限公司 A kind of study coach method and device
CN111176430A (en) * 2018-11-13 2020-05-19 奇酷互联网络科技(深圳)有限公司 Interaction method of intelligent terminal, intelligent terminal and storage medium
CN111306692A (en) * 2019-10-18 2020-06-19 珠海格力电器股份有限公司 Human-computer interaction method and system of air conditioner, air conditioner and storage medium
CN111327772A (en) * 2020-02-25 2020-06-23 广州腾讯科技有限公司 Method, device, equipment and storage medium for automatic voice response processing
CN111507149A (en) * 2020-01-03 2020-08-07 京东方科技集团股份有限公司 Interaction method, device and equipment based on expression recognition
CN111741116A (en) * 2020-06-28 2020-10-02 海尔优家智能科技(北京)有限公司 Emotion interaction method and device, storage medium and electronic device
CN111782052A (en) * 2020-07-13 2020-10-16 湖北亿咖通科技有限公司 Man-machine interaction method in vehicle
CN111931897A (en) * 2020-06-30 2020-11-13 华为技术有限公司 Interaction method, interaction device, electronic equipment and storage medium
CN113158707A (en) * 2020-01-22 2021-07-23 青岛海尔电冰箱有限公司 Refrigerator interaction control method, refrigerator and computer readable storage medium
US11169743B2 (en) 2017-09-05 2021-11-09 Huawei Technologies Co., Ltd. Energy management method and apparatus for processing a request at a solid state drive cluster

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105549841A (en) * 2015-12-02 2016-05-04 小天才科技有限公司 Voice interaction method, device and equipment
CN106844750A (en) * 2017-02-16 2017-06-13 深圳追科技有限公司 Emotion is pacified in a kind of robot based on customer service man-machine interaction method and system
CN107358169A (en) * 2017-06-21 2017-11-17 厦门中控智慧信息技术有限公司 A kind of facial expression recognizing method and expression recognition device
US20170337476A1 (en) * 2016-05-18 2017-11-23 John C. Gordon Emotional/cognitive state presentation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105549841A (en) * 2015-12-02 2016-05-04 小天才科技有限公司 Voice interaction method, device and equipment
US20170337476A1 (en) * 2016-05-18 2017-11-23 John C. Gordon Emotional/cognitive state presentation
CN106844750A (en) * 2017-02-16 2017-06-13 深圳追科技有限公司 Emotion is pacified in a kind of robot based on customer service man-machine interaction method and system
CN107358169A (en) * 2017-06-21 2017-11-17 厦门中控智慧信息技术有限公司 A kind of facial expression recognizing method and expression recognition device

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11169743B2 (en) 2017-09-05 2021-11-09 Huawei Technologies Co., Ltd. Energy management method and apparatus for processing a request at a solid state drive cluster
CN109189953A (en) * 2018-08-27 2019-01-11 维沃移动通信有限公司 A kind of selection method and device of multimedia file
CN111176430A (en) * 2018-11-13 2020-05-19 奇酷互联网络科技(深圳)有限公司 Interaction method of intelligent terminal, intelligent terminal and storage medium
CN111176430B (en) * 2018-11-13 2023-10-13 奇酷互联网络科技(深圳)有限公司 Interaction method of intelligent terminal, intelligent terminal and storage medium
CN109683709A (en) * 2018-12-17 2019-04-26 苏州思必驰信息科技有限公司 Man-machine interaction method and system based on Emotion identification
CN109512441A (en) * 2018-12-29 2019-03-26 中山大学南方学院 Emotion identification method and device based on multiple information
CN110228073A (en) * 2019-06-26 2019-09-13 郑州中业科技股份有限公司 Active response formula intelligent robot
CN110379234A (en) * 2019-07-23 2019-10-25 广东小天才科技有限公司 A kind of study coach method and device
CN111306692A (en) * 2019-10-18 2020-06-19 珠海格力电器股份有限公司 Human-computer interaction method and system of air conditioner, air conditioner and storage medium
CN111507149A (en) * 2020-01-03 2020-08-07 京东方科技集团股份有限公司 Interaction method, device and equipment based on expression recognition
CN111507149B (en) * 2020-01-03 2023-10-27 京东方艺云(杭州)科技有限公司 Interaction method, device and equipment based on expression recognition
CN113158707A (en) * 2020-01-22 2021-07-23 青岛海尔电冰箱有限公司 Refrigerator interaction control method, refrigerator and computer readable storage medium
CN111327772B (en) * 2020-02-25 2021-09-17 广州腾讯科技有限公司 Method, device, equipment and storage medium for automatic voice response processing
CN111327772A (en) * 2020-02-25 2020-06-23 广州腾讯科技有限公司 Method, device, equipment and storage medium for automatic voice response processing
CN111741116A (en) * 2020-06-28 2020-10-02 海尔优家智能科技(北京)有限公司 Emotion interaction method and device, storage medium and electronic device
CN111741116B (en) * 2020-06-28 2023-08-22 海尔优家智能科技(北京)有限公司 Emotion interaction method and device, storage medium and electronic device
CN111931897A (en) * 2020-06-30 2020-11-13 华为技术有限公司 Interaction method, interaction device, electronic equipment and storage medium
WO2022001606A1 (en) * 2020-06-30 2022-01-06 华为技术有限公司 Interaction method and apparatus, and electronic device and storage medium
CN111782052A (en) * 2020-07-13 2020-10-16 湖北亿咖通科技有限公司 Man-machine interaction method in vehicle
CN111782052B (en) * 2020-07-13 2021-11-26 湖北亿咖通科技有限公司 Man-machine interaction method in vehicle

Similar Documents

Publication Publication Date Title
CN108733209A (en) Man-machine interaction method, device, robot and storage medium
US10893236B2 (en) System and method for providing virtual interpersonal communication
US11548147B2 (en) Method and device for robot interactions
JP6902683B2 (en) Virtual robot interaction methods, devices, storage media and electronic devices
CN107340865B (en) Multi-modal virtual robot interaction method and system
US9711056B1 (en) Apparatus, method, and system of building and processing personal emotion-based computer readable cognitive sensory memory and cognitive insights for enhancing memorization and decision making skills
CN109635616B (en) Interaction method and device
CN109036405A (en) Voice interactive method, device, equipment and storage medium
AU2017228574A1 (en) Apparatus and methods for providing a persistent companion device
US20210280172A1 (en) Voice Response Method and Device, and Smart Device
WO2018006470A1 (en) Artificial intelligence processing method and device
KR20100001928A (en) Service apparatus and method based on emotional recognition
CN107480766B (en) Method and system for content generation for multi-modal virtual robots
CN111027425A (en) Intelligent expression synthesis feedback interaction system and method
CN109101663A (en) A kind of robot conversational system Internet-based
CN111508491A (en) Intelligent voice interaction equipment based on deep learning
KR20210070029A (en) Device, method, and program for enhancing output content through iterative generation
CN113703585A (en) Interaction method, interaction device, electronic equipment and storage medium
Miksik et al. Building proactive voice assistants: When and how (not) to interact
CN106649712A (en) Method and device for inputting expression information
US11819996B2 (en) Expression feedback method and smart robot
US20200143235A1 (en) System and method for providing smart objects virtual communication
CN108073272A (en) A kind of control method for playing back and device for smart machine
WO2021084810A1 (en) Information processing device, information processing method, and artificial intelligence model manufacturing method
Dai et al. Group Interaction Analysis in Dynamic Context $^{\ast} $

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181102