CN107278302B - Robot interaction method and interaction robot - Google Patents


Info

Publication number
CN107278302B
Authority
CN
China
Prior art keywords
user
information
module
supplemented
voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201780000646.1A
Other languages
Chinese (zh)
Other versions
CN107278302A (en)
Inventor
张涛
黄晓庆
骆磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Shanghai Robotics Co Ltd
Original Assignee
Shenzhen Qianhaida Yunyun Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Qianhaida Yunyun Intelligent Technology Co ltd filed Critical Shenzhen Qianhaida Yunyun Intelligent Technology Co ltd
Publication of CN107278302A publication Critical patent/CN107278302A/en
Application granted granted Critical
Publication of CN107278302B publication Critical patent/CN107278302B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/903Querying
    • G06F16/9032Query formulation
    • G06F16/90332Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/008Artificial life, i.e. computing arrangements simulating life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Robotics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A robot interaction method comprising the following user information collection steps: acquiring the field information of the user and calculating user interaction parameters according to the field information; when the user interaction parameters meet the requirement, querying the user characteristic information set to determine items to be supplemented, determining relevant communication scene information from a communication scene library according to the items to be supplemented, and actively asking the user a question by voice and/or image based on the communication scene information relevant to the items to be supplemented; and acquiring voice and/or image feedback information from the user, extracting the content of the feedback associated with the items to be supplemented, and storing that content in the user characteristic information set.

Description

Robot interaction method and interaction robot
Technical Field
The invention relates to the field of robot interaction, in particular to a robot interaction method and an interactive robot.
Background
With the development of network transmission and big data technology and the improvement of hardware processing capability, more and more robots have entered family life. The current human-machine interaction mode is basically that a person asks and the machine answers; although the answering modes are varied and increasingly intelligent, most robots passively receive users' questions. No deep connection is established between the robot and the user.
For example, chinese patent application No. 201610970633.4 discloses a robot human-machine interaction method and system, the robot human-machine interaction system includes: the first acquisition module is used for acquiring laser signals; the second acquisition module is used for acquiring a voice signal; the first execution module is used for exciting different preset actions according to different laser receivers corresponding to the laser signals; and the second execution module is used for executing the corresponding preset action and/or the corresponding preset voice according to the voice signal.
At present, human-computer interaction is basically a mode in which the person asks and the machine answers, and the robot mostly receives the user's questions passively. This passive mode means that the user information the robot lacks can only be supplied by the user through repeated questioning within a single interaction, at the moment of actual use. Some robot tasks, such as booking an air ticket or a train ticket, may require as many as ten information items to complete the intended task. Acquiring all of these requirements in one question-and-answer interaction gives a poor user experience; the usable information the robot has stored falls far short of the user's needs, and the user is likely to give up on voice interaction and switch back to touch-screen control. Therefore, how to collect user information effectively without harming the user experience is an urgent problem to be solved.
Therefore, the robot interaction method in the prior art still needs to be improved.
Disclosure of Invention
The invention provides a robot interaction method and a robot in which, at suitable moments during daily operation, the robot actively interacts with the user through various interaction modes to collect the user's information and habitual preferences, continuously self-perfects a user characteristic information set to support the user's subsequent question requests, and thereby provides the response closest to a question request with the fewest voice question-and-answer rounds.
In a first aspect, an embodiment of the present invention provides a robot interaction method, including the following user information collection steps:
acquiring the field information of the user, and calculating user interaction parameters according to the field information;
when the user interaction parameters meet the requirement, querying the user characteristic information set to determine items to be supplemented, determining relevant communication scene information from a communication scene library according to the items to be supplemented, and actively asking the user a question by voice and/or image based on the communication scene information relevant to the items to be supplemented;
and acquiring voice and/or image feedback information from the user, extracting the content of the feedback associated with the items to be supplemented, and storing that content in the user characteristic information set.
In a second aspect, an embodiment of the present invention further provides an interactive robot, comprising: an audio acquisition module, an audio recognition module, an image acquisition module and an image recognition module, and further comprising a question-answering module and a user information perfecting module:
the question-answering module comprises a communication scene library and a characteristic information set;
the question-answering module is used for acquiring the field information of the user and calculating user interaction parameters according to the field information;
the information perfecting module is used for repeatedly determining items to be supplemented and perfecting the characteristic information set; when the user interaction parameters meet the requirement, it queries the user characteristic information set to determine the items to be supplemented, determines related communication scene information from the communication scene library according to the items to be supplemented, and actively questions the user through voice and/or images based on that communication scene information; it then acquires the user's voice and/or image feedback, extracts the content associated with the items to be supplemented, and stores it in the user characteristic information set.
In a third aspect, an embodiment of the present invention further provides an electronic device, including:
at least one processor; and
a memory, a communication component, an audio data collector and a video data collector, the memory being communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, invoke data of the audio data collector and the video data collector to establish a connection with the cloud server through the communication component, so that the at least one processor can execute the method.
In a fourth aspect, the present invention also provides a non-transitory computer-readable storage medium storing computer-executable instructions for causing a computer to perform the method described above.
In a fifth aspect, the present invention also provides a computer program product, which includes a computer program stored on a non-volatile computer-readable storage medium, the computer program including program instructions, which, when executed by a computer, cause the computer to perform the method as described above.
The robot interaction method and robot provided by the embodiments of the invention have the beneficial effect that the robot can automatically select suitable moments during daily operation and actively interact with the user, through the various interaction modes preset in the communication scene library, to collect the user's information and habitual preferences. The user characteristic information set is continuously self-perfected to support the user's subsequent question requests, so that the response closest to a question request is provided with the fewest voice question-and-answer rounds.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements and which are not to scale unless otherwise specified.
FIG. 1 is a system block diagram of an interactive robot provided by an embodiment of the present invention;
FIG. 2 is a block diagram of a user information improvement module of the interactive robot according to an embodiment of the present invention;
FIG. 3 is a main flowchart of a robot interaction method according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of a robot responding to a user according to an embodiment of the present invention;
FIG. 5 is an implementation flowchart of the robot interaction method according to an embodiment of the present invention, in which the items to be supplemented are user habits and preferences;
FIG. 6 is an implementation flowchart of a robot interaction method according to an embodiment of the present invention, in which the item to be supplemented is a psychological attribute;
FIG. 7 is an implementation flowchart of a robot interaction method according to an embodiment of the present invention, in which the item to be supplemented is an associated person;
FIG. 8 is a schematic hardware structure diagram of an electronic device for the robot interaction method according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a robot interaction system framework provided by an embodiment of the present invention; and
FIG. 10 is an exemplary diagram of a robot interaction method according to an embodiment of the present invention, in which the item to be supplemented is an associated person.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
According to the robot interaction method and robot provided by the embodiments of the invention, the robot actively interacts with the person in daily life to collect user information, especially the information that is missing or needs confirmation in the current user characteristic information set, thereby accelerating the completion of user attributes. The invention adopts a targeted, active mode of information inquiry and acquisition to efficiently perfect the user's characteristic information set, establish a deep relationship between robot and user, and provide a faster and more personal user experience for subsequent human-computer interaction.
Referring to fig. 1, a block diagram of the interactive robot is shown.
The interactive robot 10 according to the present embodiment includes a processing unit 12, an audio acquisition module 20, an audio recognition module 22, an image acquisition module 30, an image recognition module 32, and a transmitting and receiving unit 810. The interactive robot further includes a question-answering module 40, a user information perfecting module 50, and a response module 60.
The interactive robot 10 is wirelessly connected to the cloud server 100, and sends messages to and receives data from the cloud server 100. In an embodiment, the user's mobile terminal is also connected to the cloud server 100 and associated with the robot owned by the user, so that the user can exchange data and information with the robot at home through the mobile terminal while away from home.
The question-answering module 40 includes a communication scenario library 42 and establishes a feature information set corresponding to the user.
The question-answering module 40 acquires the field information of the user and calculates the user interaction parameters according to the field information;
the information refinement module 50 repeatedly determines the item to be supplemented and refines the characteristic information base. When the interaction parameters meet the requirements, the information completing module 50 queries and determines the items to be supplemented in the feature information set, and determines the communication scene information from the communication scene library 42 according to the items to be supplemented. The robot can issue questions to the user according to the communication scene information, wherein the communication scene information comprises the question scenes and the topics related to the items to be supplemented. For example, the item to be supplemented is a diet preference, and the questioning scene can be determined as a family scene of breakfast just getting up according to the current time such as 07:00, the number of people who exchange is one, and the questioning subject is weather or breakfast; or still taking the item to be supplemented as the dietary preference for example, according to the current environmental parameters, such as 13:00 noon and 35 ℃ temperature, the questioning scene can be determined as a noon family scene, the number of people to communicate one, and the questioning subject is weather experience or favorite beverage and the like according to the current temperature.
The user is actively questioned by voice and/or image based on the relevant communication scene information. The information perfecting module 50 obtains the user's voice and/or image feedback, extracts the content associated with the item to be supplemented, and stores it in the feature information set. The site information includes time, place, temperature, user voice information, user video information, and other communication conditions and environmental parameters set by the user. The interaction parameter indicates how suitable the moment is for human-computer interaction; for example, the parameter ranges from 0 to 10, a value above 5 suggests interaction, and 10 marks the best interaction opportunity.
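A minimal sketch of how such a 0-10 interaction parameter could be computed from the site information follows. The individual weights and conditions are assumptions for illustration; the patent only fixes the 0-10 range and the above-5 threshold:

```python
from dataclasses import dataclass

@dataclass
class SiteInfo:
    hour: int            # local time of day, 0-23
    temperature: float   # degrees Celsius
    user_speaking: bool  # detected via the audio modules
    user_present: bool   # detected via the image modules

def interaction_parameter(site: SiteInfo) -> int:
    """Score 0-10 how suitable the moment is for proactive interaction.
    Values above 5 suggest interacting; 10 is the best opportunity."""
    if not site.user_present:
        return 0
    score = 5
    if not site.user_speaking:                         # do not interrupt the user
        score += 2
    if 7 <= site.hour <= 9 or 18 <= site.hour <= 21:   # relaxed home hours
        score += 2
    if 18 <= site.temperature <= 26:                   # comfortable environment
        score += 1
    return min(score, 10)
```

For instance, a quiet user at home at 08:00 in a 22 °C room scores 10 (the best opportunity), while an absent user scores 0.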
In this embodiment, the service provider may update the communication scenario library periodically through the cloud server 100.
Referring to fig. 9, a framework diagram of a robot interaction system is shown. The robot interaction system includes the cloud server 100 and a plurality of robots 10 connected to it. Each robot 10 can be bound to at least one user, and each user can bind at least one mobile terminal 15. For example, robot 10-1 binds two users A1 and A2, user A1 binds mobile terminal 15-1, robot 10-2 binds one user B, and user B binds mobile terminal 15-2. The robot 10 can upgrade its system and update the communication scene library 42 through the cloud server 100.
The response module 60 answers the user's question requests based on the continuously self-perfecting user characteristic information set, so that the response closest to the request is provided with the fewest voice question-and-answer rounds. The response module 60 receives a request initiated by the user through voice and/or image, extracts associated content from the continuously improved feature information set according to the request, and responds to the user's request after pre-judging the associated content.
In a specific implementation, the response module 60 extracts matching keywords from the voice and/or image request, builds a classification relation table over the feature information set, extracts from the set the associated content with the closest classification relation to the matching keywords, and determines communication scene information from the communication scene library according to that content to respond to the user's request.
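The closest-classification lookup could be sketched as below. The dictionary layout and the keyword-overlap matching rule are assumptions; the patent does not specify how "closest classification relation" is computed:

```python
def respond(request_keywords, feature_set, classification_table):
    """Return the feature-set entry whose classification keywords overlap
    most with the request, or None when nothing is associated.

    feature_set:          item key -> stored user content
    classification_table: item key -> list of classification keywords
    """
    def overlap(item_key):
        # count shared keywords between the request and this item's class
        return len(set(classification_table.get(item_key, ())) & set(request_keywords))

    best = max(feature_set, key=overlap, default=None)
    if best is None or overlap(best) == 0:
        return None  # no associated content; fall back to a generic reply
    return feature_set[best]
```

For example, a request containing "eat" and "lunch" would retrieve the stored dietary preference rather than the color preference.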
The items to be supplemented may be any content items related to the user's attributes. The following describes implementations that update and refine the items to be supplemented from three aspects: user habits and preferences, psychological attributes, and associated persons.
Referring to fig. 2, the information perfecting module 50 includes an insertion module 51, a test module 53, an extraction module 55, and a judgment module 57. When the item to be supplemented is a user habit or preference, the insertion module 51 inserts a question about it into the question-answering module's chit-chat dialogue. When the item to be supplemented is a psychological attribute, the test module 53 obtains a psychological test from the cloud server 100 and completes it locally. When the item to be supplemented is associated-person information, the extraction module 55 obtains the associated persons appearing in the user's voice information by speech recognition and extracts the associated persons appearing in video information by face recognition. The judgment module 57 judges the relevance of the associated persons.
When the item to be supplemented is a user habit or preference, the information perfecting module 50 selects a chit-chat scene and topic, and the insertion module 51 inserts a corresponding question into the chit-chat conversation. The insertion module 51 then obtains the user's feedback, extracts the content related to the habit or preference, and stores it in the feature information set.
As an embodiment of supplementing user habits and preferences, the audio acquisition module 20 is a microphone that collects sound around the robot, and the image acquisition module 30 is a camera that captures images. For example, after the robot has been idle for a period of time, or after the environmental parameters suggest that the user is probably not busy (for instance, the history shows the user often reads at this time of evening), the robot 10 or the user's mobile terminal 15 can actively initiate a dialogue to collect the items that are missing or yet to be confirmed in the stored user attributes. This may take, but is not limited to, the following forms:
based on the chat scenario and topic, the robot can directly initiate a session, such as:
"host I want you well, you do nothing of I for a long time"
"what are you busy for the owner? Need me help you how? "
"will you be a severe haze in tomorrow, do not help you see the purifier? "
And the like.
The insertion module 51 interleaves the conversation with collection of user information, especially the items to be supplemented that are missing or yet to be confirmed in the user attributes, such as:
"what color you like? "
"do you like to eat chicken or beef? "
"do not want to take lunch break? "
And the like, and some habits and preferences of the user can be obtained in a natural way without causing the user to feel the objections.
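This question-insertion step can be sketched as follows. The feature-set layout (missing items stored as `None`) and the question bank are hypothetical structures for illustration:

```python
import random

def pick_supplement_question(feature_set, question_bank):
    """Choose a chit-chat question targeting a missing or unconfirmed item.

    feature_set:   item key -> stored value, or None if still missing
    question_bank: item key -> natural-language question for that item
    Returns (item_key, question) or None when nothing needs supplementing.
    """
    missing = [k for k in feature_set
               if feature_set[k] is None and k in question_bank]
    if not missing:
        return None
    item = random.choice(missing)  # vary which gap gets probed this time
    return item, question_bank[item]
```

The user's answer would then be extracted and written back under the chosen item key, filling the gap for next time.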
From the perspective of user experience, a user's attributes are never complete: no matter how much to-be-supplemented information has been collected, some information will still be needed in conversation.
As an embodiment of improving the user's psychological attributes, the item to be supplemented is a psychological attribute, and the information perfecting module 50 further includes a test module 53. When a psychological attribute needs to be improved, the test module 53 requests a psychological test from the cloud server 100 and receives test questions selected by the server for the user's age, sex, and experience. The test module 53 presents the questions to the user through a display interface provided by the robot, such as a touch display screen. The user can complete the test manually on the touch screen or through voice interaction. The information perfecting module 50 sends the completed test to the cloud server 100, which analyzes the returned answers and sends the analysis result back to the requesting robot. The test module 53 receives the result and stores it in the feature information set.
As an embodiment of improving psychological attributes, the robot 10 downloads psychological test questions matched to the user's attributes from the cloud server 100 and actively presents them to the corresponding user through the touch display screen; the test process can also be made vivid in combination with voice, which may take, but is not limited to, the following forms:
"is this test said to be correct, do not want to try? "
'Wa' is woollen like my choice "
' o? How can you choose this? Not your style at all! "
' good score and high Wo, good Chongbai you! "
And the like. After the psychological question-and-answer is finished, the robot 10 submits the test answers to the cloud server 100 for analysis; the server returns the user's psychological test analysis result to the robot terminal, where it is stored under the corresponding attribute of the user characteristic information set. During the test, while the user is having fun, the robot collects personality and preference information related to the user's psychological attributes.
As an embodiment of perfecting associated persons, that is, when the item to be supplemented is associated-person information, the extraction module 55 obtains the user's daily voice and video information, extracts the associated persons appearing in each, and stores them in the feature information set; this is the first step of building associated-person profiles. The extraction module 55 continuously obtains new daily voice and video information, extracts newly appearing associated persons, and counts the occurrences of those already stored.
After identifying and saving associated persons many times, the judgment module 57 judges the relevance of the existing associated persons to the user. In a specific implementation, the relevance is based on the statistical number of occurrences of each associated person.
The information perfecting module 50 also includes a pre-judging module 59. The pre-judging module 59 counts the occurrences of every identified associated person and compares each count with a set threshold, to judge whether that person's characteristic information needs to be perfected.
When a question is to be asked about an associated person whose relevance exceeds the set threshold, the information perfecting module 50 acquires the user's current site information and generates a user interaction parameter; when the interaction parameter is suitable, it determines communication scene information from the communication scene library for the locked associated person and actively questions the user, combining voice and images, to perfect that person's characteristic information.
The site information includes time, place, temperature, user voice information, user video information, and other communication conditions and environmental parameters set by the user.
As an embodiment of perfecting associated persons, a specific scenario is interacting with the user through photos or other private data. For example, when the user takes a photo, the AI program in the robot terminal searches the photo album and recognizes each person in all of its photos through the image acquisition module 30 and the image recognition module 32, by face or image recognition; the pre-judging module 59 then counts the number of times each person appears. Suppose the result of one search is as follows:
character A is known 75 times (who character A was previously known by this or other means)
Unknown character B30 times
Character C is known 22 times (who character C was previously known by this or other means)
Unknown figure D3 times
Unknown character E1 times
If the pre-judging module 59 finds that the occurrence count of one or more unknown persons exceeds a set threshold, for example 20 times, each such person is set as a person to be asked about; here that is unknown person B. The question-answering module 40 then obtains the user's environmental parameters, judges an appropriate moment (for example, when the user is idle, when the user is browsing photos, or the next time the person to be asked about is photographed), selects a photo containing that person, and actively starts a conversation, for example:
"this beauty is good and beautiful, really star-like! Exactly who is this? "
"who is the handsome guy beside you? Family wants to know the relation between them "
Meanwhile, the screen displays the photo, as shown in fig. 10.
From the user's answer, content such as the name, the relationship (wife, child, parent and the like), and the face recognition characteristic value is extracted and saved to the user's corresponding feature information set.
For unknown persons below the occurrence threshold, such as unknown persons D and E in the example, the degree of association with the user is probably small, so no active questioning is made. However, if the user actively mentions such a person during conversation, that person is calibrated by the same method as above and added to the user's feature information set.
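The pre-judging module's frequency count in this example can be sketched as follows, where `FREQUENCY_THRESHOLD` mirrors the 20-occurrence threshold mentioned above (the function and data shapes are illustrative assumptions):

```python
from collections import Counter

FREQUENCY_THRESHOLD = 20  # occurrences required before asking about a person

def persons_to_ask(recognized, known):
    """Count face-recognition hits per person identifier and return the
    unknown persons whose count exceeds the threshold, most frequent first.

    recognized: iterable of person identifiers, one per recognition hit
    known:      set of identifiers already profiled in the feature set
    """
    counts = Counter(recognized)
    return [person for person, n in counts.most_common()
            if person not in known and n > FREQUENCY_THRESHOLD]
```

With the counts from the example (A 75, B 30, C 22, D 3, E 1, with A and C known), only unknown person B is selected for active questioning.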
Through repeated identification of, and questioning about, the people in photos, the characteristic and association information of many persons related to the user can be determined, deepening the machine's understanding of the user; the stored association relations of the invention can also support further application scenarios. In an embodiment of the invention, in a scene that requires photos, the user only needs to state the requirement by voice, and the interactive robot can directly extract photos from the user's feature information set and crop out the most suitable person image to submit, saving communication time and improving the user's human-computer interaction experience and working efficiency.
Referring to fig. 3, an embodiment of the present invention further relates to a robot interaction method and a robot information collection method.
The robot information collection method comprises the following user information collection steps:
Step one: establishing a communication scene library and a characteristic information set corresponding to the user;
step two: acquiring the field information of the user, and calculating user interaction parameters according to the field information; the site information includes time, place, temperature, user voice information, user video information, and other communication conditions and environmental parameters set by the user. The interaction parameter indicates the suitability degree of human-computer interaction, for example, the range of the interaction parameter is 0-10, the interaction parameter value is more than 5, namely, the interaction is suggested, and when the interaction parameter is 10, the optimal interaction opportunity is obtained;
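The scoring in step two could be sketched as follows. The field names, weights, and scoring rules here are illustrative assumptions; the patent only specifies the 0-10 scale and the kinds of site information considered:

```python
# Hypothetical sketch of the interaction-parameter computation in step two.
# The site_info keys, weights, and veto rule are illustrative assumptions.

def interaction_parameter(site_info: dict) -> int:
    """Score how suitable the current moment is for interaction, on a 0-10 scale."""
    # A user-configured communication condition (e.g. do-not-disturb) vetoes interaction.
    if site_info.get("do_not_disturb"):
        return 0
    score = 0
    # Time of day: avoid night-time questioning (assumed rule).
    if 8 <= site_info.get("hour", 0) <= 21:
        score += 3
    # The user is present: speaking and/or visible on camera.
    if site_info.get("user_voice_detected"):
        score += 3
    if site_info.get("user_visible"):
        score += 2
    # A comfortable ambient temperature contributes a small amount.
    if 18 <= site_info.get("temperature_c", 20) <= 28:
        score += 2
    return min(score, 10)

param = interaction_parameter({
    "hour": 19, "user_voice_detected": True,
    "user_visible": True, "temperature_c": 22,
})
# A value above 5 would suggest that interaction is appropriate.
```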
step three: when the interaction parameter meets the requirement, querying and determining an item to be supplemented in the feature information set, determining related communication scene information from the communication scene library according to the item to be supplemented, and actively questioning the user through voice and/or images based on the related communication scene information;
step four: acquiring voice and/or image feedback information of the user, extracting related content associated with the item to be supplemented from the feedback information, and saving the related content to the feature information set. Specifically, related content associated with the item to be supplemented is extracted from the voice feedback information and from the video feedback information, and the relevance between the related content and the item to be supplemented is judged. To ensure matching accuracy, an association classification table of the item to be supplemented is established; feedback content is identified from the voice and/or image feedback information; whether the user's feedback content is on-topic and can be stored is then determined according to the association classification table of the item to be supplemented. If the related content extracted from the feedback information is associated with the item to be supplemented, the related content is saved to the feature information set.
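The on-topic check of step four could be sketched as follows. The association classification table here maps an item to keywords whose presence marks feedback as relevant; the table contents and the keyword test are illustrative assumptions, not the patent's actual matching method:

```python
# Hypothetical sketch of step four's on-topic check via an association
# classification table. Items, keywords, and the matching rule are assumptions.
from typing import Optional

ASSOCIATION_TABLE = {
    "birthday": ["birthday", "born"],
    "hometown": ["hometown", "grew up", "city"],
}

def extract_relevant(item: str, feedback_text: str) -> Optional[str]:
    """Return the feedback if it is on-topic for the item, else None."""
    text = feedback_text.lower()
    if any(k in text for k in ASSOCIATION_TABLE.get(item, [])):
        return feedback_text   # associated: save to the feature information set
    return None                # off-topic: discard

feature_set = {}
reply = extract_relevant("hometown", "I grew up in a small coastal city.")
if reply is not None:
    feature_set["hometown"] = reply
```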
step five: determining the next item to be supplemented, and repeating step two to step four. In this step, the robot repeatedly judges whether the user's current interaction parameter meets the preset threshold, and finds a moment suitable for questioning and communication to complete the next user attribute information that remains to be supplemented.
Preferably, the robot 10 updates the communication scene library periodically, or the cloud server 100 updates the communication scene library periodically, to provide a refined communication experience.
Referring to fig. 4, the robot interaction method means that the robot questions and interacts with the user based on the continuously improved user feature information set; this part of the work is performed by the response module 60. The method mainly comprises the following steps: receiving a request initiated by the user through voice and/or images; extracting associated content from the continuously improved feature information set according to the request, and responding to the user's request after prejudging the associated content.
In an embodiment, the method comprises the following steps:
step 202: the response module 60 establishes a feature information classification relation table of the feature information set;
step 204: receiving a voice and/or image request of the user, and extracting matching keywords from the voice and/or image request;
step 206: extracting the associated content with the closest classification relation from the continuously improved feature information set according to the matching keywords;
step 208: determining communication scene information from the communication scene library according to the associated content, and then responding to the user's request.
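Steps 202 to 208 could be sketched as follows. The classification relation table, the feature set contents, and the word-overlap matching rule are illustrative assumptions, not the patent's specification:

```python
# Hypothetical sketch of steps 202-208: a classification relation table groups
# feature items, and a request's keywords select the closest associated content.
# Table structure, data, and keyword extraction are illustrative assumptions.

CLASSIFICATION_TABLE = {
    "family": {"wife", "child", "parent", "photo"},
    "preferences": {"food", "music", "color"},
}

FEATURE_SET = {
    "family": {"wife": "Alice", "child": "Bob"},
    "preferences": {"food": "noodles"},
}

def respond(request_text: str):
    """Pick the category whose keywords overlap the request most, then answer."""
    words = set(request_text.lower().split())
    best = max(CLASSIFICATION_TABLE, key=lambda c: len(words & CLASSIFICATION_TABLE[c]))
    if not words & CLASSIFICATION_TABLE[best]:
        return None               # no associated content found
    return FEATURE_SET[best]

result = respond("show me a photo of my child")
```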
Referring to fig. 5, when the item to be supplemented is the habit and preference of the user, the processing procedure is as follows:
step 302: when the items to be supplemented are user habits and preferences, selecting a chatting scene and topic;
step 304: inserting questions about user habits and preferences into the chat session;
step 306: acquiring feedback information of the user, extracting related content associated with user habits and preferences from the feedback information, and saving the related content to the feature information set.
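Steps 302 to 306 could be sketched as follows. The pending-item list, prompt wording, and the assumption that a reply maps directly to the last asked item are all illustrative simplifications:

```python
# Hypothetical sketch of steps 302-306: during a chat, a question about a
# missing habit/preference item is inserted, and the user's reply is saved.
# Items, prompts, and the one-question-at-a-time flow are assumptions.

PENDING_ITEMS = ["favorite_food", "favorite_music"]

def chat_turn(pending: list, feature_set: dict, user_reply: str = None) -> str:
    """Ask about the next missing preference; store the user's reply if given."""
    if user_reply is not None and pending:
        item = pending.pop(0)
        feature_set[item] = user_reply            # step 306: save to feature set
        return f"Got it, I'll remember your {item.replace('_', ' ')}."
    if pending:                                    # step 304: insert the question
        return f"By the way, what is your {pending[0].replace('_', ' ')}?"
    return "Nothing left to ask."

features = {}
question = chat_turn(PENDING_ITEMS, features)              # robot asks
ack = chat_turn(PENDING_ITEMS, features, "spicy noodles")  # user's answer stored
```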
Referring to fig. 6, when the item to be supplemented is a psychological attribute, the robot may store the psychological test question library locally, or may obtain psychological test questions from the cloud server. The psychological test questions are the most targeted ones selected according to the user's age, sex and experience. After the robot queries locally or receives the psychological test questions, it completes them through a display interface or asks the user by voice to complete the psychological test. An embodiment in which the cloud server analyzes and assigns the psychological test questions is described below.
step 402: when the item to be supplemented is a psychological attribute, acquiring psychological test questions from the cloud server, and completing the psychological test questions through a display interface or asking the user by voice to complete the psychological test;
step 404: sending the completed psychological test to the cloud server;
step 406: receiving an analysis result returned by the cloud server for the psychological test, and saving the analysis result to the feature information set.
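The cloud round trip of steps 402 to 406 could be sketched as follows. The `CloudServer` stub, its question selection, and its scoring rule are illustrative assumptions standing in for the real server-side analysis:

```python
# Hypothetical sketch of steps 402-406: fetch questions from the cloud, collect
# answers, send them back, and store the returned analysis. The CloudServer
# stub and its scoring rule are illustrative assumptions.

class CloudServer:
    """Stand-in for the cloud server's question assignment and analysis."""
    def get_questions(self, age: int, sex: str) -> list:
        # A real server would pick the most targeted questions for the user.
        return ["Do you prefer quiet evenings?", "Do you enjoy large gatherings?"]

    def analyze(self, answers: list) -> str:
        yes_count = sum(1 for a in answers if a == "yes")
        return "introvert-leaning" if yes_count <= 1 else "extrovert-leaning"

def run_psych_test(server: CloudServer, user: dict, reply_fn) -> str:
    questions = server.get_questions(user["age"], user["sex"])   # step 402
    answers = [reply_fn(q) for q in questions]   # ask via display or voice
    analysis = server.analyze(answers)           # steps 404-406: round trip
    user.setdefault("features", {})["psych_profile"] = analysis
    return analysis

user = {"age": 30, "sex": "F"}
result = run_psych_test(CloudServer(), user, lambda q: "yes")
```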
Referring to fig. 7, when the item to be supplemented is associated-person information, the robot identifies the associated persons; determining an associated person further comprises:
step 502: extracting associated persons appearing in the voice information and saving them to the feature information set; extracting associated persons appearing in the video information and saving them to the feature information set;
step 504: judging the relevance of each associated person;
counting the occurrence times of all identified associated persons;
scanning the occurrence count of each associated person, comparing it with a set count threshold, and judging whether the feature information of the current associated person needs to be completed;
step 506: questioning about associated persons whose relevance exceeds the set threshold;
the step of questioning about an associated person comprises acquiring the site information of the user and generating a user interaction parameter; when the interaction parameter is suitable, determining communication scene information from the communication scene library according to the locked associated person; and
step 508: actively questioning the user by combining voice and images to complete the feature information of the locked associated person.
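The counting and threshold comparison of steps 502 to 506 could be sketched as follows. The names and the threshold value are illustrative assumptions:

```python
# Hypothetical sketch of steps 502-506: associated persons mentioned in speech
# or seen in video are counted, and only those whose occurrence count reaches
# a set threshold are locked for active questioning. Data are illustrative.
from collections import Counter

def persons_to_question(voice_mentions: list, video_appearances: list,
                        threshold: int = 3) -> list:
    """Count every identified associated person and keep the frequent ones."""
    counts = Counter(voice_mentions) + Counter(video_appearances)
    # steps 504/506: relevance judged by occurrence count vs. the set threshold
    return [name for name, n in counts.items() if n >= threshold]

voice = ["A", "B", "A", "C", "A"]   # persons extracted from voice information
video = ["A", "B", "B", "D"]        # persons extracted from video information
locked = persons_to_question(voice, video)   # persons whose info will be completed
```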
According to the robot interaction method and the robot provided by the embodiment of the present invention, the robot actively interacts with people in daily life to collect user information, especially information that is missing from or needs to be confirmed in the current user feature information set, thereby accelerating the completion of user attributes. Based on the mode in which the robot actively questions and acquires information, the present invention efficiently updates and improves the user's feature information set, establishes a deep relationship between the robot and the user, and provides a faster and more intimate user experience for subsequent human-computer interaction.
The robot provided by the embodiment of the present invention differs from the working mode of a traditional robot, in which a human mainly initiates the conversation and the machine answers. In this embodiment, the robot, or a mobile terminal bound to it, can select a suitable communication opportunity during daily operation, actively interact with the user, and collect the user's various feature information and habitual preferences, especially feature information missing from or needing confirmation in the user attributes, through various question-and-answer modes such as chatting, photo discussion combined with image recognition/face recognition, and psychological test questions, thereby achieving self-accelerated improvement of the user feature information set.
Based on the continuously self-improving user feature information set, the robot realizes the feedback processing that best fits the user's needs with minimal communication input, reduces as far as possible the number of voice question-and-answer rounds or the amount of information the user must fill in, provides more intelligent and attentive service, and takes the user experience to a new level.
Fig. 8 is a schematic diagram of a hardware structure of an electronic device 600 of a robot interaction method according to an embodiment of the present invention, where as shown in fig. 8, the electronic device 600 includes:
one or more processors 610, a memory 620, an audio data collector 630, a video data collector 640, a communication component 650, and a display unit 660; one processor 610 is taken as an example in fig. 8. The output of the audio data collector is the input of the audio recognition module, and the output of the video data collector is the input of the video recognition module. The memory 620 stores instructions executable by the at least one processor 610; when executed by the at least one processor, the instructions invoke data of the audio data collector and the video data collector and establish a connection with a cloud server through the communication component 650, so that the at least one processor can execute the robot interaction method.
The processor 610, the memory 620, the display unit 660 and the human-computer interaction unit may be connected by a bus or in other ways; connection by a bus is taken as an example in fig. 8.
The memory 620, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions/modules corresponding to the robot interaction method in the embodiment of the present invention (for example, the insertion module 51, the test module 53, the extraction module 55, the judgment module 57, and the anticipation module 59 shown in fig. 2). The processor 610 executes various functional applications of the server and data processing by executing nonvolatile software programs, instructions, and modules stored in the memory 620, that is, implements the robot interaction method in the above-described method embodiment.
The memory 620 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the robot electronic device, and the like. Further, the memory 620 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 620 optionally includes memory located remotely from the processor 610, which may be connected to the robotically interacting electronic device through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 620, and when executed by the one or more processors 610, perform the robot interaction method in any of the above-described method embodiments, for example, perform the above-described method steps one to five in fig. 3, perform the above-described method steps 202 to 208 in fig. 4, and implement the functions of the insertion module 51, the test module 53, the extraction module 55, the judgment module 57, the anticipation module 59, and the like in fig. 2.
The above product can execute the method provided by the embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method. For technical details not described in detail in this embodiment, reference may be made to the method provided by the embodiment of the present invention.
Embodiments of the present invention provide a non-transitory computer-readable storage medium storing computer-executable instructions that are executed by one or more processors, for example to perform method steps one to five in fig. 3 and method steps 202 to 208 in fig. 4 described above, so as to implement the functions of the insertion module 51, the test module 53, the extraction module 55, the judgment module 57, the anticipation module 59, and the like in fig. 2.
The above-described apparatus embodiments are merely illustrative; units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units, i.e. they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly also by hardware. Those skilled in the art will understand that all or part of the processes of the above method embodiments can be implemented by a computer program instructing related hardware; the program can be stored in a computer-readable storage medium and, when executed, can include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Within the idea of the invention, technical features of the above embodiments or of different embodiments may be combined, steps may be implemented in any order, and many other variations of the different aspects of the invention exist which are not provided in detail for the sake of brevity. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (16)

1. A robot interaction method, characterized by comprising the following user information collection steps:
acquiring the field information of the user, and calculating user interaction parameters according to the field information, wherein the field information comprises time, place, temperature, user voice information, user video information and communication conditions and environment parameters set by the user, and the interaction parameters indicate the suitability degree of human-computer interaction;
when the user interaction parameter meets the requirement, querying and determining an item to be supplemented from the user feature information set, determining related communication scene information from a communication scene library according to the item to be supplemented and the site information, and actively questioning the user through voice and/or images based on the communication scene information related to the item to be supplemented, wherein the communication scene information comprises a questioning scene and a questioning topic corresponding to the item to be supplemented;
and acquiring voice and/or image feedback information of the user, extracting related content associated with an item to be supplemented in the feedback information, and storing the related content to the user characteristic information set.
2. The method of claim 1, further comprising the step of:
receiving a request initiated by a user through voice and/or images;
and extracting associated content from the feature information set according to the request, and responding to the request of the user after prejudging the associated content.
3. The method of claim 2, wherein the method further comprises:
establishing a characteristic information classification relation table of the characteristic information set;
extracting matching keywords from the voice and/or image request;
and extracting the associated content with the closest classification relation from the feature information set which is improved continuously according to the matching keywords, and determining a response scene and a theme from the communication scene library according to the associated content to answer the request of the user.
4. The method according to any one of claims 1 to 3, wherein the items to be supplemented are user habits and preferences, the communication scenario is specifically a chat communication scenario, and the actively asking the user specifically comprises:
inserting questions about user habits and preferences in the chat session;
and acquiring feedback information of the user, extracting related content associated with user habits and preferences in the feedback information, and storing the related content to the characteristic information set.
5. The method according to any one of claims 1 to 3, wherein when the item to be supplemented is a psychological attribute, the method comprises acquiring psychological test questions from a cloud server and completing the psychological test questions through a display interface or by asking the user by voice to complete the psychological test, and further comprises:
sending the completed psychological test to the cloud server;
receiving an analysis result returned by the cloud server for the psychological test;
and saving the analysis result to the characteristic information set.
6. The method according to any one of claims 1 to 3, wherein when the item to be supplemented is associated-person information, the method comprises acquiring user voice information and video information, and further comprises:
extracting the associated persons appearing in the voice information, and storing the associated persons to the feature information set;
extracting associated persons appearing in the video information, and storing the associated persons to the feature information set;
judging the relevance of the associated people;
asking questions about related persons with the relevance exceeding a set threshold;
the step of questioning about an associated person comprises acquiring the site information of the user and generating a user interaction parameter; and when the interaction parameter is suitable, determining communication scene information from the communication scene library according to the locked associated person, and actively questioning the user in combination with voice and images to complete the feature information of the locked associated person.
7. The method of claim 6, wherein the step of determining the relevance of the associated person comprises:
counting the occurrence times of all identified associated persons;
and scanning the occurrence count of each associated person, comparing it with a set count threshold, and judging whether the feature information of the current associated person needs to be completed.
8. An interactive robot, comprising an audio acquisition module, an audio recognition module, an image acquisition module and an image recognition module, characterized by further comprising a question-answering module and a user information improvement module, wherein:
the question-answering module comprises an exchange scene library and a user characteristic information set;
the question-answering module is used for acquiring the field information of the user and calculating user interaction parameters according to the field information, wherein the field information comprises time, place, temperature, user voice information, user video information, communication conditions set by the user and environment parameters, and the interaction parameters indicate the suitability degree of human-computer interaction;
the information improvement module is used for repeatedly determining items to be supplemented and improving the feature information set, and is used for, when the user interaction parameter meets the requirement, querying and determining an item to be supplemented from the user feature information set, determining related communication scene information from the communication scene library according to the item to be supplemented and the site information, and actively questioning the user through voice and/or images based on the communication scene information related to the item to be supplemented, wherein the communication scene information comprises a questioning scene and a topic corresponding to the item to be supplemented;
and acquiring voice and/or image feedback information of the user, extracting related content associated with the item to be supplemented in the feedback information, and storing the related content to the user characteristic information set.
9. The interactive robot of claim 8, further comprising a response module, wherein the response module is configured to receive a request initiated by a user through voice and/or image, extract associated content from the continuously improved feature information set according to the request, and respond to the request of the user after prejudging the associated content.
10. The interactive robot of claim 9, wherein the response module is configured to establish a feature information classification relation table of the feature information set, extract matching keywords from the voice and/or image request, extract associated content with closest classification relation from the continuously refined feature information set according to the matching keywords, and determine communication scenario information from the communication scenario library according to the associated content to respond to the user request.
11. The interactive robot of any one of claims 8-10, wherein when the items to be supplemented are user habits and preferences, the information improvement module further comprises an insertion module; the information improvement module is configured to select a chatting scene and topic, and the insertion module is configured to insert questions about the user habits and preferences into the chatting session of the question-answering module, acquire feedback information of the user, extract related content in the feedback information associated with the user habits and preferences, and store the related content in the feature information set.
12. The interactive robot of any one of claims 8 to 10, wherein when the item to be supplemented is a psychological attribute, the information improvement module further comprises a test module, the test module is configured to obtain a psychological test question from a cloud server, complete the psychological test question through a display interface or ask the user with a voice to complete the psychological test, and the information improvement module is further configured to send the completed psychological test to the cloud server, receive an analysis result returned by the cloud server for the psychological test, and store the analysis result in the feature information set.
13. The interactive robot of any one of claims 8-10, wherein when the item to be supplemented is associated-person information, the robot further comprises an extraction module and a judgment module, wherein:
the extraction module is used for acquiring voice information and video information of a user, extracting associated persons appearing in the voice information, storing the associated persons to the feature information set, extracting associated persons appearing in the video information, and storing the associated persons to the feature information set;
the judging module is used for judging the relevance of the associated people;
when questioning about an associated person whose relevance exceeds the set threshold, the information improvement module is further configured to acquire the site information of the user and generate a user interaction parameter; and when the interaction parameter is suitable, determine communication scene information from the communication scene library according to the locked associated person, and actively question the user in combination with voice and images to complete the feature information of the locked associated person.
14. The interactive robot of claim 13, wherein the information improvement module further comprises an anticipation module configured to:
counting the occurrence times of all identified associated persons;
and scan the occurrence count of each associated person, compare it with a set count threshold, and judge whether the feature information of the current associated person needs to be completed.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively connected with the at least one processor, and a communication component, an audio data collector and a video data collector connected with the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and when executed by the at least one processor, the instructions invoke data of the audio data collector and the video data collector and establish a connection with a cloud server through the communication component, so that the at least one processor can perform the method of any of claims 1-7.
16. A non-transitory computer-readable storage medium having stored thereon computer-executable instructions for causing a computer to perform the method of any one of claims 1-7.
CN201780000646.1A 2017-03-02 2017-03-02 Robot interaction method and interaction robot Active CN107278302B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/075435 WO2018157349A1 (en) 2017-03-02 2017-03-02 Method for interacting with robot, and interactive robot

Publications (2)

Publication Number Publication Date
CN107278302A CN107278302A (en) 2017-10-20
CN107278302B true CN107278302B (en) 2020-08-07

Family

ID=60076556

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780000646.1A Active CN107278302B (en) 2017-03-02 2017-03-02 Robot interaction method and interaction robot

Country Status (2)

Country Link
CN (1) CN107278302B (en)
WO (1) WO2018157349A1 (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107885837A (en) * 2017-11-09 2018-04-06 北京光年无限科技有限公司 A kind of interaction output intent and intelligent robot for intelligent robot
CN108090170B (en) * 2017-12-14 2019-03-26 南京美桥信息科技有限公司 A kind of intelligence inquiry method for recognizing semantics and visible intelligent interrogation system
CN116541498A (en) * 2018-01-04 2023-08-04 微软技术许可有限责任公司 Providing emotion care in a conversation
CN110415688B (en) * 2018-04-26 2022-02-08 杭州萤石软件有限公司 Information interaction method and robot
CN108985205A (en) * 2018-07-04 2018-12-11 青岛海信移动通信技术股份有限公司 Face recognition demenstration method and device
JP7252327B2 (en) * 2018-10-10 2023-04-04 華為技術有限公司 Human-computer interaction methods and electronic devices
CN109597559A (en) * 2018-12-10 2019-04-09 联想(北京)有限公司 A kind of exchange method, device and electronic equipment
CN109605383B (en) * 2019-01-29 2021-05-28 达闼机器人有限公司 Information communication method, robot and storage medium
CN110265034A (en) * 2019-04-12 2019-09-20 国网浙江省电力有限公司衢州供电公司 A kind of power grid regulation auto-answer method
CN110097970A (en) * 2019-06-26 2019-08-06 北京康健数字化健康管理研究院 A kind of facial paralysis diagnostic system and its system method for building up based on deep learning
CN110297617B (en) * 2019-06-28 2021-05-14 北京蓦然认知科技有限公司 Method and device for initiating active conversation
CN110196931B (en) * 2019-06-28 2021-10-08 北京蓦然认知科技有限公司 Image description-based dialog generation method and device
CN110569806A (en) * 2019-09-11 2019-12-13 上海软中信息系统咨询有限公司 Man-machine interaction system
CN112527095A (en) * 2019-09-18 2021-03-19 奇酷互联网络科技(深圳)有限公司 Man-machine interaction method, electronic device and computer storage medium
CN111931036A (en) * 2020-05-21 2020-11-13 广州极天信息技术股份有限公司 Multi-mode fusion interaction system and method, intelligent robot and storage medium
CN111951795B (en) * 2020-08-10 2024-04-09 中移(杭州)信息技术有限公司 Voice interaction method, server, electronic device and storage medium
CN113689853A (en) * 2021-08-11 2021-11-23 北京小米移动软件有限公司 Voice interaction method and device, electronic equipment and storage medium
CN117215403A (en) * 2023-07-26 2023-12-12 北京小米机器人技术有限公司 Intelligent device control method and device, intelligent device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090001681A (en) * 2007-05-10 2009-01-09 주식회사 케이티 The modeling method of a contents/services scenario developing charts for the ubiquitous robotic companion
CN105068661A (en) * 2015-09-07 2015-11-18 百度在线网络技术(北京)有限公司 Man-machine interaction method and system based on artificial intelligence
CN105512228A (en) * 2015-11-30 2016-04-20 北京光年无限科技有限公司 Bidirectional question-answer data processing method and system based on intelligent robot
CN105931638A (en) * 2016-04-26 2016-09-07 北京光年无限科技有限公司 Intelligent-robot-oriented dialog system data processing method and device
CN106372195A (en) * 2016-08-31 2017-02-01 北京光年无限科技有限公司 Human-computer interaction method and device for intelligent robot

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101086750A (en) * 2006-06-09 2007-12-12 虞玲华 A liverish expert system based on instant message
CN101604204B (en) * 2009-07-09 2011-01-05 北京科技大学 Distributed cognitive technology for intelligent emotional robot
CN106326440B (en) * 2016-08-26 2019-11-29 北京光年无限科技有限公司 A kind of man-machine interaction method and device towards intelligent robot


Also Published As

Publication number Publication date
CN107278302A (en) 2017-10-20
WO2018157349A1 (en) 2018-09-07

Similar Documents

Publication Publication Date Title
CN107278302B (en) Robot interaction method and interaction robot
CN105975560B (en) Question searching method and device of intelligent equipment
US20170368683A1 (en) User portrait based skill package recommendation device and method
CN112073741A (en) Live broadcast information processing method and device, electronic equipment and storage medium
CN107623621B (en) Chat corpus collection method and device
JP2017153078A (en) Artificial intelligence learning method, artificial intelligence learning system, and answer relay method
CN105068661A (en) Man-machine interaction method and system based on artificial intelligence
KR20170102930A (en) Method, apparatus, storage medium and apparatus for processing Q & A information
WO2015043547A1 (en) A method, device and system for message response
CN116095266A (en) Simultaneous interpretation method and system, storage medium and electronic device
CN110866200A (en) Service interface rendering method and device
CN103634197B (en) Method and device for setting up a multi-user conference in an instant messaging tool
CN109376737A (en) A method and system for assisting users in solving learning problems
KR20180050636A (en) Message service providing method for message service linking search service and message server and user device for performing the method
CN109271503A (en) Intelligent question answering method, apparatus, device and storage medium
CN104394215A (en) Multi-user interactive learning method based on cloud network and system thereof
CN109857929A (en) A human-machine interaction method and device for an intelligent robot
US11294962B2 (en) Method for processing random interaction data, network server and intelligent dialog system
CN110781998A (en) Recommendation processing method and device based on artificial intelligence
CN107832342B (en) Robot chatting method and system
CN105099727B (en) Method and device for adding group members
CN111767386B (en) Dialogue processing method, device, electronic equipment and computer readable storage medium
CN107621874B (en) Content distribution method and system
CN110516153B (en) Intelligent video pushing method and device, storage medium and electronic device
CN114726818B (en) Network social method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210128

Address after: 200000 second floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Patentee after: Dalu Robot Co., Ltd.

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Patentee before: Shenzhen Qianhaida Yunyun Intelligent Technology Co., Ltd.

CP03 Change of name, title or address

Address after: 201111 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Patentee after: Dayu Robot Co., Ltd.

Address before: 200000 second floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Patentee before: Dalu Robot Co., Ltd.