WO2018033066A1 - Robot control method and companion robot - Google Patents

Robot control method and companion robot

Info

Publication number
WO2018033066A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
companion
interaction
digital person
robot
Prior art date
Application number
PCT/CN2017/097517
Other languages
English (en)
French (fr)
Inventor
杨思晓
廖衡
黄茂胜
魏建生
霍大伟
孙文华
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201710306154.7A (CN107784354B)
Application filed by 华为技术有限公司
Priority to EP17841038.7A (EP3493032A4)
Publication of WO2018033066A1
Priority to US16/276,576 (US11511436B2)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • the invention relates to the field of artificial intelligence, and more particularly to a robot control method and a robot in the field of artificial intelligence, in particular a companion robot.
  • Educated artificial intelligence includes intelligent systems with application capability, user-education capability, self-learning and reasoning ability, judgment ability, etc., which can help people complete a specific task or set of tasks more efficiently and better.
  • parents can use smart robots to accompany their children.
  • the existing intelligent robots can communicate with children and, based on that communication, learn and update the way they communicate with the child.
  • Embodiments of the present invention provide a robot control method, a robot, and a control information generation method and apparatus, which can control a robot to accompany a companion target in combination with the companion's characteristics.
  • an embodiment of the present invention provides a control method for a robot, which accompanies a companion target by simulating the companion, through acquiring information and processing data.
  • the control method includes: the robot collects interaction information of the companion target and acquires digital person information of the companion, where the interaction information is information produced when the companion target interacts with the robot and may include sound or action interaction information from the companion target to the robot, and the digital person information is a digitized set of information about the companion; the robot uses the interaction information and the digital person information to determine a manner of interacting with the companion target; according to the digital person information of the companion, a machine learning algorithm is used to score a plurality of interactive contents corresponding to the interaction mode and to select one or more contents from the scored interactive contents as the interactive content; and a response action to the companion target is generated according to the interaction manner and the interactive content.
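  • As an illustration only, the control flow described above can be sketched as follows; all function and object names here are hypothetical, and the machine-learning scoring step is reduced to a stub.

```python
# Illustrative sketch of the described control loop (names are hypothetical).
def control_step(robot, companion_target, companion_profile):
    # 1. Collect interaction information (sound/action) from the companion target.
    interaction_info = robot.collect_interaction_info(companion_target)

    # 2. Obtain the companion's digital person information (digitized information set).
    digital_person = companion_profile.load_digital_person_info()

    # 3. Determine the interaction mode from the interaction and digital person information.
    mode = robot.select_interaction_mode(interaction_info, digital_person)

    # 4. Score candidate interactive contents with a learned model and pick the best one.
    candidates = robot.candidate_contents(mode)
    scores = robot.score_contents(candidates, digital_person)  # machine-learning model
    content = max(zip(candidates, scores), key=lambda cs: cs[1])[0]

    # 5. Generate a response action (e.g. synthesized speech, a movement).
    robot.respond(mode, content)
```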
  • in this way, when the companion cannot personally accompany the companion target, the robot can be controlled to simulate the companion and accompany the target, satisfying the need for the companion's personal-style companionship with the companion target.
  • the companion target is the object the robot accompanies, which may be a child or an elderly person.
  • the companion is a person who accompanies the target in real life, such as the child's parents or guardians, or the caregiver of the elderly person.
  • the robot may generate a score of the plurality of interactive content corresponding to the interaction mode, and determine the interaction content according to the score.
  • the latest behavior information of the companion in a time period preceding the current time is obtained; this behavior information may be collected by a mobile device carried by the companion, or collected directly by the robot itself.
  • the robot may use a machine learning algorithm, based on the digital person information and the latest behavior information of the companion, to generate a plurality of interactive contents corresponding to the interaction mode, or to generate scores for the plurality of interactive contents corresponding to the interaction mode and then determine the interactive content and interaction manner according to the scores.
  • the robot may further obtain the latest behavior information of the companion in a time period preceding the current time; this behavior information may be collected by a mobile device carried by the companion, or acquired directly by the robot itself.
  • the robot uses the interaction information, the digital person information, and the latest behavior information to determine the manner of interacting with the companion target.
  • the robot may further acquire the latest behavior information of the companion in a time period preceding the current time, where the behavior information is collected by a mobile device carried by the companion;
  • the latest behavior information is analyzed to obtain digital person update information of the companion, and the digital person update information is used to improve or refresh the digital person information; the digital person information itself is determined by analyzing the behavior information of the companion or by manual input.
  • before obtaining the digital person information of the companion, the method may further include: superimposing the digital person update information, with an additional weight, onto the digital person information, so that the update information improves or refreshes the digital person information.
  • the additional weight value is adjustable to increase or decrease the impact of the companion's behavior information on the digital person information during a previous time period of the current time.
  • the robot may also superimpose the digital person update information on the digital person information by a machine learning algorithm.
  • the digital person information includes one or more of the following types of information: personal basic information, personal experience information, value information, educational concept information, and behavioral habit information.
  • the robot can calculate the semantic similarity between the digital person information and interaction information and each interaction mode, and select the interaction mode with the greatest semantic similarity as the manner of interacting with the companion target.
  • the generating, according to the digital person information of the companion, of the scores of the plurality of interactive contents corresponding to the interaction mode includes: using a trained model that takes the digital person information as input and outputs the scores of the plurality of interactive contents corresponding to the interaction mode.
  • the companion may include a plurality of companions;
  • the digital person information of the companion is then a weighted summation of the feature information of the plurality of companions;
  • the weights of the companions' feature information can be pre-set or manually entered.
  • the companion includes a plurality of companions, and the digital person information of the companion is obtained by machine learning of feature information of the plurality of companions.
  • the execution subject of the method is a robot accompanying the companion target, and the digital person information of the companion is collected by a mobile device carried by the companion.
  • an embodiment of the present invention provides a robot device that can be used as a companion robot; the device includes an information acquisition module, an interaction mode generation module, an interactive content generation module, and a response module; the information acquisition module is configured to collect the interaction information of the companion target and obtain the digital person information of the companion.
  • the interaction information includes sound or action interaction information from the companion target, and the digital person information is a digitized information set of the companion; the interaction mode generation module determines the interaction mode with the companion target according to the interaction information and the digital person information, and uses a machine learning algorithm to generate the interactive content corresponding to the interaction mode according to the digital person information of the companion; the response module generates a response action to the companion target according to the interaction mode and the interactive content.
  • the interaction mode generating module may be further configured to generate a score of the plurality of interactive content corresponding to the interaction mode, and determine the interaction content according to the score.
  • the information acquiring module is further configured to acquire the latest behavior information of the companion in a time period preceding the current time, where the behavior information of the companion is collected by a mobile device carried by the companion;
  • the interaction mode generating module is further configured to generate, by using a machine learning algorithm according to the digital person information and the latest behavior information of the companion, a plurality of interactive contents corresponding to the interaction mode, or to generate scores for the plurality of interactive contents corresponding to the interaction mode and then determine the interactive content and interaction mode according to the scores.
  • the information acquiring module is further configured to acquire the latest behavior information of the companion in a time period preceding the current time, where the behavior information of the companion is collected by a mobile device carried by the companion;
  • the interaction mode generating module is configured to determine the interaction mode with the companion target by using the interaction information, the digital person information, and the latest behavior information.
  • the information acquiring module is further configured to obtain the latest behavior information of the companion in a time period preceding the current time, where the behavior information is collected by a mobile device carried by the companion;
  • the digital person update module is configured to obtain the digital person update information of the companion by analyzing the latest behavior information and to improve or refresh the digital person information with it; the digital person information is determined by analyzing the behavior information of the companion or by manual input.
  • the information acquisition module can be placed on the robot body, for example, by using a sensor or a signal acquisition module to complete information acquisition.
  • the information acquisition module may also be a remote device of the robot or a stand-alone terminal device capable of communicating with the robot, such as a smart phone, a smart wearable device or the like.
  • the digital person update module is configured to superimpose the digital person update information, with an additional weight, onto the digital person information, so that the digital person information is refined or refreshed with the update information.
  • the additional weight value is adjustable to increase or decrease the impact of the behavior information of the companion on the digital person information in a previous time period of the current time.
  • the information acquiring module is further configured to superimpose the digital person update information to the digital person information by using a machine learning algorithm.
  • the digital person information includes one or more of the following types of information: personal basic information, personal experience information, value information, educational concept information, and behavioral habit information;
  • the interaction mode generation module is configured to calculate the semantic similarity between the digital person information, the interaction information and the interaction mode, and select the interaction mode with the greatest semantic similarity as the interaction mode with the companion target.
  • the interactive content generating module is configured to generate, by using a trained model, the scores of the plurality of interactive contents corresponding to the interaction mode, where the model takes the digital person information as input and outputs the scores of the plurality of interactive contents corresponding to the interaction mode.
  • the companion may include a plurality of companions;
  • the digital person information of the companion is then a weighted summation of the feature information of the plurality of companions;
  • the weights of the companions' feature information can be pre-set or manually input.
  • the companion includes a plurality of companions, and the digital person information of the companion is obtained by machine learning of feature information of the plurality of companions.
  • the executing entity of the device is a robot accompanying the companion object, and the digital person information of the companion is collected by the companion carrying the mobile device.
  • the robot provided by the embodiment of the invention can simulate the companion and accompany the companion target when the companion cannot personally accompany the target, satisfying the need for the companion's personal-style companionship with the companion target.
  • FIG. 1 is a schematic flow chart of a control method of a robot according to an embodiment of the present invention.
  • FIG. 2 is another schematic flowchart of a method for controlling a robot according to an embodiment of the present invention.
  • FIG. 3 is still another schematic flowchart of a method for controlling a robot according to an embodiment of the present invention.
  • Figure 4 is a diagram showing the relationship of various components of the system of the embodiment of the present invention.
  • FIG. 5 is a schematic architectural diagram of a robot control system in accordance with an embodiment of the present invention.
  • Fig. 6 is a structural diagram of a robot according to an embodiment of the present invention.
  • Fig. 7 is a structural diagram of a robot computer system according to an embodiment of the present invention.
  • Embodiments of the present invention provide a method for controlling a robot. As shown in FIG. 1, FIG. 1 provides a flowchart of an embodiment of the present invention. The method includes:
  • S101 Collect interactive information of the companion target and obtain digital person information of the companion.
  • the interactive information includes interactive information of the companion target to the sound or action of the robot, and the digital person information includes a digitized set of information of the companion.
  • the robot can capture behavior signals produced by the companion target, obtain the interaction information of the companion target from the captured signals, and understand what the companion target is doing and wants to do.
  • the digital person information of the companion is a digitized representation of the companion: data that enables the robot to imitate the companion.
  • the method may further generate, according to the digital person information of the companion, scores for a plurality of interactive contents corresponding to the interaction mode, and select one or more of the highest-scoring contents from the scored interactive contents.
  • Determining content by scoring is a specific implementation.
  • the control method provided by the embodiment of the invention can control the robot to simulate the companion and accompany the companion target when the companion cannot personally be present, satisfying the need for the companion's personal-style companionship with the companion target.
  • the interaction information may be generated by the robot in response to the interaction request, or generated actively, or preset in advance.
  • the interactive information may be actively generated by the robot's behavior analysis of the companion target, including video capture information of the companion, voice input information of the companion, and the like.
  • for example, the robot analyzes the behavior of the companion target via video capture, determines that the companion target wants to play soccer, and may actively generate interaction information for playing soccer with the companion target and play a soccer game with the target.
  • the acquisition of interaction information is relatively independent and can be applied in the various embodiments.
  • the robot can also interact directly with the companion target by observing the behavior of the companion target.
  • the interaction information may also be an interaction request of the received companion target.
  • for example, the robot may respond to the companion target's interaction request and listen to music together with the companion target.
  • the interaction information may also be an interaction request received from the companion.
  • for example, the companion may send, through a remote smart device, an interaction request asking the robot to accompany the child to sleep; the robot may respond to the companion's interaction request and accompany the companion target to sleep.
  • the interaction information may also be set by a program pre-installed by the companion; for example, the companion can set in the robot's pre-installed program that the child is to be given fruit at 10 o'clock every morning.
  • the digital person information of the companion includes one or more of the following types of information: personal basic information, personal experience information, value information, educational concept information, and behavioral habit information.
  • the personal basic information may include the name, gender, age, favorite color, favorite books, and the like of the companion's personal attributes
  • the personal experience information of the companion may include the companion's life experience, learning experience, and work experience; the value information may include the companion's religious beliefs, values, and the like.
  • the behavioral habit information may include the accompanying person's daily behaviors, personal habits, hobbies, and the like, which are not limited by the present invention.
  • the interaction information may come from various sources, for example an interaction request sent by the companion through a remote networked device, or it may be actively generated by the robot by analyzing the behavior data of the companion target; one implementation of acquiring the interaction information is as follows: an interaction request from the companion or the companion target is received, and the robot analyzes the request and determines the interaction information.
  • the accompanying person's behavior information can be collected by the mobile device that the companion carries with him, such as collecting the companion's voice input through the microphone, collecting the companion's video input through the camera, inputting through the mobile device's keyboard or touch screen.
  • the length of the preceding time period can be set; for example, it can be set to 2 hours, 12 hours, 24 hours, and the like.
  • the behavior information of the companion includes voice data of the companion, action data, or operation data for the application software.
  • the behavior information of the companion may be the voice data of the companion's voice calls, the companion's actions captured on video, or operation data of the companion operating software in a smart device; the embodiment of the present invention does not limit this.
  • the robot may obtain the digital person information of the companion from its own memory, where the digital person information in the memory may have been pre-stored by the data collecting device; in the absence of a network, the robot can still accompany the companion target using the locally stored digital person information of the companion, which is not limited by the embodiment of the present invention.
  • the robot can receive the digital person information of the companion sent by the data collecting device; the behavior information of the companion can be obtained through the data collecting device and analyzed to obtain the digital person information of the companion; the behavior information of the companion can be obtained through the cloud server, with the digital person information determined by the cloud server by analyzing the behavior information; the digital person information can be directly input by the companion; or it can be read from the robot's memory, where digital person information obtained from the data collection device was pre-stored. This is not limited in this embodiment of the present invention.
  • the communication features of the companion include multiple types of information, where the multiple types of information include at least two of the basic information, the speech information, and the behavioral habit information of the companion; the embodiment of the present invention is not limited thereto.
  • the smart device carried by the companion can actively obtain the companion's instant-messaging information; for example, in an instant messaging application the parents say to a friend, "It is very important to exercise; let our child exercise for an hour before reading."
  • the smart device carried by the companion can actively obtain, through the first device, information about how the companion handles articles, specifically including articles forwarded to or originally posted on a social network, comments made on articles read, and comments on social-network articles.
  • for example, the parents read a new article on children's education methods that mentions "3-5 years old is a crucial period for cultivating children's language", forward it to their WeChat circle of friends and comment "good views"; or the parents read an article about children's education methods on their electronic device and annotate it (with text or symbols).
  • the digital person information of the companion is determined by analyzing the behavior information of the companion, and includes multiple types of information, at least two of the companion's personal basic information, personal experience information, speech information, and behavioral habit information.
  • the personal basic information of the companion may include the companion's name, gender, age, favorite color, favorite books, and other information related to the companion's personal attributes, which is not limited by the present invention.
  • the companion's personal experience information may include the companion's life experience, learning experience, and work experience.
  • the accompanying person's speech information may include the accompanying person's religious beliefs, professional ideas, the views of the educator recognized by the companion, and the educational philosophy that the accompanying person values.
  • the behavioral habit information of the companion may include the companion's daily behaviors, personal habits, hobbies, etc.; for example, the mother likes to tell stories when putting the child to sleep, and the father likes to play football and prefers to shoot with his left foot; the embodiment of the present invention does not limit this.
  • after the data collection device acquires data in the preceding time period, the data can be stored in the storage device for the robot to read.
  • for example, the robot captures on video the companion target taking a storybook to the study; the robot generates interaction information about storytelling and determines an interaction manner of telling a story to the companion target; when telling the story to the companion target, the robot combines content from the companion's digital person information, such as the companion's tone of speech, the companion's personal experience, and the like.
  • for example, the robot learns from the companion's digital person information the habit of putting the child to sleep at 9:00 every night; the robot generates interaction information about going to sleep at nine o'clock in the evening and determines an interaction manner of accompanying the companion target to sleep; at the same time, the robot combines the companion's educational concept information when accompanying the target to sleep: if the companion thinks the child should listen to more fairy tales, the robot tells fairy tales while putting the target to sleep; the embodiment of the present invention does not limit this.
  • the robot stores a companion learning database, which contains various types of data such as stories, children's songs, movements, encyclopedias, etc.
  • the story category contains five stories, namely "Little Turtles Watch Grandpa", "Little Monkeys Pick Corn", "Kitten Breeding Fish", "Kong Rong Lets the Pear", and "Little Gecko Borrows the Tail"; other types of data are not listed.
  • the companion target of the robot is the child (Xiao Ming), and the companion is the child's parent Zhang San.
  • the robot obtains the digital information of Zhang San, and the companion Zhang San is a parent.
  • the digital person information of the companion is as follows:
  • the companionship goal is a 4-year-old child who can express his thoughts by speaking and understand the meaning of some basic actions.
  • the robot obtains interaction information that includes the words "Mom has to go to work; wait for Mom to come back to tell you a story, okay? Xiao Ming is the most obedient." The robot uses a speech recognition algorithm to convert this speech into text, and then uses natural language processing to identify the interaction information "storytelling".
  • the interaction information may be actively generated by the robot by analyzing the behavior of the companion target, may be interaction information sent by the companion, may be an interaction request received from the companion target, or may be interaction information set by a pre-installed program; this is not limited by the embodiment of the present invention.
  • when telling a story to the companion target, the robot searches the story database using one or more pieces of information from the companion's digital person information base as keywords, and tells the child the story that matches the keywords.
  • when accompanying the companion target, the robot searches the companion learning database using the companion's keywords; for example, the behavioral habit information of the companion indicates that the companion's hobby is running, so a running-related robot behavior model can be retrieved, and the robot is guided to accompany the companion target according to that model.
  • the digital person information is an information set G consisting of the values, educational concepts, and family index of the companion; G contains various information about the companion, such as {hometown, university, religion, age, interests, ...}; the contents of the information base include but are not limited to the above examples, and as the collected information grows, its dimensionality can be expanded to hundreds or even thousands.
  • the network or robot side maintains a larger story database, or companion learning database, that matches the digital person information.
  • the one or more information of the companion digital person information is used as a keyword to search the story database, and the story matching the keyword is searched for the child to explain.
  • the keyword of the companion is used to search in the companion learning database.
  • for example, the behavioral habit information indicates that the companion's hobby is running; when accompanying the companion target, a running-related robot behavior model can be retrieved, and the robot is guided to accompany the companion target according to this model.
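  • A minimal sketch of this keyword-driven lookup, assuming a simple data layout in which each entry of the information set G carries a weight and each story in the database carries tags; all names and values are illustrative.

```python
# Pick the highest-weight keywords from the companion's information set G
# and use them to search the story / companion learning database.
def pick_keywords(info_set, top_k=2):
    # info_set: {"hobby": ("running", 0.9), "hometown": ("Shenzhen", 0.3), ...}
    ranked = sorted(info_set.items(), key=lambda kv: kv[1][1], reverse=True)
    return [value for _, (value, _weight) in ranked[:top_k]]

def search_stories(story_db, keywords):
    # story_db: list of dicts such as {"title": ..., "tags": [...]}
    return [story for story in story_db if any(k in story["tags"] for k in keywords)]

stories = search_stories(
    [{"title": "Little Gecko Borrows the Tail", "tags": ["nature", "animals"]}],
    pick_keywords({"interest": ("nature", 0.9), "hometown": ("Shenzhen", 0.3)}),
)
```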
  • the robot may also pre-store the digital person information of the companion, or obtain the pre-stored digital person information of the companion from the cloud server, where the digital person information includes, but is not limited to, the companion's hometown, life experience, career, hobbies, values, and the like;
  • the robot may then interact with the companion target in conjunction with the pre-stored digital person information of the companion, which is not limited by the embodiment of the present invention.
  • the cloud server or robot side maintains a larger story database, or companion learning database, that matches the digital human knowledge base.
  • Figure 2 provides yet another flow diagram of an embodiment of a method of the present invention.
  • the method further includes:
  • the behavior information here can be the latest behavior information of the companion. By setting the time span of the period before the current moment, the frequency of obtaining the latest behavior of the companion can be adjusted.
  • the behavior information of the companion can be collected by a mobile device carried by the companion, so that even when the companion is not near the robot or the companion target, the robot can obtain the behavior information, better simulate the companion, and better understand the companion's way of thinking about companionship.
  • there is no restriction on the execution order of steps S101 and S102; step S102 may be performed before or after step S101.
  • S105 Using a machine learning algorithm to generate a score of the plurality of interactive content corresponding to the interaction mode according to the digital person information of the companion and the latest behavior information.
  • the interaction mode and the interactive content can be determined based on the scores, for example by selecting the highest-scoring or higher-scoring items.
  • in the foregoing S103, where the interaction information and the digital person information are used to determine the interaction with the companion target, the interaction information, the digital person information, and the latest behavior information may together be used to determine the manner of interacting with the companion target.
  • the following describes the process of modifying or updating the digital person information after the latest behavior information of the companion in the time period preceding the current time is obtained.
  • S1021 obtains digital person update information of the companion by analyzing the latest behavior information, and the digital person update information is used to improve or refresh the digital person information.
  • the digital person information can be determined by analyzing the behavior information of the companion or by manual input.
  • obtaining the digital person update information of the companion by analyzing the latest behavior information specifically includes: converting the behavior data into text information in various ways, for example, for voice input, using speech recognition and speech processing to convert the voice behavior data into text; and using natural language processing techniques, including but not limited to one or more of keyword recognition, topic extraction, and focus detection, to convert the text information into the latest behavior information.
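  • A rough sketch of this pipeline is shown below; the speech recognizer is a stand-in for any ASR engine, and the keyword extraction is deliberately simplified to token counting.

```python
# Turn raw behavior data into "latest behavior information" (simplified sketch).
import re
from collections import Counter

def speech_to_text(audio):
    raise NotImplementedError("plug in a speech recognition engine here")

def extract_behavior_info(text, stopwords=frozenset({"the", "a", "to", "and", "is"})):
    # Very rough keyword/topic extraction: frequency of non-stopword tokens.
    tokens = [t.lower() for t in re.findall(r"[A-Za-z\u4e00-\u9fff]+", text)]
    keywords = Counter(t for t in tokens if t not in stopwords)
    return keywords.most_common(5)  # e.g. [("exercise", 2), ("science", 1), ...]
```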
  • the method gives each piece of latest behavior information a certain weight, for example weights pre-set by the companion.
  • S1022: superimpose the digital person update information, with the additional weight, onto the digital person information, so that the update information improves or refreshes the digital person information.
  • S1: the robot obtains the digital person update information of the companion by analyzing the latest behavior data.
  • S2: the digital person information of the companion is updated according to the digital person update information. For example, if a weight w is set for the update at the current moment, the update is performed as follows:
  • where f is the value of the digital person information feature that needs to be updated, w is the weight, and f0 is the value of this feature in the companion's latest digital person update information.
  • the additional weight value w is adjustable to increase or decrease the influence of the companion's behavior information on the digital person information during a time period of the current time.
  • the digital person information f is more stable and contains more accumulated companion information, while the digital person update information f0 reflects the latest changes and contains less companion information; if the companionship should be influenced more by the companion's behavior information in the preceding period, and the influence of the accumulated information in f reduced, the weight value w can be increased.
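  • The exact update formula is not reproduced above; a common form consistent with the variable definitions (f, f0, w) is a weighted blend, sketched here as an assumption.

```python
# Assumed form of the weighted update: blend the stored feature value f with the
# latest value f0 using the adjustable weight w (larger w gives recent behavior
# more influence on the digital person information).
def update_feature(f, f0, w):
    return (1.0 - w) * f + w * f0

# Example: update_feature(f=0.8, f0=0.2, w=0.3) -> 0.62
```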
  • the weight of each type of information among the multiple types of information may be set when the companion pre-installs the program, may be sent to the robot through the companion's interaction request, or may be determined by the robot according to the companion's behavior information.
  • the digital person update information is superimposed to the digital person information by a machine learning algorithm.
  • the step of superimposing the digital person update information on the digital person information using a machine learning algorithm is as follows:
  • S1 reads the digital person information and the latest behavior information of the companion at the last moment
  • S3 compares the digital information of the current moment with the digital information of the previous moment, and obtains the characteristic dimension and the amount of change of all the information.
  • S4 repeats S1-S3 on the data of multiple time periods of multiple companions, and obtains the feature dimension and corresponding change amount of the digital person information changes of multiple companions in multiple time periods.
  • S5 takes the behavior information of the companion over a period of time as input and the changed feature dimensions with their corresponding change amounts as output, and uses LASSO regression as the model; after training, a model M is obtained; the latest behavior information is then given to M as input, and the changed feature dimensions and change amounts are obtained as output.
  • S7 modifies the digital person information according to the changed feature dimension and the corresponding change amount
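  • A sketch of these steps using scikit-learn's Lasso is given below; the feature layout (behavior information as a fixed-length vector, feature changes as targets) and all dimensions are assumptions for illustration.

```python
# LASSO-based update model: learn how behavior over a period maps to changes in
# the digital person feature dimensions, then apply the predicted changes.
import numpy as np
from sklearn.linear_model import Lasso

X = np.random.rand(200, 50)        # behavior information of many companions/periods
Y = np.random.rand(200, 10) - 0.5  # observed changes of 10 digital-person features

# One Lasso model per feature dimension (model M).
models = [Lasso(alpha=0.01).fit(X, Y[:, j]) for j in range(Y.shape[1])]

def predict_update(latest_behavior):
    # Predict the change for every digital-person feature dimension.
    x = np.asarray(latest_behavior).reshape(1, -1)
    return np.array([m.predict(x)[0] for m in models])

digital_person = np.zeros(10)
digital_person += predict_update(np.random.rand(50))  # S7: apply the changes
```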
  • the digital person information includes one or more of the following types of information: personal basic information, personal experience information, value information, educational concept information, and behavioral habit information; determining the interaction mode with the companion target by using the interaction information and the digital person information includes: calculating the semantic similarity between the digital person information and interaction information and each interaction mode, and selecting the interaction mode with the greatest semantic similarity as the manner of interacting with the companion target.
  • S103 can have multiple implementation manners, and a typical implementation is as follows:
  • the semantic similarity between the interaction mode and the interaction information and the digital person information is calculated, and the semantic similarity can be realized by a technique such as a word vector, which is not limited by the present invention.
  • the similarity between interaction information and interaction mode is determined.
  • the generating, according to the digital person information of the companion, of the scores of the plurality of interactive contents corresponding to the interaction mode includes: using a trained model to generate the scores of the plurality of interactive contents corresponding to the interaction mode, where the model takes the digital person information as input and outputs the scores of the plurality of interactive contents corresponding to the interaction mode.
  • the training data of the generated model may be derived from data acquired by public data or other data collection devices.
  • the robot uses the word vector method to calculate the semantic similarity between the digital human information and the interactive information.
  • the format is “digital human information, interactive information, semantic similarity”:
  • the robot selects the “storytelling” with the highest similarity as the way to interact with the companion target.
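  • The mode selection can be sketched as follows, assuming a word-vector table mapping phrases to numeric vectors is available; the vectors and mode names are illustrative.

```python
# Choose the interaction mode with the greatest semantic (cosine) similarity
# to the combined interaction information / digital person information query.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def choose_mode(query_vec, mode_vecs):
    # mode_vecs: {"storytelling": np.array(...), "play music": np.array(...), ...}
    return max(mode_vecs, key=lambda mode: cosine(query_vec, mode_vecs[mode]))

# query_vec could be, for example, the average of the word vectors of the
# interaction-information keywords and selected digital-person-information keywords.
```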
  • the model for generating interactive content can adopt various algorithms, such as logistic regression, KNN, support vector machine, etc.
  • the principle of the KNN algorithm is to find the K samples closest to the test sample, read their corresponding labels, and take the proportion of each label among those samples as the score of the test sample for that label.
  • here the test sample is Zhang San's digital person information and the labels are the interactive contents; for example, "Little Monkeys Pick Corn" receives a score of 0.20.
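  • The KNN scoring described above can be sketched as follows; the vectorization of the digital person information and the value K=5 are assumptions.

```python
# Among the K nearest training samples, the fraction carrying each content label
# becomes that content's score for the test sample.
from collections import Counter
import numpy as np

def knn_scores(test_vec, train_vecs, train_labels, k=5):
    dists = np.linalg.norm(np.asarray(train_vecs) - np.asarray(test_vec), axis=1)
    nearest = np.argsort(dists)[:k]
    counts = Counter(train_labels[i] for i in nearest)
    return {label: n / k for label, n in counts.items()}

# e.g. knn_scores(zhang_san_vec, samples, labels) might yield
# {"Little Gecko Borrows the Tail": 0.4, "Little Monkeys Pick Corn": 0.2, ...}
```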
  • according to the interaction mode "storytelling" and the interactive content "Little Gecko Borrows the Tail", the robot synthesizes the speech of "Little Gecko Borrows the Tail" as the response action by using a speech synthesis algorithm, and carries out the response action by playing it through the speaker.
  • the behavior feedback of “speaking with me” is also very good.
  • for example, Zhang San's latest remark is: "I think science education is very important; it is time for the child to understand the reasons behind some natural phenomena."
  • the latest behavior information is used to correct the scores of the plurality of interactive contents, and one or more of the highest-scoring items among the corrected scores are selected as the interactive content.
  • where s is the score of the interactive content, p_i is the i-th feature of the digital person information, w(p_i) is the weight of p_i, t_j is the j-th topic of the interactive content, s(t_j) is the score of t_j, sim(p_i, t_j) is the semantic similarity between the digital person feature and the topic, s_new is the corrected score of the interactive content, and a is the weight of the current behavioral intention; the value of a can be specified in advance by the companion or randomly generated by the robot.
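  • The exact correction formula is not reproduced above; one plausible reading of the variable list, offered purely as an assumption, is that the base score aggregates feature/topic similarity and the latest behavioral intention is blended in with weight a.

```python
# Assumed sketch of the score correction described by the variables above.
def base_score(features, topics, sim):
    # features: [(p_i, w_i)]; topics: [(t_j, s_j)]; sim(p, t) -> similarity in [0, 1]
    return sum(w * sim(p, t) * s for p, w in features for t, s in topics)

def corrected_score(s, behavior_topic_sim, a):
    # s: base score of the interactive content
    # behavior_topic_sim: similarity between the latest behavioral intention
    #                     and the content's topics
    # a: weight of the current behavioral intention
    return (1.0 - a) * s + a * behavior_topic_sim
```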
  • the companion's personal information set includes corresponding weights; during companionship, if a keyword search is performed for the companion target, one or more pieces of information with the largest weights may be selected for the search according to the weights; alternatively, different information in the set may be selected depending on the scene: for example, when accompanying the target in a game, the values and interests in the information set are selected as keywords while other information such as birthplace and age is ignored.
  • the companion may include a plurality of companions; the digital person information of the companion is then a weighted summation of the feature information of the plurality of companions, and the weights of the companions' feature information may be set in advance or manually input.
  • Multiple companion information fusion has multiple implementations.
  • a typical implementation sets a corresponding weight for each companion's information.
  • the weight can be manually input by the multiple companions or set by the robot.
  • the robot can set the weight of each companion's information to 1/N, where N is the number of digital persons; different weights can also be configured according to the importance of each companion (for example, if the companions include parents and grandparents and the parents' influence on the child is to be increased, the weight of the parents' information can be increased and the weight of the grandparents' information reduced).
  • S2 calculates the weighted digital person information according to the weight of the plurality of digital person information, and the formula is as follows:
  • f_k = Σ_i w_i · f_{k,i}, where f_k is the value of the k-th item of the weighted digital person information, w_i is the weight of the i-th digital person information, and f_{k,i} is the value of the k-th item of the i-th digital person's information.
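  • A small sketch of this weighted fusion, assuming the feature values are numeric and the weights are given per companion:

```python
# f_k = sum_i w_i * f_{k,i}: fuse several companions' digital person information.
def fuse_profiles(profiles, weights):
    # profiles: [{"likes_sport": 1.0, "patience": 0.9}, ...]; weights: [0.7, 0.3]
    keys = profiles[0].keys()
    total = sum(weights)  # normalize in case the weights do not sum to 1
    return {k: sum(w * p[k] for p, w in zip(profiles, weights)) / total for k in keys}

# fuse_profiles([{"likes_sport": 1.0}, {"likes_sport": 0.0}], [0.7, 0.3])
# -> {"likes_sport": 0.7}
```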
  • the collection of information of the plurality of companions may be combined to generate a collection of information of the plurality of objects.
  • the information collection of multiple objects may also include information and weights corresponding to the information; the information weights are related to the information weights of the respective objects before the merger, and are also related to the importance of each of the multiple objects.
  • for example, the weight of the language-related features in the mother's digital person information can be increased in the robot's program.
  • the robot can weight the mother's multiple types of information to obtain the mother's weighted digital person information, determine the interaction mode or interactive content for interacting with the child according to the interaction information and the mother's weighted digital person information, generate a response action, and give behavioral feedback to the child through the response action, thereby responding to the interaction information; this is not limited.
  • the weight of each piece of digital person information among the plurality of digital person information may be set when the companion pre-installs the program, may be sent to the robot through the companion's interaction request, or may be determined by the robot according to the companion's behavior information.
  • for example, the weight of the mother's digital person information may be set lower than the weight of the father's digital person information in the robot's program; the weight of the father's digital person information can even be set to 1 and the weight of the mother's to 0.
  • the robot can weight the father's digital person information and the mother's digital person information to obtain the companion's digital person information, determine, according to the interaction information and the companion's digital person information, at least one of the interaction mode and the interactive content for interacting with the child, generate a response action, and give behavioral feedback to the child through the response action to respond to the interaction information; this is not limited in this embodiment of the present invention.
  • the companion includes a plurality of companions whose digital person information is obtained by machine learning of feature information of the plurality of companions.
  • the steps of synthesizing the digital person information of multiple companions using a machine learning algorithm are as follows:
  • S1 reads digital person information of multiple companions
  • S2 calculates the similarity of any two companions according to the cosine similarity calculation formula.
  • S3 takes the digital person information of each companion as a vertex; if the similarity between two companions' digital person information is greater than a certain threshold, an edge is created between them, yielding the graph G of the companions' digital person information.
  • S4 uses PageRank algorithm for graph G to obtain the PageRank value of each vertex.
  • S5 obtains the digital person information of the companion as follows:
  • f = Σ_{i=1..N} w_i · f_i, where f is the fused value of a given item of the companion's digital person information, w_i is the PageRank value of the i-th companion, f_i is the i-th companion's value for that item, and N is the number of companions.
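  • A sketch of steps S1-S5 using networkx for the graph and PageRank steps; the numeric feature representation and the similarity threshold are assumptions.

```python
# PageRank-based fusion of several companions' digital person information.
import numpy as np
import networkx as nx

def fuse_by_pagerank(feature_matrix, threshold=0.8):
    # feature_matrix: one row per companion, one column per feature value.
    X = np.asarray(feature_matrix, dtype=float)
    norm = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    sim = norm @ norm.T                      # S2: cosine similarity of any two companions

    g = nx.Graph()
    g.add_nodes_from(range(len(X)))          # S3: companions as vertices
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            if sim[i, j] > threshold:
                g.add_edge(i, j)             # edge if similarity exceeds the threshold

    pagerank = nx.pagerank(g)                # S4: PageRank value of each vertex
    w = np.array([pagerank[i] for i in range(len(X))])
    return (w[:, None] * X).sum(axis=0) / w.sum()  # S5: weighted combination
```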
  • the execution subject of the method is a robot that accompanies the companion target, and the digital person information of the companion is acquired through the smart device carried by the companion.
  • FIG. 4 provides a relationship diagram of various components of the method of the present invention.
  • Figure 4 shows the relationship between the companion 410 and the companion target 420 involved in the execution process; the companion 410 and the companion target 420 interact through the smart device 430 and the robot 440.
  • the smart device 430 collects the companion's behavior data to obtain the latest behavior data and sends it to the cloud server 450; the cloud server computes and analyzes the behavior data to obtain the digital person information of the companion, and the digital person information is sent to the robot 440.
  • based on the interaction information it acquires and the companion's digital person information, the robot determines the interaction mode and interactive content with the companion target and carries out the interaction with the companion target.
  • in a possible implementation the cloud server can be omitted, and the robot obtains the digital person information directly by computing and analyzing the behavior data.
  • the robot can also directly obtain the behavior data of the companion according to the sensor component carried by the robot body.
  • FIG. 5 is a schematic structural diagram of a robot control system according to an embodiment of the present invention.
  • the control system includes at least one companion (companion 510 is shown), a companion target 520, and a smart device 530.
  • the companion 510 is a person who is expected to often accompany the companion target 520 and who can educate and influence the target, such as the target's guardian, teacher, or the like.
  • the smart device 530 is configured to obtain the behavior information of the companion 510, determine the latest behavior information of the companion by analyzing it, generate the digital person information from the latest behavior, and send it to the robot 540, so that the digital person information controls the interaction between the robot and the companion target.
  • the smart device may extract the digital information of the companion from the behavior information of the companion by means of a semantic analysis, a machine learning algorithm, or a keyword matching, which is not limited by the embodiment of the present invention.
  • the smart device 530 may be a remote device of the robot 540, a dedicated device specially paired with the robot, or a smart device installed with a program that cooperates with the robot, for example a mobile terminal, a wearable smart device, or a robot accompanying the companion.
  • the smart device 530 may be a data collection device that directly receives behavior information input by the companion via voice input, video input, or keyboard or touch-screen input, or the smart device 530 may obtain the behavior information of the companion from a data collection device that communicates with it; this is not limited by the embodiment of the present invention.
  • the robot 540 is configured to acquire the digital person information of the companion and the interaction information, determine at least one of the interaction mode and the interactive content for interacting with the companion target according to the interaction information and the communication feature information, generate a response action, and give behavioral feedback to the companion target through the response action, thereby responding to the interaction information.
  • the cloud server 550 is used to forward or analyze the information transmitted between the smart device 530 and the robot 540.
  • the robot can receive the digital person information of the companion sent by the data collecting device; the behavior information of the companion can be obtained through the data collecting device and analyzed to obtain the digital person information of the companion; the behavior information of the companion can be obtained through the cloud server, with the digital person information determined by the cloud server by analyzing the behavior information; the digital person information can be directly input by the companion; or it can be read from the robot's memory, where digital person information obtained from the data collection device was pre-stored. This is not limited in this embodiment of the present invention.
  • having the data collection device or the cloud server analyze the companion's behavior information to obtain the companion's digital person information can reduce the robot's computation load, increase its information processing speed, and improve its performance.
  • the data collection device may be a portable smart device that is carried by the companion.
  • the smart device may be a remote device of the robot, a dedicated device specially paired with the robot, or a smart device installed with a program that cooperates with the robot, for example a mobile phone, a wearable smart device, or a robot accompanying the companion; this is not limited by the embodiment of the present invention.
  • the data collecting device may obtain the behavior information of the companion by using the sensor, for example, the behavior information input by the companion in the manner of the voice input, the video input, or the touch screen input of the keyboard, which is not limited by the embodiment of the present invention.
  • FIG. 6 provides a structural diagram of an embodiment of the present invention.
  • the robot device includes an information acquiring module 601, an interaction mode generating module 603, and a response module 607; the information acquiring module 601 is configured to collect the interaction information of the companion target and obtain the digital person information of the companion.
  • the interaction information includes sound or action interaction information from the companion target to the robot, and the digital person information is a digitized set of information about the companion; the interaction mode generating module 603 is configured to determine, according to the interaction information and the digital person information, the manner of interacting with the companion target, and to use a machine learning algorithm to generate the interactive content corresponding to the interaction mode according to the companion's digital person information.
  • the machine learning algorithm may also be used to generate a score of the plurality of interactive content corresponding to the interaction mode, and select one or more content from the plurality of interactive content scores as the interactive content.
  • the response module 607 is configured to generate a response action to the companion target according to the interaction manner and the interactive content.
  • the information obtaining module 601 is further configured to obtain the latest behavior information of the companion in the last time period of the current time, and the behavior information of the companion is collected by the companion on the mobile device.
  • the interaction mode generating module 603 is configured to generate a score of the plurality of interactive content corresponding to the interaction mode by using a machine learning algorithm according to the digital person information of the companion and the latest behavior information. Further, the interaction mode generating module 603 is further configured to determine, by using the interaction information, the digital person information, and the latest behavior information, how to interact with the companion target.
  • the information obtaining module is further configured to obtain the latest behavior information of the companion in the last time period of the current time, and obtain the digital person update information of the companion by analyzing the latest behavior information.
  • the digital person update information is used to improve or refresh the digital person information, and the digital person information is determined by analyzing behavior information or artificial input manner of the companion.
  • the digital person update information, with an additional weight, is superimposed onto the digital person information, so that the update information improves or refreshes the digital person information.
  • the additional weight value is adjustable to increase or decrease the impact of the companion's behavior information on the digital person information during a time period of the current time.
  • the information obtaining module 601 is further configured to superimpose the digital person update information to the digital person information by using a machine learning algorithm.
  • the digital person information includes one or more of the following types of information: personal basic information, personal experience information, value information, educational concept information, behavioral habit information;
  • the module 603 is configured to calculate a semantic similarity between the digital person information, the interaction information, and the interaction mode, and select an interaction mode with the greatest semantic similarity as the interaction mode with the companion target.
  • the interaction mode generating module 603 is further configured to generate, using a trained model, the scores of the plurality of interactive contents corresponding to the interaction mode, where the model takes the digital person information as input and outputs the scores of the plurality of interactive contents corresponding to the interaction mode.
  • the companion may include a plurality of companions; in this case the digital person information of the companion is a weighted summation of the feature information of the plurality of companions, and the weights of the companion feature information can be preset or obtained by manual input.
  • the companion includes a plurality of companions whose digital person information is obtained by machine learning of feature information of the plurality of companions.
  • the execution body of the device is a robot accompanying the companion target, and the digital person information of the companion is collected by a mobile device carried by the companion.
  • the various modules of the robot shown in FIG. 6 described above can complete and execute the process steps of the various method embodiments, and have the functions required in the method embodiments.
  • FIG. 7 illustrates another robotic device 700 of an embodiment of the present invention that includes a processor 710, a transmitter 720, a receiver 730, a memory 740, and a bus system 750.
  • the robot should also have actuators, which can be mechanical devices such as a robotic arm or a tracked/wheeled moving mechanism, as well as a display, a microphone, a camera and other components used for interacting with the outside world; these can be collectively referred to as execution components.
  • the processor 710, the transmitter 720, the receiver 730 and the memory 740 are connected by the bus system 750; the memory 740 is configured to store instructions, and the processor 710 is configured to execute the instructions stored in the memory 740 so as to control the transmitter 720 to send signals or to control the receiver 730 to receive signals.
  • the transmitter 720 and the receiver 730 may be communication interfaces; specifically, the transmitter 720 may be an interface for receiving data or instructions and the receiver 730 may be an interface for sending data or instructions; the specific forms of the transmitter 720 and the receiver 730 are not further exemplified here.
  • the processor may be a central processing unit (CPU); the processor may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the various devices of the robot shown in FIG. 7 cooperate with each other under the control of the processor to complete and execute the flow steps of the various method embodiments, providing the functions required in the method embodiments.
  • the robot 700 can be used to perform various steps or processes corresponding to the data collection device in the above method embodiments.
  • the memory 740 can include read only memory and random access memory and provide instructions and data to the processor. A portion of the memory may also include a non-volatile random access memory.
  • the memory can also store information of the device type.
  • the processor 710 can be configured to execute instructions stored in a memory, and when the processor executes the instructions, various steps corresponding to the data collection device in the above method embodiments can be performed.
  • the processor may be a central processing unit (CPU); the processor may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the processor can carry or implement the information acquiring module 601 and the interaction mode generating module 603, and can control the response module 607.
  • the response module 607 may be an action execution component of the robot.
  • each step of the above method may be completed by an integrated logic circuit of hardware in a processor or an instruction in a form of software.
  • the steps of the method disclosed in the embodiments of the present invention may be directly implemented as a hardware processor, or may be performed by a combination of hardware and software modules in the processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in a memory, and the processor executes instructions in the memory, in combination with hardware to perform the steps of the above method. To avoid repetition, it will not be described in detail here.
  • the disclosed systems, devices, and methods may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of units is only a logical function division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, or an electrical, mechanical or other form of connection.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the embodiments of the present invention.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the part of the technical solution of the present invention that is essential, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
  • a number of instructions are included to cause a computer device (which may be a personal computer, server, or network device, etc.) to perform all or part of the steps of the methods described in various embodiments of the present invention.
  • the foregoing storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium that can store program code.

Abstract

A robot control method, the method comprising: collecting interaction information of a companion target and obtaining digital person information of a companion (101), where the interaction information includes voice or action interaction information from the companion target to the robot, and the digital person information includes a digitized set of companion information; determining, according to the interaction information and the digital person information, a manner of interaction with the companion target (103); generating, according to the digital person information of the companion and by using a machine learning algorithm, interactive content corresponding to the interaction manner (105); and generating, according to the interaction manner and the interactive content, a response action to the companion target (107). The robot control method, the robot, and the control information generation method and apparatus can control the robot to accompany the companion target in combination with the characteristics of the companion.

Description

机器人的控制方法及陪伴机器人 技术领域
本发明涉及人工智能领域,更具体地,涉及人工智能领域中机器人的控制方法和机器人,特别是一种陪伴式机器人。
背景技术
随着人工智能的不断发展,教育的人工智能(Educated Artificial Intelligence,Educated AI)包括具有应用限定、用户教育、自学习推理能力、判断能力等特点的智能系统,能够帮助人们更高效、更好地完成具体的任务或任务集。
在现代社会的家庭中,越来越多的父母不能时刻陪伴孩子,在不能够陪伴的孩子的时候,父母可以使用智能机器人对孩子进行陪伴,现有的智能机器人能够与孩子进行交流,并基于与孩子之间的交流,学习和更新与孩子的交流方式。
但是,现有的智能机器人不能满足未来父母对智能机器人陪伴孩子的更高陪伴要求。
发明内容
本发明实施例提供了一种机器人的控制方法和机器人、控制信息的生成方法和装置,能够控制机器人结合陪伴人的特征,对陪伴目标进行陪伴。
第一方面,本发明实施例提供了一种机器人的控制方法,由机器人通过对信息的采集和数据的处理完成对陪伴人的模仿实现对陪伴目标的陪伴。根据本发明实施例的第一方面,控制方法包括:机器人采集陪伴目标的互动信息,并获取陪伴人的数字人信息,所述互动信息是陪伴目标与机器人互动时发出的信息,可以包含所述陪伴目标对所述机器人的声音或动作互动信息,数字人信息包含数字化的各项陪伴人信息集合;使用所述互动信息和所述数字人信息,确定与所述陪伴目标的互动方式;根据所述陪伴人的数字人信息,采用机器学习算法从所述多个互动内容得分中选择一个或多个内容作为互动内容;根据所述互动方式和所述互动内容生成对所述陪伴目标的响应动作。
采用本发明提供的机器人的控制方法,能够控制机器人在陪伴人不能在陪伴目标身边时,模拟陪伴人对陪伴目标进行陪伴,能够满足陪伴人亲自对陪伴目标进行陪伴的需求。陪伴目标是机器人陪伴的对象,可能是儿童,也可以是年长的老人。陪伴人是陪伴目标现实的陪伴者,例如儿童的父母,监护人等,或者是年长老人的陪护人等。
在第一方面的一种实现方式中,机器人可以生成所述互动方式对应的多个互动内容的得分,根据得分确定互动内容。在第一方面的一种实现方式中,获取当前时刻上一时间段内所述陪伴人的最新行为信息,所述陪伴人的行为信息可以由所述陪伴人随身携带移动设备采集,还可以是机器人自身直接采集。机器人根据所述陪伴人的数字人信息和所述最新行为信息,采用机器学习算法,生成所述互动方式对应的多个互动内容,还可以根据所述陪伴人的数字人信息和所述最新行为信息,采用机器学习算法,生成所述互动方式对应的多个互动内容的得分,再根据得分确定互动内容和互动方式。
在第一方面的一种实现方式中,机器人还可以进一步获取当前时刻上一时间段内所述陪伴人的最新行为信息,所述陪伴人的行为信息也可以由所述陪伴人随身携带移动设备采集还可以是机器人自身直接采集。机器人使用所述互动信息、所述数字人信息、所 述最新行为信息,确定与所述陪伴目标的互动方式。
在第一方面的一种实现方式中,机器人还可以获取当前时刻上一时间段内所述陪伴人的最新行为信息,所述陪伴人的行为信息由所述陪伴人随身携带移动设备采集;通过分析所述最新行为信息,获得所述陪伴人的数字人更新信息,所述数字人更新信息用于完善或刷新所述数字人信息,所述数字人信息通过对所述陪伴人的行为信息分析或人为输入方式确定。
在第一方面的一种实现方式中所述获取陪伴人的数字人信息之前,所述方法还包括,使用附加权重的所述数字人更新信息,叠加到所述数字人信息上,以更新信息完善或刷新所述数字人信息。
在第一方面的一种实现方式中,所述附加权重数值可调整,以加大或减小当前时刻上一时间段内所述陪伴人的行为信息对所述数字人信息的影响。进一步,机器人还可以通过机器学习的算法将所述数字人更新信息,叠加到所述数字人信息。
在第一方面的一种实现方式中,所述数字人信息包括以下类型信息中的一项或多项:个人基本信息、个人经历信息、价值观信息、教育理念信息、行为习惯信息。机器人可以计算所述数字人信息、互动信息与所述互动方式的语义相似度,选择语义相似度最大的互动方式作为与所述陪伴目标的互动方式。
在第一方面的一种实现方式中,所述根据所述陪伴人的数字人信息,生成所述互动方式对应的多个互动内容的得分包括,使用训练生成的模型,生成所述互动方式对应的多个互动内容的得分,其中所述模型以所述的数字人信息为输入,以所述互动方式对应的多个互动内容上的得分大小作为输出。
在第一方面的一种实现方式中,所述陪伴人包括多个陪伴人,所述陪伴人的数字人信息是所述多个陪伴人的特征信息的加权求和,所述陪伴人特征信息的权重可预先设置或人工输入获得。
在第一方面的一种实现方式中,所述陪伴人包括多个陪伴人,所述陪伴人的数字人信息是通过对所述多个陪伴人的特征信息机器学习获得。
在第一方面的一种实现方式中,所述方法的执行主体是伴随于陪伴目标周围的机器人,所述陪伴人的数字人信息由所述陪伴人随身携带移动设备采集。
第二方面,本发明实施例提供了一种机器人设备,可以作为陪伴式机器人,设备包括:设备包括信息获取模块、互动方式生成模块、互动内容生成模块、响应模块;信息获取模块,用于采集陪伴目标的互动信息,并获取陪伴人的数字人信息,互动信息包含陪伴目标的声音或动作的交互信息,数字人信息包含数字化的陪伴人的信息集合;互动方式生成模块,用于根据互动信息和数字人信息,确定与陪伴目标的互动方式,根据陪伴人的数字人信息,采用机器学习算法,生成与互动方式对应的互动内容;响应模块,根据互动方式和互动内容生成对陪伴目标的响应动作。
在第二方面的一种可能的实现方式中,互动方式生成模块,还可以用于生成所述互动方式对应的多个互动内容的得分,根据得分确定互动内容。
在第二方面的一种可能的实现方式中,所述信息获取模块还用于获取当前时刻上一时间段内所述陪伴人的最新行为信息,所述陪伴人的行为信息由所述陪伴人随身携带移动设备采集;所述互动方式生成模块,还用于根据所述陪伴人的数字人信息和所述最新行为信息,采用机器学习算法,生成所述互动方式对应的多个互动内容,或者生成所述 互动方式对应的多个互动内容的得分再根据得分确定互动内容和互动方式。
在第二方面的一种可能的实现方式中,所述信息获取模块还用于获取当前时刻上一时间段内所述陪伴人的最新行为信息,所述陪伴人的行为信息由所述陪伴人随身携带移动设备采集;所述互动方式生成模块用于使用所述互动信息、所述数字人信息、所述最新行为信息,确定与所述陪伴目标的互动方式。
在第二方面的一种可能的实现方式中,信息获取模块还用于获取当前时刻上一时间段内所述陪伴人的最新行为信息,所述陪伴人的行为信息由所述陪伴人随身携带移动设备采集;所述数字人更新模块用于通过分析所述最新行为信息,获得所述陪伴人的数字人更新信息,完善或刷新所述数字人信息,所述数字人信息通过对所述陪伴人的行为信息分析或人为输入方式确定。
在实现中,信息获取模块可以置于机器人本体上,例如通过传感器或者信号采集模块完成信息获取。信息获取模块还可以是机器人的远程设备,或者是能够与机器人通信的独立终端设备,例如智能手机,智能穿戴设备等等。
在第二方面的一种可能的实现方式中,所述信息获取模块用于获取陪伴人的数字人信息之前,所述数字人更新模块用于,使用附加权重的所述数字人更新信息,叠加到所述数字人信息上,以更新信息完善或刷新所述数字人信息。
在第二方面的一种可能的实现方式中,所述附加权重数值可调整,以加大或减小当前时刻上一时间段内所述陪伴人的行为信息对所述数字人信息的影响。
在第二方面的一种可能的实现方式中,所述信息获取模块还用于通过机器学习的算法将所述数字人更新信息,叠加到所述数字人信息。
在第二方面的一种可能的实现方式中,所述数字人信息包括以下类型信息中的一项或多项:个人基本信息、个人经历信息、价值观信息、教育理念信息、行为习惯信息;所述互动方式生成模块用于计算所述数字人信息、互动信息与所述互动方式的语义相似度,选择语义相似度最大的互动方式作为与所述陪伴目标的互动方式。
在第二方面的一种可能的实现方式中,所述互动内容生成模块用于使用训练生成的模型,生成所述互动方式对应的多个互动内容的得分,其中所述模型以所述的数字人信息为输入,以所述互动方式对应的多个互动内容上的得分大小作为输出。
在第二方面的一种可能的实现方式中,所述陪伴人包括多个陪伴人,所述陪伴人的数字人信息是所述多个陪伴人的特征信息的加权求和,所述陪伴人特征信息的权重可预先设置或人工输入获得。
在第二方面的一种可能的实现方式中,所述陪伴人包括多个陪伴人,所述陪伴人的数字人信息是通过对所述多个陪伴人的特征信息机器学习获得。
在第二方面的一种可能的实现方式中,所述设备的执行主体是伴随于陪伴目标周围的机器人,所述陪伴人的数字人信息由所述陪伴人随身携带移动设备采集。
本发明实施例所提供的机器人,能够控制机器人在陪伴人不能在陪伴目标身边时,模拟陪伴人对陪伴目标进行陪伴,能够满足陪伴人亲自对陪伴目标进行陪伴的需求。
附图说明
为了更清楚地说明本发明实施例的技术方案,下面将对本发明实施例中所需要使用的附图作简单地介绍。
图1是本发明实施例的机器人的控制方法的示意性流程图。
图2是本发明实施例的机器人的控制方法的另一示意性流程图。
图3是本发明实施例的机器人的控制方法的又一示意性流程图。
图4是本发明实施例系统各组成部分的关系图。
图5是本发明实施例的机器人控制系统的示意性架构图。
图6是本发明实施例的机器人的结构图。
图7是本发明实施例的机器人计算机系统的结构图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行描述。
本发明实施例提供了一种机器人的控制方法,如图1所示,图1提供了本发明一个实施例的流程图。所述方法包括:
S101、采集陪伴目标的互动信息,获取陪伴人的数字人信息。互动信息包含所述陪伴目标对机器人的声音或动作的交互信息,数字人信息包含数字化的所述陪伴人的信息集合。
机器人通过传感器、麦克风等采集器件可以获取陪伴目标发出的行为信号,通过捕获的行为信号获得陪伴目标的互动信息,能够了解陪伴目标在做什么想要做什么。陪伴人的数字人信息是数字化的陪伴人,是能够让机器人模仿陪伴人的数据信息。
S103、根据所述互动信息和所述数字人信息,确定与所述陪伴目标的互动方式。
S105、根据所述陪伴人的数字人信息,采用机器学习算法,生成与所述互动方式对应的互动内容。
还可以是根据所述陪伴人的数字人信息,采用机器学习算法,生成所述互动方式对应的多个互动内容的得分,从所述多个互动内容得分中选择一个或多个得分最高的内容作为互动内容。通过得分的方式确定内容是一种具体的实现方式。
S107、根据所述互动方式和所述互动内容生成对所述陪伴目标的响应动作。
采用本发明实施例提供的机器人的控制方法,能够控制机器人在陪伴人不能在陪伴目标身边时,模拟陪伴人对陪伴目标进行陪伴,能够满足陪伴人亲自对陪伴目标进行陪伴的需求。
具体而言,该互动信息可以为该机器人响应互动请求生成的、或者主动生成的,或者提前预设的。作为一种具体的实现,该互动信息可以由机器人对陪伴目标的行为分析主动生成,包括对陪伴人的视频抓取信息、陪伴人的语音输入信息等。例如,该机器人分析通过视频拍摄陪伴目标的行为动作,确定该陪伴目标想要踢足球,则该机器人可能主动生成与陪伴目标一起踢足球的互动信息,并执行与该陪伴目标一起踢足球的行为。互动信息可以作为相对独立获取信息的部分在各个实施例中应用。该机器人也可以通过对该陪伴目标的行为的观察,直接与该陪伴目标进行互动。
可选地,该互动信息还可以是接收到的陪伴目标的互动请求,例如,该机器人接收到陪伴目标要听音乐的互动请求,则机器人可以响应该陪伴目标的互动请求,与该陪伴目标一起听音乐。
可选地,该互动信息还可以是接收到陪伴人的互动请求,例如,该机器人可以通过陪伴人通过远程智能设备发送的互动请求,请求机器人陪孩子睡觉,则该机器人可以响 应该陪伴人的互动请求,陪陪伴目标睡觉。
可选地,该互动信息还可以是陪伴人预装程序设定的互动信息,例如陪伴人可以在机器人预装程序中设定,每天早上10点给孩子吃水果。
所述陪伴人的数字人信息包括以下类型信息中的一种或多种:个人基本信息、个人经历信息、价值观信息、教育理念信息、行为习惯信息。其中个人基本信息可以包括陪伴人的姓名、性别、年龄、喜欢的颜色、喜欢的书籍等与该陪伴人的个人属性相关的信息,陪伴人的个人经历信息可以包括该陪伴人的生活经历、学习经历和工作经历。陪伴人的价值观信息可以包括该陪伴人的宗教信仰,价值理念等,行为习惯信息可以包括陪伴人日常的行为方式、个人习惯,兴趣爱好等,本发明对此不作限定。
所述互动信息可以有多种来源,例如可以是陪伴人通过远程联网设备发送的互动请求,也可以是机器人通过分析陪伴目标的行为数据主动生成的,一种获取互动信息的实施例如下:机器人接收陪伴人或陪伴目标的互动请求,机器人分析互动请求,确定互动信息。
陪伴人的行为信息可以通过陪伴人随身携带的移动设备采集,如通过麦克风采集陪伴人的语音输入、通过摄像头采集陪伴人的视频输入、通过移动设备的键盘或触摸屏输入的一种或几种方式实现。上一时间段可设置时间段大小,例如时间段可以设置为2个小时、12小时、24小时等。可选地,该陪伴人的行为信息包括陪伴人的语音数据、动作数据或对应用软件的操作数据。例如,陪伴人的行为信息可以为该陪伴人进行语音通话的语音数据、视频拍摄的该陪伴人的行为动作、或者该陪伴人对该智能设备中的软件进行操作的操作数据,本发明实施例对此不作限定。
可选地,该机器人可以从自身的存储器中获取该陪伴人的数字人信息,其中,该存储器中的数字人信息可以为该数据收集设备提前预存的,在没有网络的情况下,该机器人仍能通过本地的陪伴人的数字人信息对陪伴目标进行陪伴,本发明实施例对此不作限定。
可选地,该机器人可以接收数据收集设备发送的陪伴人的数字人信息;可以通过数据收集设备获取陪伴人的行为信息,并对该陪伴人的行为信息进行分析得到该陪伴人的数字人信息;可以通过云服务器获取陪伴人的行为信息,该数字人信息是该云服务器通过对该陪伴人的行为信息分析确定;可以为陪伴人直接输入该数字人信息;还可以为从该机器人的存储器中获得的数据收集设备提前预存的数字人信息,本发明实施例对此不作限定。
可选地,该陪伴人的交流特征包括多种类型的信息,该多种类型的信息包括该陪伴人的基本信息、言论信息和行为习惯信息中的至少两种,本发明实施例对此不作限定。
作为一个可选实施例,陪伴人随身携带的智能设备可以主动获取陪伴人的即时通信信息。比如,父母在即时通信应用中,和朋友说了一句话“锻炼身体很重要,我们家孩子看书一个小时就让他锻炼一下”。
作为另一个可选实施例,陪伴人随身携带的智能设备可以主动获取陪伴人通过第一设备处理陪伴类文章的信息,具体包括转发、或原创社交网络的文章信息、阅读文章的批注信息,对社交网络文章的评论信息。例如:父母阅读了一篇新的关于儿童教育方法的文章,文章中提到“3-5岁是培养儿童语言的关键时期”,转发到微信朋友圈,并发表评论“观点不错”,或者基于父母在其电子设备中,阅读了某篇关于儿童教育方法的文 章,并在文章中进行了批注(文字或符号)。
应理解,陪伴人的数字人信息是通过对该陪伴人的行为信息分析确定的,该数字人信息包括多种类型的信息,该多种类型的信息包括该陪伴人的个人基本信息、个人经历信息、言论信息和行为习惯信息中的至少两种。
可选地,陪伴人的个人基本信息可以包括该陪伴人的姓名、性别、年龄、喜欢的颜色、喜欢的书籍等与该陪伴人的个人属性相关的信息,本发明对此不作限定。
可选地,陪伴人的个人经历信息可以包括该陪伴人的生活经历、学习经历和工作经历。例如,妈妈在法国出生,爸爸美国留过学,妈妈在A公司就职等,本发明对此不作限定。陪伴人的言论信息可以包括陪伴人的宗教信仰、职业理念、该陪伴人认同的教育学家的观点和陪伴人重视的教育理念。例如,妈妈信奉基督教,爸爸为非盈利组织的董事经常做慈善,教育学家的观点:语言学习的关键期理论,故事复述的能力小时候最重要;妈妈的教育观念:把单词背会很重要,小的时候记很多单词最牛;爸爸的教育观念:小的时候多知道一些天文地理的知识最酷,本发明对此不作限定。陪伴人的行为习惯信息可以包括陪伴人日常的行为方式、个人习惯,兴趣爱好等,例如,妈妈喜欢在陪伴孩子睡觉时给孩子讲故事,爸爸喜欢踢足球,并且喜欢用左脚射门,本发明实施例对此不作限定。
上一个时间段数据收集设备获取的数据后,可存储于存储设备中,以便机器人读取。
作为一个可选实施例,机器人通过视频抓拍到陪伴目标去书房拿了故事书,则该机器人生成讲故事的互动信息,并确定给该陪伴目标讲故事的互动方式,同时该机器人在给该陪伴目标讲故事时会结合该陪伴人的数字人信息中的内容,例如加入该陪伴人说话的语气,该陪伴人的亲身经历等。
作为另一个可选实施例,该机器人根据该陪伴人的数字人信息得到该陪伴人习惯每天晚上9点睡觉,则该机器人生成晚上九点睡觉的互动信息,并确定哄陪伴对象睡觉的互动方式,同时,该机器人在哄陪伴对象睡觉时会结合陪伴人的教育观念信息,假设陪伴人认为小孩子应该多听些童话,则该机器人在哄陪伴对象睡觉时会讲童话故事,本发明实施例对此不作限定。
通过举例说明一个更为具体的实例。机器人中存储有陪伴学习数据库,包含故事、儿歌、动作、百科等多种类型的数据,故事中包含5个故事,分别是“小乌龟看爷爷”,“小猴子摘玉米”,“小猫种鱼”,“孔融让梨”,“小壁虎借尾巴”。其他类型的数据不一一列举。
在一个实际的应用场景中,机器人的陪伴对象是儿童明明,陪伴人是明明的家长张三。
(1)机器人获取张三的数字人信息,陪伴人张三是一名家长,陪伴人的数字人信息如下:
年龄:30岁,
性别:女
学历:本科,
专业:金融
工作经历:创业者,曾是某金融公司职员,重视技术和科学
兴趣爱好:阅读、看电影、逛街
最喜欢的颜色:红色
世界观:积极乐观
教育理念:科普、唱儿歌、讲故事
张三在手机上给一篇名为《研究表明科普教育对4岁儿童脑发育十分有利》的文章点赞并转发,并发表评论“我认为科普教育很重要,是时候让孩子了解一些自然现象背后的原因了”,然后分享了一篇名为《让孩子尽情地唱歌吧》的文章并批注“陪孩子唱儿歌是很重要的沟通方式,虽然我不太会唱歌,但也要陪孩子一起唱”,于是生成了数字人信息中教育理念的部分信息。权重可以人工设定,例如讲故事和科普可以设为1.0,唱儿歌不是陪伴人擅长的,可以设为0.5。
陪伴目标是一名4岁儿童明明,能通过说话表达想法,理解一些基本动作的含义
(2)明明说话:“和我说说话吧”,机器人的数据收集设备获取了这个互动请求,通过语音识别、自然语言处理等过程,机器人识别了明明“陪我说说话”的互动信息。
(3)机器人获取互动信息,包括给明明说“妈妈要去上班了,等妈妈回来了给你讲故事好不好”,“明明最听话了”这两句语音以及机器人通过语音识别算法将“妈妈要去上班了,等妈妈回来了给你讲故事好不好”,“明明最听话了”这两句话识别为文本,然后利用自然语言处理方法识别了“讲故事”这个互动信息。
可选地,互动信息可以由机器人对陪伴人的行为信息进行分析主动生成,可以是接收陪伴人主动发出的互动信息,可以是接收到的被陪伴人的互动请求,还可以由机器人对陪伴目标的行为进行分析主动生成,还可以是预装程序设定的互动信息,本发明实施例对此不作限定。
作为一个可选实施例,机器人在给陪伴目标讲故事时,使用第一陪伴人的数字人信息知识库的一个或多个信息作为关键词进行搜索故事数据库,将搜索到的与关键词匹配的故事给孩子讲解。
作为另一个可选实施例,机器人在对陪伴目标进行陪伴时,使用第一陪伴人的关键词在陪伴学习数据库中进行检索,比如搜索到该第一陪伴人的行为习惯信息中该第一陪伴人的爱好是跑步,则机器人在陪伴时,可以搜集到和跑步相关的机器人行为模型,指引机器人按照此模型对陪伴目标进行陪伴。
可选的,使用陪伴人的亲情指数信息、价值观和教育理念信息,生成陪伴人的数字人信息;数字人信息是一个由陪伴人的价值观、教育理念、亲情指数组成的信息集合G,G中包含陪伴人的各种信息,如{籍贯、大学、宗教、年龄、兴趣…},信息库的内容包括但不限于上述举例,随着搜集信息的增加,维度可以扩大到百量级、甚至千量级。
网络或机器人侧维护一个和数字人信息匹配的更大的故事数据库、或陪伴学习数据库。比如机器人在给陪伴对象讲故事时,使用陪伴人数字人信息的一个或多个信息作为关键词进行搜索故事数据库,搜索到与关键词匹配的故事给孩子讲解。或者,在对陪伴对象进行陪伴时,使用陪伴人的关键词在陪伴学习数据库中进行检索,比如兴趣爱好信息中陪伴人的爱好是跑步,在陪伴陪伴对象时,可以搜集到和跑步相关的机器人行为模型,指引机器人按照此模型对陪伴对象进行陪伴。
可选地,该机器人还可以预存陪伴人的数字人信息,或者该机器人可以从云服务器获取该预存的该陪伴人的数字人信息,其中,该数字人信息包括但不限于陪伴人的家乡、陪伴人的人生经历、陪伴人的职业、陪伴人的兴趣爱好、陪伴人的价值观、或陪伴人的 宗教信仰信息等的一种或多种,该机器人还可以结合该预存的陪伴人的数字人信息与陪伴目标进行互动,本发明实施例对此不作限定。
可选地,云服务器或机器人侧维护一个和数字人信息知识库匹配的更大的故事数据库、或陪伴学习数据库。
在本方法的一个实施例中,图2提供了本发明一个方法的实施例又一流程图。所述方法还包括:
S102、获取当前时刻上一时间段内所述陪伴人的行为信息。
这里的行为信息可以是陪伴人的最新的行为信息,通过设置当前时刻前一段时间的时间跨度,能够调整获取陪伴人最新行为的频度。陪伴人的行为信息可以由所述陪伴人随身携带移动设备采集,即使陪伴人这段时间不在机器人或者陪伴目标身边依然可以让机器人获得行为信息,能够让机器人更好的模拟陪伴对象,或者让机器人更好的理解陪伴人对陪伴对象陪伴的方式或者思想。所述步骤S102和S101不存在执行顺序的限制,步骤S102可以在步骤S101之前或者之后。
S105根据所述陪伴人的数字人信息和所述最新行为信息,采用机器学习算法,生成所述互动方式对应的多个互动内容的得分。可以根据得分的情况,例如选择最高或者较高的得分来确定互动方式和互动内容。
在本方法的又一个实施例中,上述S103使用所述互动信息和所述数字人信息,确定与所述陪伴目标的互动方式可以更为具体的使用所述互动信息、所述数字人信息、所述最新行为信息,确定与所述陪伴目标的互动方式。
在本方法的一个实施例中,对修改机器人获取或者存储的数字人信息,参阅图3,获取当前时刻上一时间段内所述陪伴人的最新行为信息之后,修改或者更新数字人信息的流程包括:
S1021通过分析所述最新行为信息,获得所述陪伴人的数字人更新信息,所述数字人更新信息用于完善或刷新所述数字人信息。
数字人信息可以通过对所述陪伴人的行为信息分析或人为输入方式确定。通过分析所述最新行为信息,获得所述陪伴人的数字人更新信息具体包括:利用多种方式将所述行为数据转换为文本信息,例如对于语音输入,通过语音识别和文本处理将语音行为数据转换为文本;利用多种自然语言处理技术将上述文本信息转换为最新行为信息,自然语言处理技术包括但不限于关键词识别、主题抽取、焦点检测等技术中的一种或多种;以一定方式给每一项最新行为信息以一定的权重,例如陪伴人预先设定权重等。
S1022使用附加权重的所述数字人更新信息,叠加到所述数字人信息上,以更新信息完善或刷新所述数字人信息。
具体的,使用附加权重的所述数字人更新信息,叠加到所述数字人信息上包括以下步骤实现:
S1机器人通过分析所述最新行为数据,获得陪伴人的数字人更新信息。
S2根据更新数字人信息按一定方式更新陪伴人的数字人信息。例如为当前时刻的数字人信息设置一定权重w,则更新方式如下:
f←f+w×f0
其中f为需要更新的数字人信息的特征上的取值,w为权重,f0为陪伴人在最新数 字人信息的该特征上的取值。
在本方法的一个实施例中,所述附加权重数值w可调整,以加大或减小当前时刻上一时间段内所述陪伴人的行为信息对所述数字人信息的影响。在具体的实施方式中,数字人信息f更加稳定,包含的陪伴人信息更多,f0数字人更新信息体现了最新的数字人信息的变化、包含的陪伴人信息更少,若希望陪伴孩子的方式更多受到上一时间段内所述陪伴人的行为信息的影响,以及减少f中较多陪伴人信息的影响,则可加大w权重值。
可选地,该多种类型的信息中每种类型的信息的权重可以为陪伴人预装程序时设定的,可以为通过陪伴人的互动请求发送给机器人的,或者可以为机器人根据陪伴人的某些设置自己学习决定的,本发明实施例对此不作限定。
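A minimal Python sketch of the additive update rule f ← f + w×f0 described above follows. It only illustrates the formula under assumed feature names and weight values; it is not the patented implementation.

```python
# Minimal illustration of the update rule f <- f + w * f0: f is the stored
# digital person information, f0 the update derived from the companion's most
# recent behavior, and w the adjustable additional weight. Feature names and
# values are hypothetical.

def update_digital_person(f: dict, f0: dict, w: float) -> dict:
    updated = dict(f)
    for feature, value in f0.items():
        updated[feature] = updated.get(feature, 0.0) + w * value
    return updated

f = {"storytelling": 1.0, "science_education": 1.0, "singing": 0.5}
f0 = {"singing": 1.0, "outdoor_sports": 0.8}        # from the latest behavior
# A larger w lets the most recent behavior dominate; a smaller w keeps the
# long-term digital person information more stable.
print(update_digital_person(f, f0, w=0.5))
```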
在本方法的一个实施例中,通过机器学习的算法将所述数字人更新信息,叠加到所述数字人信息。具体的,使用机器学习的算法将所述数字人更新信息,叠加到所述数字人信息的步骤如下:
S1读取陪伴人在上一时刻的数字人信息与最新行为信息;
S2获取陪伴人在当前时刻的数字人信息
S3对比当前时刻数字人信息与上一时刻数字人信息,获得所有信息发生变化的特征维度以及改变量
S4在多名陪伴人的多个时间段的数据上重复S1-S3,获得多名陪伴人在多个时间段内数字人信息发生变化的特征维度以及对应的改变量
S5将陪伴人在一个时间段内的行为信息作为输入,发生变化的特征维度以及对应的改变量作为输出,以LASSO回归作为模型,训练后得到模型M,M以行为信息作为输入,以发生变化的特征维度以及改变量作为输出
S6对所述数字人信息和所述最新行为信息,采用模型M,得到在所述数字人信息上发生变化的特征维度以及对应的改变量,
S7根据发生变化的特征维度以及对应的改变量修改所述数字人信息
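Steps S1–S7 above amount to training a LASSO regression that maps a companion's recent behavior information to the change in each feature of the digital person information. The sketch below illustrates this with scikit-learn; the behavior vectorization, feature counts and synthetic training data are assumptions made only for the example.

```python
# Sketch of steps S1-S7: learn how recent behavior changes the digital person
# information with LASSO regression (scikit-learn).
import numpy as np
from sklearn.linear_model import Lasso

# S1-S4: training pairs collected over many companions and time periods.
# X[i] = vectorized behavior information of one companion in one period
# Y[i] = observed change of each digital-person feature in that period
rng = np.random.default_rng(0)
X = rng.random((200, 20))                      # 200 periods, 20 behavior features
true_w = np.zeros((20, 5))
true_w[:3, :] = 0.5
Y = X @ true_w + 0.01 * rng.standard_normal((200, 5))   # 5 digital-person features

# S5: train the model M (behavior information in, feature changes out).
model = Lasso(alpha=0.01).fit(X, Y)

# S6: predict the per-feature change for the latest behavior information.
latest_behavior = rng.random((1, 20))
predicted_delta = model.predict(latest_behavior)[0]

# S7: apply the predicted changes to the stored digital person information.
digital_person = np.ones(5)
digital_person = digital_person + predicted_delta
print(predicted_delta, digital_person)
```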
在本方法的一个实施例中,所述数字人信息包括以下类型信息中的一项或多项:个人基本信息、个人经历信息、价值观信息、教育理念信息、行为习惯信息;S103所述使用所述互动信息和所述数字人信息,确定与所述陪伴目标的互动方式包括:计算所述数字人信息、互动信息与所述互动方式的语义相似度,选择语义相似度最大的互动方式作为与所述陪伴目标的互动方式。
具体的,S103可以有多种实现方式,一种典型实施例如下:
(1)对于互动信息、数字人信息对应的多个互动方式,计算互动方式与互动信息、数字人信息的语义相似度,语义相似度可以由词向量等技术实现,本发明对此不作限定,
(2)根据互动信息、数字人信息与互动方式的语义相似度以及互动方式的权重确定互动信息与互动方式的相似度,计算公式如下:sim=s×w。其中s是互动信息与互动方式的语义相似度,w是互动方式的权重,sim是互动信息与互动方式的相似度
(3)选择与互动信息、数字人信息相似度sim最大的互动方式作为与陪伴目标的互动方式;
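The selection rule in steps (1)–(3) above (sim = s × w, choose the candidate with the largest sim) can be written as a few lines of Python. The similarity and weight values below are illustrative and mirror the kind of values used in the worked example later in this description.

```python
# Sketch of the interaction-mode selection rule sim = s * w: s is the semantic
# similarity (e.g. from word vectors) between the interaction/digital-person
# information and a candidate mode, w is that mode's weight in the digital
# person information. Values below are illustrative.

def choose_mode(semantic_sims: dict[str, float], mode_weights: dict[str, float]) -> str:
    weighted = {mode: s * mode_weights.get(mode, 0.0) for mode, s in semantic_sims.items()}
    return max(weighted, key=weighted.get)

# e.g. storytelling: s = 0.7, w = 1.0 -> 0.70;  singing: s = 0.8, w = 0.5 -> 0.40
print(choose_mode({"讲故事": 0.7, "唱儿歌": 0.8}, {"讲故事": 1.0, "唱儿歌": 0.5}))  # 讲故事
```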
在本方法的一个实施例中,S105所述根据所述陪伴人的数字人信息,生成所述互动方式对应的多个互动内容的得分包括,使用训练生成的模型,生成所述互动方式对应的多个互动内容的得分,其中所述模型以所述的数字人信息为输入,以所述互动方式对应 的多个互动内容上的得分大小作为输出。
具体的,生成模型的训练数据可以来源于公开数据或其他数据收集设备获取的数据。
在具体的实施例中,如下:
(1)机器人利用词向量方法计算互动方式数字人信息与互动信息的语义相似度如下,格式为“数字人信息,互动信息,语义相似度”:
讲故事,陪我说说话,0.7
唱儿歌,陪我说说话,0.8
按照公式(1),“讲故事”与“陪我说说话”的相似度为0.7*1=0.7,“唱儿歌”与“陪我说说话”的相似度为0.8*0.5=0.4,
机器人选择相似度最高的“讲故事”作为与陪伴目标的互动方式。
(2)生成互动内容的模型可以采用多种算法,例如逻辑回归、KNN、支持向量机等等,以KNN为例,KNN算法的原理是计算与测试样本距离最接近的K个样本并读取相应的标签,并将标签总数占样本总数的百分比作为测试样本在该标签下的得分。在本例中,测试样本是张三的数字人信息,标签是互动内容,我们取K=100,在获得了与张三的数字人信息最接近的100个数字人信息,统计发现他们在5个故事的选择上有15个选择了“小乌龟看爷爷”,20个选择“小猴子摘玉米”,25个选择了“小猫种鱼”,12个选择了“孔融让梨”,28个选择了“小壁虎借尾巴”,则张三在这五个故事上的得分如下:
小乌龟看爷爷 0.15,
小猴子摘玉米 0.20,
小猫种鱼 0.25,
孔融让梨 0.12,
小壁虎借尾巴 0.28
(3)通过主题分析算法LDA对五个故事进行分析,得到五个故事的主题以及权重如下:
小乌龟看爷爷:爱心(0.4)、尊老(0.6)
小猴子摘玉米:持之以恒(0.5)、一心一意(0.5)
小猫种鱼:科普(0.7),植物(0.3)
孔融让梨:礼貌(0.3),礼让(0.3),谦虚(0.4)
小壁虎借尾巴:科普(0.8),动物(0.2)
陪伴人的互动内容意图信息中只有一项“科普”,权重为1.0,
通过词向量方法计算“科普”与上述主题的语义相似度如下,格式为“互动内容意图信息,主题,语义相似度”:
科普,爱心,0.0
科普,尊老,0.2
科普,持之以恒,0.3
科普,一心一意,0.3
科普,科普,1.0
科普,植物,0.4
科普,礼貌,0.1
科普,礼让,0.1
科普,谦虚,0.4
科普,动物,0.6
设置当前行为意图权重a=0.5,然后按公式(2)计算修正后的五个故事的得分如下:
小乌龟看爷爷:s(小乌龟看爷爷)=0.15+0.5×(1.0×0.0×0.4+1.0×0.2×0.6)=0.21
小猴子摘玉米:s(小猴子摘玉米)=0.2+0.5×(1.0×0.3×0.5+1.0×0.3×0.5)=0.35
小猫种鱼:s(小猫种鱼)=0.25+0.5×(1.0×1.0×0.7+1.0×0.4×0.3)=0.66
孔融让梨:
s(孔融让梨)=0.12+0.5×(1.0×0.1×0.3+1.0×0.1×0.3+1.0×0.4×0.4)=0.23
小壁虎借尾巴:s(小壁虎借尾巴)=0.28+0.5×(1.0×0.8×1.0+1.0×0.2×0.6)=0.74
(4)由于“小壁虎借尾巴”的得分最高,机器人选择“小壁虎借尾巴”作为与陪伴目标的互动内容。
(5)机器人根据“讲故事”的互动方式和“小壁虎借尾巴”的互动内容,利用语音合成算法合成了“小壁虎借尾巴”的语音作为响应动作,通过扬声器播放该响应动作实现了对明明“陪我说说话”的行为反馈,同时很好地实现了张三“我认为科普教育很重要,是时候让孩子了解一些自然现象背后的原因了”的意图。
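The worked example above corrects the KNN base scores with the LDA topic weights and the companion's intent keyword "科普" using snew = s + a × ΣiΣj w(pi) × sim(pi,tj) × s(tj). The short Python sketch below reproduces the corrected scores (0.21, 0.35, 0.66, 0.23, ~0.74); the similarity values are copied from the example text rather than computed from word vectors.

```python
# Reproduces the corrected story scores from the example above:
#   snew = s + a * sum_i sum_j w(pi) * sim(pi, tj) * s(tj)
# Base scores come from KNN, topics and topic weights from LDA, and the
# similarity values sim("科普", topic) are copied from the example text.

base_scores = {          # KNN scores: fraction of the K=100 nearest neighbours
    "小乌龟看爷爷": 0.15, "小猴子摘玉米": 0.20, "小猫种鱼": 0.25,
    "孔融让梨": 0.12, "小壁虎借尾巴": 0.28,
}
topics = {               # LDA topics and weights s(tj) for each story
    "小乌龟看爷爷": {"爱心": 0.4, "尊老": 0.6},
    "小猴子摘玉米": {"持之以恒": 0.5, "一心一意": 0.5},
    "小猫种鱼": {"科普": 0.7, "植物": 0.3},
    "孔融让梨": {"礼貌": 0.3, "礼让": 0.3, "谦虚": 0.4},
    "小壁虎借尾巴": {"科普": 0.8, "动物": 0.2},
}
sim = {                  # semantic similarity between "科普" and each topic
    "爱心": 0.0, "尊老": 0.2, "持之以恒": 0.3, "一心一意": 0.3, "科普": 1.0,
    "植物": 0.4, "礼貌": 0.1, "礼让": 0.1, "谦虚": 0.4, "动物": 0.6,
}
w_intent = 1.0           # weight of the intent keyword "科普"
a = 0.5                  # weight of the current behavior intent

corrected = {
    story: s + a * sum(w_intent * sim[t] * st for t, st in topics[story].items())
    for story, s in base_scores.items()
}
print(corrected)                           # "小壁虎借尾巴" scores highest (~0.74)
print(max(corrected, key=corrected.get))   # -> 小壁虎借尾巴
```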
进一步,在本发明的一个实施例中,所述根据所述陪伴人的数字人信息、和所述最新行为信息,采用机器学习算法,生成所述互动方式对应的多个互动内容的得分包括,使用所述最新行为信息修正所述多个互动内容的得分,从修正的所述互动内容得分中一个或多个得分最高的内容作为互动内容。
上述步骤可以有多种实现方式,一种典型的实现方式如下:
(1)对互动内容利用主题提取技术分析,得到所述多个互动内容的多个主题以及每个主题的得分。
(2)对数字人信息中的多个特征信息,计算每个数字人信息特征信息与多个互动内容的多个主题的语义相似度,语义相似度可以由词向量等方式计算
(3)按如下公式修改多个互动内容的得分,得到修正后的多个互动内容的得分:
snew = s + a × ΣiΣj w(pi) × sim(pi,tj) × s(tj)
其中s是互动内容的得分,pi是第i项数字人信息特征信息,w(pi)表示pi的权重,tj是互动内容的第j个主题,s(tj)是tj的得分,sim(pi,tj)表示数字人特征信息与主题的语义相似度,snew表示互动内容的修正后的得分,a是当前行为意图的权重,a的值可以事先由陪伴人指定或者机器人随机生成,当需要当前行为意图对互动内容的影响较大时可以设置较大的a值,反之需要当前行为意图对互动内容的影响较小时可以设置较小的a值。
采集陪伴人的数字人信息后,还可以根据各种陪伴人信息的出现频率、出现场景对其进行排序,并根据重要程度对各项信息分配权重a;陪伴人的个人信息集合,还包括与之对应的权重,在陪伴过程中,若使用关键词对陪伴对象进行陪伴搜索,则可根据权重选择权重最大的一个或多个信息进行搜索。或者还可以基于场景,对信息集合的不同信息进行选择,比如陪伴陪伴对象游戏时,则选择信息集合中的价值观、兴趣作为关键词,而不考虑籍贯、年龄等其他信息。
在本方法的一个实施例中,所述陪伴人包括多个陪伴人,所述陪伴人的数字人信息 是所述多个陪伴人的特征信息的加权求和,所述陪伴人特征信息的权重可预先设置或人工输入获得。
多陪伴人信息融合有多种实现方式,一种典型实现方式:为每个陪伴人信息设置相应权重,权重可以由多个陪伴人手动输入初始值或者由机器人设置初始值,例如机器人可以将每个陪伴人信息的权重设置为1/N,其中N为数字人的数目,也可以根据各个陪伴人的重要程度分别配置不同权重(比如多个陪伴人包含父母和爷爷奶奶,希望在孩子的陪伴影响中加大父母的影响,则可以加大父母陪伴人对应的权重,减小爷爷奶奶陪伴人对应的权重)。
S2按照多个数字人信息的权重计算加权数字人信息,公式如下:
fk = Σi wi × fk,i
其中fk为加权数字人信息的第k项的取值,wi为第i个数字人信息的权重,fk,i为第i个数字人的第k项信息的取值。
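A minimal sketch of the weighted fusion fk = Σi wi × fk,i for multiple companions follows. The feature names and weights are illustrative; in practice the weights may be preset (for example 1/N) or entered manually to strengthen or weaken one companion's influence.

```python
# Sketch of the weighted fusion fk = sum_i wi * fk,i over multiple companions.
# Feature names and weights are illustrative.

def fuse_digital_persons(persons: list[dict], weights: list[float]) -> dict:
    features = {k for p in persons for k in p}
    return {k: sum(w * p.get(k, 0.0) for p, w in zip(persons, weights))
            for k in features}

mother = {"language": 0.9, "science": 0.4}
father = {"language": 0.3, "science": 0.8}
# Raise the mother's weight to strengthen her influence on language.
print(fuse_digital_persons([mother, father], [0.7, 0.3]))
```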
若陪伴人包含多个陪伴人,则可对多个陪伴人的信息集合进行合并,生成多个对象的信息集合。多个对象的信息集合,也可包含信息和与信息对应的权重;信息权重与合并前各个对象的信息权重相关,也与多个对象各自的重要程度相关。
作为另一个可选实施例,如果一个家庭中,妈妈的语言天分更高,希望孩子受妈妈的语言影响更多,则可以在该机器人程序中将妈妈的数字人信息中的语言特征的权重设置高于其他数字人信息的权重,甚至可以将妈妈的语言特征的权重设置为1,机器人能够对妈妈的多个类型的信息进行加权,得到妈妈的加权数字人信息,并根据互动信息和妈妈的加权数字人信息,确定与该孩子进行互动的该互动方式或该互动内容并生成响应动作,通过该响应动作向该孩子进行行为反馈实现对该互动信息的响应,本发明实施例对此不作限定。
应理解,通过改变多项数字人信息中每项数字人信息的权重,能够改变机器人在对陪伴目标进行陪伴的过程中,该每项数字人信息对该陪伴目标的影响程度。
可选地,该多项数字人信息中每项数字人信息的权重可以为陪伴人预装程序时设定的,可以为通过陪伴人的互动请求发送给机器人的,或者可以为机器人根据陪伴人的某些设置自己学习决定的,本发明实施例对此不作限定。
例如,爸爸意识到了自己脾气太冲并将此信息输入机器人,则机器人在陪伴孩子时会削弱爸爸的此类行为。通过此实现方法,能够实现基于陪伴场景选择最优的陪伴动作。
作为另一个可选实施例,如果一个家庭在教育男孩子的方面期望孩子受爸爸影响多一些,则可以在该机器人程序中将妈妈的数字人信息的权重设置低于爸爸的数字人信息的权重,甚至可以将爸爸的数字人信息的权重设置为1,妈妈的数字人信息的权重设置为0,机器人能够对爸爸的数字人信息和妈妈的数字人信息进行加权,得到陪伴人的数字人信息,并根据互动信息和该陪伴人的数字人信息,确定与孩子进行互动的互动方式和互动内容中的至少一项,并生成响应动作,通过该响应动作向该孩子进行行为反馈实现对该互动信息的响应,本发明实施例对此不作限定。
在本方法的一个实施例中,所述陪伴人包括多个陪伴人,所述陪伴人的数字人信息是通过对所述多个陪伴人的特征信息机器学习获得。
具体的,使用机器学习的算法将多个陪伴人的数字人信息综合的步骤如下:
S1读取多个陪伴人的数字人信息;
S2根据余弦相似度计算公式计算任意两个陪伴人的相似度,
S3将陪伴人的数字人信息作为顶点,若两个陪伴人的数字人信息的相似度大于一定阈值则建立边,得到陪伴人的数字人信息图G,
S4对图G采用PageRank算法,得到每个顶点的PageRank值,
S5按照如下方式得到陪伴人的数字人信息:
f = Σi wi × fi
其中f是陪伴人的数字人信息的某一项信息,wi是第i个陪伴人的PageRank值,fi是第i个陪伴人在该项信息下的取值,N是陪伴人的数目。
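Steps S1–S5 above build a similarity graph over the companions' digital person information, run PageRank on it, and use the PageRank values as fusion weights in f = Σi wi × fi. Below is a minimal sketch using networkx; the similarity threshold and the feature values are assumptions made for the example.

```python
# Sketch of steps S1-S5: vertices are companions, an edge links two companions
# whose digital person vectors have cosine similarity above a threshold, and
# the PageRank values become the fusion weights in f = sum_i wi * fi.
import numpy as np
import networkx as nx

persons = {
    "mother": np.array([0.9, 0.4, 0.7]),
    "father": np.array([0.3, 0.8, 0.6]),
    "grandma": np.array([0.8, 0.5, 0.7]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

G = nx.Graph()
G.add_nodes_from(persons)
names = list(persons)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if cosine(persons[names[i]], persons[names[j]]) > 0.9:   # S3: threshold
            G.add_edge(names[i], names[j])

weights = nx.pagerank(G)                                         # S4: PageRank
fused = sum(weights[n] * persons[n] for n in names)              # S5: weighted sum
print(weights, fused)
```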
在本方法的一个实施例中,所述方法的执行主体是伴随于陪伴目标周围的机器人,所述陪伴人的数字人信息是陪伴人随身携带的智能设备采集进而获得。如图4所示,图4提供了本发明方法各组成部分的关系图,执行过程中涉及的陪伴人410和陪伴对象420的关系,陪伴人410和陪伴对象420通过智能设备430和机器人440产生相互作用。智能设备430采集陪伴人的行数据,获得最新行为数据,智能设备430将最新行为数据发送给云服务器450,云服务器对行为数据计算分析获得陪伴人的数字人i信息,将所述数字人信息发送给机器人440。机器人根据自己采集获取的陪伴目标的互动信息基于互动信息和陪伴人的数字人信息确定与陪伴目标的互动方式和互动内容,实现与陪伴目标的互动。在实践中,根据机器人的计算能力,可以选择是否采用云服务器。可能的实施情况可以省略云服务器,由机器人直接通过行为数据计算分析获得数字人信息。机器人还可以根据机器人本体承载的传感器件直接获取陪伴人的行为数据。
图5示出了本发明实施例的机器人控制系统的示意性架构图,如图5所示,该控制系统包括至少一个陪伴人(图中示出了陪伴人510)、陪伴目标520、智能设备530、机器人540和云服务器550。
陪伴人510为期望能够经常在陪伴目标520身边,并且能够对待陪目标进行教育和影响的人,该陪伴人例如可以为陪伴目标的监护人、老师等。
智能设备530用于获取陪伴人510的行为信息,通过对该陪伴人的行为信息进行分析确定该陪伴人的最新行为信息,并将该最新行为生成数字人信息,发送至机器人540,以通过该数字人信息控制机器人与陪伴目标之间的互动。
可选地,该智能设备可以通过语义分析、机器学习算法、或者关键词匹配等方式从该陪伴人的行为信息中提取该陪伴人的数字人信息,本发明实施例对此不作限定。
可选地,该智能设备530可以为机器人540的远程设备,可以为专门与机器人配合的专用设备,还可以为安装了与机器人配合的程序的智能设备,例如可以为移动终端、可穿戴式智能设备或陪伴在陪伴人身边的机器人。
可选地,该智能设备530可以为数据收集设备,该数据收集设备可以直接接收该陪伴人通过语音输入、视频输入或键盘触摸屏输入等方式输入的行为信息,或者该智能设备530可以通过能够与该智能设备相互通信数据收集设备获得该陪伴人的行为信息,本发明实施例对此不作限定。
机器人540用于通过获取陪伴人的数字人信息,并获取互动信息,根据该互动信息和该交流特征信息,确定与陪伴目标进行互动的互动方式和互动内容中的至少一项并生成响应动作,通过该响应动作向该陪伴目标进行行为反馈实现对该互动信息的响应。
云服务器550用于对智能设备530和机器人540之间传输的信息进行转发或分析处理。
可选地,该机器人可以接收数据收集设备发送的陪伴人的数字人信息;可以通过数据收集设备获取陪伴人的行为信息,并对该陪伴人的行为信息进行分析得到该陪伴人的数字人信息;可以通过云服务器获取陪伴人的行为信息,该数字人信息是该云服务器通过对该陪伴人的行为信息分析确定;可以为陪伴人直接输入该数字人信息,还可以为从该机器人的存储器中获得的数据收集设备提前预存的数字人信息,本发明实施例对此不作限定。
通过数据收集设备或云服务器对该陪伴人的行为信息进行分析得到该陪伴人的数字人信息,能够减轻机器人的计算量和信息处理速度,提高了机器人的性能。
可选地,该数据收集设备可以为陪伴人随身携带的、可移动的智能设备,该智能设备可以为机器人的远程设备,可以为专门与机器人配合的专用设备,还可以为安装了与机器人配合的程序的智能设备,例如可以为手机、可穿戴式智能设备或陪伴在陪伴人身边的机器人,本发明实施例对此不作限定。
可选地,该数据收集设备可以通过传感器获取陪伴人的行为信息,例如可以接收该陪伴人通过语音输入、视频输入或键盘触摸屏输入等方式输入的行为信息,本发明实施例对此不作限定。
本发明实施例提供了一种机器人设备,如图6所示,图6提供了本发明一个实施例的结构图。所述机器人设备包括:所述设备包括信息获取模块601、互动方式生成模块603、响应模块607;所述信息获取模块601用于采集陪伴目标的互动信息,并获取陪伴人的数字人信息,所述互动信息包含所述陪伴目标对所述机器人的声音或动作互动信息,所述数字人信息包含数字化的各项陪伴人信息集合;所述互动方式生成模块603用于用于根据所述互动信息和所述数字人信息,确定与所述陪伴目标的互动方式,根据所述陪伴人的数字人信息,采用机器学习算法,生成与所述互动方式对应的互动内容。还可以采用机器学习算法,生成所述互动方式对应的多个互动内容的得分,从所述多个互动内容得分中选择一个或多个内容作为互动内容。响应模块607用于根据所述互动方式和所述互动内容生成对所述陪伴目标的响应动作。
在本发明的一个实施例中,信息获取模块601还用于获取当前时刻上一时间段内所述陪伴人的最新行为信息,所述陪伴人的行为信息由所述陪伴人随身携带移动设备采集;所述互动方式生成模块603用于根据所述陪伴人的数字人信息、和所述最新行为信息,采用机器学习算法,生成所述互动方式对应的多个互动内容的得分。进一步,所述互动方式生成模块603还可以具体用于使用所述互动信息、所述数字人信息、所述最新行为信息,确定与所述陪伴目标的互动方式。
在本发明的一个实施例中,信息获取模块还用于获取当前时刻上一时间段内所述陪伴人的最新行为信息,通过分析所述最新行为信息,获得所述陪伴人的数字人更新信息,所述数字人更新信息用于完善或刷新所述数字人信息,所述数字人信息通过对所述陪伴人的行为信息分析或人为输入方式确定。进一步,还可以使用附加权重的所述数字人更 新信息,叠加到所述数字人信息上,以更新信息完善或刷新所述数字人信息。在一种具体的实现中,所述附加权重数值可调整,以加大或减小当前时刻上一时间段内所述陪伴人的行为信息对所述数字人信息的影响。
在本发明的一个实施例中,信息获取模块601还可以用于通过机器学习的算法将所述数字人更新信息,叠加到所述数字人信息。
在本发明的一个实施例中,所述数字人信息包括以下类型信息中的一项或多项:个人基本信息、个人经历信息、价值观信息、教育理念信息、行为习惯信息;所述互动方式生成模块603用于计算所述数字人信息、互动信息与所述互动方式的语义相似度,选择语义相似度最大的互动方式作为与所述陪伴目标的互动方式。所述互动方式生成模块603还用于使用训练生成的模型,生成所述互动方式对应的多个互动内容的得分,其中所述模型以所述的数字人信息为输入,以所述互动方式对应的多个互动内容上的得分大小作为输出。
在本发明的一个实施例中,所述陪伴人包括多个陪伴人,所述陪伴人的数字人信息是所述多个陪伴人的特征信息的加权求和,所述陪伴人特征信息的权重可预先设置或人工输入获得。在一个更具体的实施例中,所述陪伴人包括多个陪伴人,所述陪伴人的数字人信息是通过对所述多个陪伴人的特征信息机器学习获得。
在各个实施例中所述设备的执行主体是伴随于陪伴目标周围的机器人,所述陪伴人的数字人信息由所述陪伴人随身携带移动设备采集。上述的图6所示的机器人各个模块可以完成并执行各个方法实施例的流程步骤,具有方法实施例中所需要的各功能。
图7示出了本发明实施例的另一机器人设备700,该机器人700包括:处理器710、发送器720、接收器730、存储器740和总线系统750。机器人还应该有执行结构件,可以是机械装置,例如机械手臂,履带/轮式移动机械装置,另外还有显示器,麦克风、摄像头等于外界互动活动的组件,可以统称为执行组件。其中,处理器710、发送器720、接收器730和存储器740通过总线系统750相连,该存储器740用于存储指令,该处理器710用于执行该存储器740存储的指令,以控制该发送器720发送信号或控制该接收器730接收信号。发送器720和接收器730可以是通信接口,具体发送器720可以是用于接收数据或指令的接口,接收器730可以是用于发送数据或指令的接口,在此不再对发送器720和接收器730的具体形式进行举例说明。应理解,在本发明实施例中,该处理器可以是中央处理单元(英文:central processing unit,简称:CPU),该处理器还可以是其他通用处理器、数字信号处理器(英文:digital signal processing,简称:DSP)、专用集成电路ASIC、现成可编程门阵列(英文:field programmable gate array,简称:FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。上述的图7所示的机器人各个器件在处理器的控制下,相互配合,可以完成并执行各个方法实施例的流程步骤,具有方法实施例中所需要的各功能。
机器人700可以用于执行上述方法实施例中与数据收集设备对应的各个步骤或流程。可选地,该存储器740可以包括只读存储器和随机存取存储器,并向处理器提供指令和数据。存储器的一部分还可以包括非易失性随机存取存储器。例如,存储器还可以存储设备类型的信息。该处理器710可以用于执行存储器中存储的指令,并且该处理器执行该指令时,可以执行上述方法实施例中与数据收集设备对应的各个步骤。在本发明实施 例中,该处理器可以是中央处理单元(英文:central processing unit,简称:CPU),该处理器还可以是其他通用处理器、数字信号处理器(英文:digital signal processing,简称:DSP)、专用集成电路ASIC、现成可编程门阵列(英文:field programmable gate array,简称:FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。处理器能够承载或者实现上述的信息获取模块601、互动方式生成模块603,并且控制响应模块607。响应模块607应该可以是机器人的动作执行结构件。
在实现过程中,上述方法的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。结合本发明实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器执行存储器中的指令,结合其硬件完成上述方法的步骤。为避免重复,这里不再详细描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另外,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口、装置或单元的间接耦合或通信连接,也可以是电的,机械的或其它的形式连接。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本发明实施例方案的目的。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分,或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(英文:read-only memory,简称:ROM)、随机存取存储器(random access memory,简称:RAM)、磁碟或者光盘等各种可以存储程序代码的介质。

Claims (18)

  1. 一种机器人的控制方法,其特征在于,包括:
    采集陪伴目标的互动信息,并获取陪伴人的数字人信息,所述互动信息包含所述陪伴目标对机器人的声音或动作的交互信息,所述数字人信息包含数字化的所述陪伴人的信息集合;
    根据所述互动信息和所述数字人信息,确定与所述陪伴目标的互动方式;
    根据所述陪伴人的数字人信息,采用机器学习算法,生成与所述互动方式对应的互动内容;
    根据所述互动方式和所述互动内容生成对所述陪伴目标的响应动作。
  2. 根据权利要求1所述的控制方法,其特征在于,所述方法还包括:
    获取当前时刻前一时间段内所述陪伴人的行为信息;
    根据所述陪伴人的数字人信息,采用机器学习算法,生成与所述互动方式对应的所述互动内容包括:
    根据所述陪伴人的数字人信息和所述行为信息,采用机器学习算法,生成所述互动方式对应的多个可用互动内容,从所述多个可用互动内容中选择一个或多个所述互动内容。
  3. 根据权利要求1或2所述的控制方法,其特征在于,
    获取当前时刻前一时间段内所述陪伴人的行为信息;
    根据所述互动信息和所述数字人信息,确定与所述陪伴目标的互动方式还包括:使用所述互动信息、所述数字人信息和所述行为信息,确定与所述陪伴目标的互动方式。
  4. 根据权利要求1-3任一权利要求所述的控制方法,其特征在于,
    所述方法还包括:获取当前时刻前一时间段内所述陪伴人的行为信息;
    通过分析所述行为信息,获得所述陪伴人的数字人更新信息,所述数字人更新信息用于更新所述数字人信息。
  5. 根据权利要求4所述的控制方法,其特征在于,
    所述获取陪伴人的数字人信息之前,所述方法还包括:使用附加权重的所述数字人更新信息,叠加到所述数字人信息上,以更新信息修正所述数字人信息。
  6. 根据权利要求5所述的控制方法,其特征在于,还包括:通过调整所述附加权重数值,以加大或减小当前时刻上一时间段内所述陪伴人的行为信息对所述数字人信息的影响。
  7. 根据权利要求1-6任一项所述的控制方法,其特征在于,所述数字人信息包括以下类型信息中的一项或多项:个人基本信息、个人经历信息、价值观信息、教育理念信息、行为习惯信息;
    所述使用所述互动信息和所述数字人信息,确定与所述陪伴目标的互动方式包括:计算所述数字人信息、互动信息与所述互动方式的语义相似度,选择语义相似度最大的互动方式作为与所述陪伴目标的互动方式。
  8. 根据权利要求1-7任一项所述的控制方法,其特征在于,还包括:
    根据所述陪伴人的数字人信息,生成所述互动方式对应的多个互动内容的得分,根据 所述得分从所述多个可用互动内容中选择一个或多个所述互动内容。
  9. 根据权利要求8任一项所述的控制方法,其特征在于,根据所述陪伴人的数字人信息,生成所述互动方式对应的多个互动内容的得分包括,使用训练生成的模型,生成所述互动方式对应的多个互动内容的得分,其中所述模型以所述的数字人信息为输入,以所述互动方式对应的多个互动内容上的得分大小作为输出。
  10. 根据权利要求1-8任一项所述的控制方法,其特征在于,所述陪伴人包括多个陪伴人,所述陪伴人的数字人信息是所述多个陪伴人的特征信息的加权求和。
  11. 一种机器人设备,其特征在于,包括:所述设备包括信息获取模块、互动方式生成模块、互动内容生成模块、响应模块;
    所述信息获取模块,用于采集陪伴目标的互动信息,并获取陪伴人的数字人信息,所述互动信息包含所述陪伴目标的声音或动作的交互信息,所述数字人信息包含数字化的所述陪伴人的信息集合;
    所述互动方式生成模块,用于根据所述互动信息和所述数字人信息,确定与所述陪伴目标的互动方式,根据所述陪伴人的数字人信息,采用机器学习算法,生成与所述互动方式对应的互动内容;
    所述响应模块,根据所述互动方式和所述互动内容生成对所述陪伴目标的响应动作。
  12. 根据权利要求11所述的机器人设备,其特征在于,
    所述信息获取模块,还用于获取当前时刻前一时间段内所述陪伴人的行为信息;
    所述互动方式生成模块,具体用于根据所述互动信息和所述数字人信息,确定与所述陪伴目标的互动方式,根据所述陪伴人的数字人信息和所述行为信息,采用机器学习算法,生成所述互动方式对应的多个可用互动内容,从所述多个可用互动内容中选择一个或多个所述互动内容
  13. 根据权利要求11所述的机器人设备,其特征在于,
    所述信息获取模块,还用于获取当前时刻前一时间段内所述陪伴人的行为信息;
    所述互动方式生成模块,具体用于使用所述互动信息、所述数字人信息和所述行为信息,确定与所述陪伴目标的互动方式,根据所述陪伴人的数字人信息,采用机器学习算法,生成与所述互动方式对应的互动内容。
  14. 根据权利要求11-13任一项所述的机器人设备,其特征在于,
    所述信息获取模块,还用于获取当前时刻上一时间段内所述陪伴人的行为信息,所述陪伴人的行为信息由所述陪伴人随身携带移动设备采集;用于通过分析所述最新行为信息,获得所述陪伴人的数字人更新信息,所述数字人更新信息用于更新所述数字人信息,所述数字人信息通过对所述陪伴人的行为信息分析或人为输入方式确定。
  15. 根据权利要求14所述的机器人设备,其特征在于,
    所述信息获取模块,还用于使用附加权重的所述数字人更新信息,叠加到所述数字人信息上,以更新信息修正所述数字人信息。
  16. 根据权利要求15所述的机器人设备,其特征在于,所述信息获取模块,还用于调整所述附加权重数值,以加大或减小当前时刻上一时间段内所述陪伴人的行为信息对所述数字人信息的影响。
  17. 根据权利要求11-16任一项所述的机器人设备,其特征在于,所述数字人信息包括以下类型信息中的一项或多项:个人基本信息、个人经历信息、价值观信息、教育 理念信息、行为习惯信息;
    所述互动方式生成模块,具体用于计算所述数字人信息、互动信息与所述互动方式的语义相似度,选择语义相似度最大的互动方式作为与所述陪伴目标的互动方式。
  18. 根据权利要求11-17任一项所述的机器人设备,其特征在于,所述互动内容生成模块,还用于根据所述陪伴人的数字人信息,生成所述互动方式对应的多个互动内容的得分,根据所述得分从所述多个可用互动内容中选择一个或多个所述互动内容其中所述模型以所述的数字人信息为输入,以所述互动方式对应的多个互动内容上的得分大小作为输出。
PCT/CN2017/097517 2016-08-17 2017-08-15 机器人的控制方法及陪伴机器人 WO2018033066A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP17841038.7A EP3493032A4 (en) 2016-08-17 2017-08-15 METHOD FOR CONTROLLING ROBOT AND ROBOT COMPAGNON
US16/276,576 US11511436B2 (en) 2016-08-17 2019-02-14 Robot control method and companion robot

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201610681117 2016-08-17
CN201610681117.X 2016-08-17
CN201710306154.7 2017-05-04
CN201710306154.7A CN107784354B (zh) 2016-08-17 2017-05-04 机器人的控制方法及陪伴机器人

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/276,576 Continuation US11511436B2 (en) 2016-08-17 2019-02-14 Robot control method and companion robot

Publications (1)

Publication Number Publication Date
WO2018033066A1 true WO2018033066A1 (zh) 2018-02-22

Family

ID=61196423

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/097517 WO2018033066A1 (zh) 2016-08-17 2017-08-15 机器人的控制方法及陪伴机器人

Country Status (1)

Country Link
WO (1) WO2018033066A1 (zh)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101795830A (zh) * 2007-09-06 2010-08-04 奥林巴斯株式会社 机器人控制系统、机器人、程序以及信息存储介质
CN104290097A (zh) * 2014-08-19 2015-01-21 白劲实 一种学习型智能家庭社交机器人系统和方法
CN105138710A (zh) * 2015-10-12 2015-12-09 金耀星 一种聊天代理系统及方法
CN105389461A (zh) * 2015-10-21 2016-03-09 胡习 一种交互式儿童自主管理系统及其管理方法
CN105832073A (zh) * 2016-03-22 2016-08-10 华中科技大学 一种智能交互的情感呵护抱枕机器人系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3493032A4 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108393898A (zh) * 2018-02-28 2018-08-14 上海乐愚智能科技有限公司 一种智能陪伴方法、装置、机器人及存储介质
CN108858220A (zh) * 2018-07-05 2018-11-23 安徽省弘诚软件开发有限公司 一种多动能机器人
US10826864B2 (en) 2018-07-27 2020-11-03 At&T Intellectual Property I, L.P. Artificially intelligent messaging
CN109830304A (zh) * 2018-12-19 2019-05-31 中南大学湘雅三医院 基于亲情音视频的老人健康管理系统
CN116975654A (zh) * 2023-08-22 2023-10-31 腾讯科技(深圳)有限公司 对象互动方法、装置、电子设备、存储介质及程序产品
CN116975654B (zh) * 2023-08-22 2024-01-05 腾讯科技(深圳)有限公司 对象互动方法、装置、电子设备及存储介质

Similar Documents

Publication Publication Date Title
US11511436B2 (en) Robot control method and companion robot
WO2018033066A1 (zh) 机器人的控制方法及陪伴机器人
US11100384B2 (en) Intelligent device user interactions
US11908245B2 (en) Monitoring and analyzing body language with machine learning, using artificial intelligence systems for improving interaction between humans, and humans and robots
Iocchi et al. RoboCup@ Home: Analysis and results of evolving competitions for domestic and service robots
JP2018014094A (ja) 仮想ロボットのインタラクション方法、システム及びロボット
US11074491B2 (en) Emotionally intelligent companion device
CN114830139A (zh) 使用模型提供的候选动作训练模型
Ramakrishnan et al. Toward automated classroom observation: Multimodal machine learning to estimate class positive climate and negative climate
JP6076425B1 (ja) 対話インターフェース
US20230108256A1 (en) Conversational artificial intelligence system in a virtual reality space
CN111465949A (zh) 信息处理设备、信息处理方法和程序
CN116704085B (zh) 虚拟形象生成方法、装置、电子设备和存储介质
Zhong et al. On the gap between domestic robotic applications and computational intelligence
Li Artificial intelligence revolution: How AI will change our society, economy, and culture
Henderson et al. Development of an American Sign Language game for deaf children
KR20190118108A (ko) 전자 장치 및 그의 제어방법
JP2017091570A (ja) 対話インターフェース
Wan et al. Midoriko chatbot: LSTM-based emotional 3D avatar
JP7157239B2 (ja) 感情認識機械を定義するための方法及びシステム
CN111949773A (zh) 一种阅读设备、服务器以及数据处理的方法
Chen et al. Comparison studies on active cross-situational object-word learning using non-negative matrix factorization and latent dirichlet allocation
CN113301352B (zh) 在视频播放期间进行自动聊天
WO2022165109A1 (en) Methods and systems enabling natural language processing, understanding and generation
Gogineni et al. Gesture and speech recognizing helper bot

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17841038

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2017841038

Country of ref document: EP

Effective date: 20190228