CN109857929A - Human-computer interaction method and device for an intelligent robot - Google Patents

Human-computer interaction method and device for an intelligent robot

Info

Publication number
CN109857929A
CN109857929A (application CN201811632185.2A)
Authority
CN
China
Prior art keywords
information
user
active user
active
intelligent robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811632185.2A
Other languages
Chinese (zh)
Other versions
CN109857929B (en)
Inventor
贾志强 (Jia Zhiqiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Guangnian Wuxian Technology Co Ltd
Original Assignee
Beijing Guangnian Wuxian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Guangnian Wuxian Technology Co Ltd filed Critical Beijing Guangnian Wuxian Technology Co Ltd
Priority to CN201811632185.2A priority Critical patent/CN109857929B/en
Publication of CN109857929A publication Critical patent/CN109857929A/en
Application granted granted Critical
Publication of CN109857929B publication Critical patent/CN109857929B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

A human-computer interaction method for an intelligent robot operating in a picture-book reading mode comprises: Step 1, obtaining multi-modal interaction information about the current user; Step 2, judging, according to the multi-modal interaction information, whether a preset active push condition is met, and if it is met, executing Step 3; Step 3, generating corresponding picture-book recommendation information according to the multi-modal interaction information and outputting it. The method can actively judge whether a picture-book story needs to be pushed to the user, thereby avoiding the poor user experience caused by existing devices that merely read aloud the recognized picture-book images. It makes the intelligent robot more lively in its expression and better able to understand the actual needs of the user, which improves the user experience and user stickiness of the intelligent robot.

Description

Human-computer interaction method and device for an intelligent robot
Technical field
The present invention relates to the field of robot technology, and in particular to a human-computer interaction method and device for an intelligent robot.
Background technique
With the continuous development of science and technology and the introduction of information technology, computer technology and artificial intelligence technology, robotics research has gradually moved beyond the industrial field and extended into areas such as medical care, health care, the home, entertainment and the service industry. People's requirements for robots have likewise risen from simple, repetitive mechanical actions to intelligent robots capable of anthropomorphic question answering, autonomy and interaction with other robots, so human-computer interaction has become an important factor determining the development of intelligent robots. Improving the interaction capability of intelligent robots and enhancing their human-likeness and intelligence is therefore an important problem that urgently needs to be solved.
Summary of the invention
To solve the above problems, the present invention provides a human-computer interaction method for an intelligent robot. In a picture-book reading mode, the method comprises:
Step 1: obtaining multi-modal interaction information about the current user;
Step 2: judging, according to the multi-modal interaction information, whether a preset active push condition is met, and if it is met, executing Step 3;
Step 3: generating corresponding picture-book recommendation information according to the multi-modal interaction information and outputting it.
According to one embodiment of the present invention, in Step 2, if no new picture-book information input by the current user can be obtained within a first preset duration after a picture book has been read to the end, it is determined that the preset active push condition is met.
According to one embodiment of the present invention, in Step 2, if recognition of the current page fails repeatedly during picture-book reading, it is determined that the preset active push condition is met.
According to one embodiment of the present invention, if recognition of the current page fails repeatedly during picture-book reading, a first query message is generated in Step 3, feedback information from the current user in response to the first query message is obtained, and, according to the feedback information, reading of the current picture book is continued or the picture book is switched.
According to one embodiment of the present invention, in Step 2, if it is detected that the duration for which the current user stays on the current page exceeds a second preset duration, it is determined that the preset active push condition is met.
According to one embodiment of the present invention, in Step 2, emotion information of the current user is determined according to the multi-modal interaction information, wherein, if the emotion information of the current user belongs to a preset positive emotion or a preset negative emotion, it is determined that the preset active push condition is met, so that in Step 3 the picture-book recommendation information is generated according to the emotion information of the user.
According to one embodiment of the present invention, in Step 2, an interaction intention of the current user is determined according to the multi-modal interaction information, and whether the preset active push condition is met is judged according to the interaction intention; if it is met, the corresponding picture-book recommendation information is generated in Step 3 according to the interaction intention.
According to one embodiment of the present invention, the method further comprises:
Step 4: obtaining feedback information from the current user on the picture-book recommendation information, and, according to the feedback information, pushing a corresponding picture book or continuing the current operation.
The present invention also provides a program product on which program code executable to perform any of the method steps described above is stored.
The present invention also provides a human-computer interaction system for an intelligent robot, characterized in that the system is equipped with an operating system capable of loading and executing the program product described above.
The present invention also provides a human-computer interaction device for an intelligent robot, the device comprising:
an interaction information acquisition module, configured to obtain multi-modal interaction information about the current user; and
a picture-book recommendation information generation module, configured to judge, according to the multi-modal interaction information, whether a preset active push condition is met, and, if it is met, to generate corresponding picture-book recommendation information according to the multi-modal interaction information and output it.
According to one embodiment of the present invention, if no new picture-book information input by the current user can be obtained within a first preset duration after a picture book has been read to the end, the picture-book recommendation information generation module is configured to determine that the preset active push condition is met.
According to one embodiment of the present invention, if recognition of the current page fails repeatedly during picture-book reading, the picture-book recommendation information generation module is configured to determine that the preset active push condition is met.
According to one embodiment of the present invention, if recognition of the current page fails repeatedly during picture-book reading, the picture-book recommendation information generation module is configured to generate a first query message, obtain feedback information from the current user in response to the first query message, and, according to the feedback information, continue reading the current picture book or switch the picture book.
According to one embodiment of the present invention, if it is detected that the duration for which the current user stays on the current page exceeds a second preset duration, the picture-book recommendation information generation module is configured to determine that the preset active push condition is met.
According to one embodiment of the present invention, the picture-book recommendation information generation module is configured to determine emotion information of the current user according to the multi-modal interaction information, wherein, if the emotion information of the current user belongs to a preset positive emotion or a preset negative emotion, it is determined that the preset active push condition is met, so that the picture-book recommendation information is generated according to the emotion information of the user.
According to one embodiment of the present invention, the picture-book recommendation information generation module is configured to determine an interaction intention of the current user according to the multi-modal interaction information and to judge, according to the interaction intention, whether the preset active push condition is met; if it is met, the corresponding picture-book recommendation information is generated according to the interaction intention.
According to one embodiment of the present invention, the device further comprises:
a picture-book push module, configured to obtain feedback information from the current user on the picture-book recommendation information and, according to the feedback information, to push a corresponding picture book or continue the current operation.
The present invention also provides a children's smart device, the device comprising a processor and a storage device, wherein the storage device stores a program, and the processor is configured to execute the program in the storage device so as to implement any of the methods described above.
The human-computer interaction method for an intelligent robot provided by the present invention can actively judge whether a picture-book story needs to be pushed to the user, thereby avoiding the poor user experience caused by existing devices that merely read aloud the recognized picture-book images. The method determines the actual state of the user according to the obtained multi-modal interaction information about the user, and then adjusts the picture-book reading state of the intelligent robot according to that actual state (for example, page-turning behavior, emotional state and interaction intention), so that the intelligent robot is more lively in its expression and better understands the actual needs of the user, which improves the user experience and user stickiness of the intelligent robot.
Other features and advantages of the present invention will be set forth in the following description and will in part become apparent from the description or be understood through implementation of the invention. The objectives and other advantages of the invention can be realized and obtained by the structures particularly pointed out in the description, the claims and the accompanying drawings.
Detailed description of the invention
To explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below:
Fig. 1 is a schematic flowchart of a human-computer interaction method for an intelligent robot according to an embodiment of the present invention;
Fig. 2 to Fig. 7 are schematic flowcharts of human-computer interaction methods for an intelligent robot according to different embodiments of the present invention;
Fig. 8 is a schematic structural diagram of a human-computer interaction device for an intelligent robot according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of an implementation of human-computer interaction for an intelligent robot according to an embodiment of the present invention.
Specific embodiment
Hereinafter, the embodiments of the present invention are described in detail with reference to the accompanying drawings and examples, so that how the present invention applies technical means to solve technical problems and achieve technical effects can be fully understood and implemented. It should be noted that, as long as no conflict arises, the embodiments of the present invention and the features of those embodiments may be combined with one another, and the resulting technical solutions all fall within the scope of protection of the present invention.
Meanwhile, in the following description numerous specific details are set forth for illustrative purposes in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to those skilled in the art that the present invention may be practiced without these specific details or without the specific manner described herein.
In addition, the steps shown in the flowcharts of the accompanying drawings may be executed in a computer system such as a set of computer-executable instructions, and, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from the order given herein.
With the development of artificial intelligence, more and more picture-book reading robots are being pushed into the children's early-education market. By recognizing content on cards, such as vehicles of various shapes, musical instruments, animals and plants, these robots broadcast voice or display content on a screen so as to improve children's cognitive ability.
However, existing picture-book reading robots can only read aloud the picture-book images they recognize and cannot adjust their own picture-book reading state according to the actual state of the user, so the picture-book reading robot appears too rigid, which is not conducive to the popularization and use of picture-book reading robots.
In view of the problems existing in the prior art, the present invention provides a new human-computer interaction method for an intelligent robot. The method can actively push picture books during human-computer interaction, thereby improving the user experience of the device.
Fig. 1 shows a schematic flowchart of the human-computer interaction method for an intelligent robot.
As shown in Fig. 1, the method first obtains, in step S101, multi-modal interaction information about the current user. It should be noted that, in different embodiments of the present invention and depending on the actual situation, the multi-modal interaction information about the user obtained in step S101 in the picture-book reading mode may take different reasonable forms. For example, in one embodiment of the present invention, the multi-modal interaction information about the user obtained in step S101 may be image information or voice information containing the user, may be image information containing the state of the picture book, or may be information transmitted from a mobile client via the Internet of Things.
After obtaining the multi-modal interaction information about the current user, the method determines, in step S102, whether a preset active push condition is met according to the multi-modal interaction information obtained in step S101. If the multi-modal interaction information indicates that the current interaction scenario meets the preset active push condition, the method generates, in step S103, corresponding picture-book recommendation information according to the multi-modal interaction information about the user obtained in step S101 and outputs it to the current user.
As shown in Fig. 1, optionally, after outputting the generated picture-book recommendation information to the current user, the method may continuously obtain, in step S104, feedback information input by the current user in response to the picture-book recommendation information. After obtaining the feedback information, the method may, in step S105, push a corresponding picture book or continue the current operation according to the feedback information.
For example, if the user is not interested in the picture-book recommendation information output in step S103, the method will probably obtain no valid feedback information in step S104, or will obtain information indicating that the user wishes to continue reading the current picture book; in that case the method continues the current operation in step S105.
If, on the other hand, the user is interested in the picture-book recommendation information actively output in step S103 (for example, "The Wilful Little Lion"), the method will obtain positive feedback information in step S104 (for example, the user inputs "Yes, I want to listen to this story" or "Tell it, tell it quickly"), and the method will then, in step S105, push the picture book corresponding to the picture-book recommendation information to the current user.
It should be noted that, in some embodiments of the present invention, the feedback information obtained in step S104 may also be something like "I want to listen to The Wilful Little Lion". In that case the method may, in step S105, re-determine the picture book to be pushed (i.e. the story "The Wilful Little Lion") according to the feedback information of the current user, and output the content of that picture book to the current user (for example, by playing the story audio of the picture book "The Wilful Little Lion" to the current user).
It should also be noted that, in this embodiment, after the intelligent robot is powered on the method may likewise determine that the preset active push condition is met, so as to actively generate corresponding picture-book recommendation information according to the obtained multi-modal interaction information and output it.
In order to more clearly illustrate the implementation principle, implementation process and advantages of the human-computer interaction method for an intelligent robot provided by the present invention, the method is further described below with reference to different embodiments.
Embodiment 1:
Fig. 2 shows a schematic flowchart of the human-computer interaction method for an intelligent robot provided by this embodiment.
As shown in Fig. 2, in this embodiment, after the current picture book has been read to the end, the method obtains, in step S201, multi-modal interaction information about the current user. The principle and process are similar to those described for step S101 above, so the content of step S201 is not repeated here.
After obtaining the multi-modal interaction information about the current user, the method judges, in step S202, according to the multi-modal interaction information input by the current user, whether new picture-book information (such as a picture-book cover or a new page of a picture book) is obtained within a first preset duration after the current picture book has been read to the end.
Specifically, in this embodiment, after the current picture book has been read to the end, the method preferably starts timing and continues to obtain recognition images of the picture-book cover or of pages of the picture book.
It should be noted that the specific value of the first preset duration may be set to different reasonable values according to actual needs; the present invention does not limit the specific value of the first preset duration.
In this embodiment, if no new picture-book information has been obtained when the timed duration reaches the first preset duration, the method determines that the preset active push condition is met, and then executes steps S203 to S205. The implementation principle and process of steps S203 to S205 are similar to those described for steps S103 to S105 above, so the specific content of steps S203 to S205 is not repeated here.
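A minimal sketch of the timing logic of steps S201 and S202 is given below, assuming a polling-based page recognizer; the value of the first preset duration and the recognize_page stub are illustrative assumptions, since the patent leaves both open.

```python
import time

FIRST_PRESET_DURATION = 30.0  # seconds; an assumed value, configurable in practice

def recognize_page():
    """Stand-in for cover/page recognition on the latest camera frame.
    Returns a page identifier, or None when nothing is recognized."""
    return None

def wait_for_new_book_info():
    """Start timing when the current picture book is finished (S201/S202).
    Returns True if new picture-book information arrives in time, and
    False if the preset active push condition should be treated as met."""
    start = time.monotonic()
    while time.monotonic() - start < FIRST_PRESET_DURATION:
        if recognize_page() is not None:   # a new cover or inner page was seen
            return True
        time.sleep(0.5)                    # poll the recognition result periodically
    return False                           # timeout: trigger active push (S203-S205)
```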
Embodiment 2:
Fig. 3 shows a schematic flowchart of the human-computer interaction method for an intelligent robot provided by this embodiment.
As shown in Fig. 3, in this embodiment the method obtains, in step S301, multi-modal interaction information about the current user. Specifically, in this embodiment, the multi-modal interaction information obtained in step S301 preferably includes picture-book image information.
After obtaining the multi-modal interaction information, the method detects, in step S302, according to the multi-modal interaction information, whether a valid picture-book page can be detected. If recognition of the current page fails repeatedly during picture-book reading, the method determines that the preset active push condition is met.
For example, if the current user keeps turning pages continuously, the picture book does not stay on any page long enough, and the method cannot recognize the current page from the obtained picture-book images, so recognition of the current page fails. When the number of recognition failures for the current page reaches a predetermined number (the predetermined number may be set to different reasonable values according to actual needs and is not limited here), the method determines, in step S302, that the preset active push condition is met.
As shown in Fig. 3, in this embodiment, if the preset active push condition is met, the method generates, in step S303, corresponding picture-book recommendation information according to the obtained multi-modal interaction information and outputs it.
Preferably, after outputting the picture-book recommendation information to the current user, the method may continuously obtain, in step S304, feedback information from the current user in response to the picture-book recommendation information, and, in step S305, push a corresponding picture book to the current user or continue the current operation according to the feedback information.
In this embodiment, the implementation principle and process of steps S303 to S305 are similar to those of steps S103 to S105 above, so the specific content of steps S303 to S305 is not repeated here.
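The failure-counting logic of step S302 can be pictured with the following sketch; the predetermined failure threshold is an assumed value, and the monitor class is an illustrative construction rather than the disclosed implementation.

```python
PREDETERMINED_FAILURES = 5   # assumed threshold; the patent leaves the value open

class PageRecognitionMonitor:
    """Tracks consecutive failures to recognize the current page (S302)."""

    def __init__(self, threshold=PREDETERMINED_FAILURES):
        self.threshold = threshold
        self.failures = 0

    def report(self, recognized_page):
        """Feed the latest recognition result; returns True when the
        preset active push condition should be treated as met."""
        if recognized_page is None:          # e.g. the user keeps flipping pages
            self.failures += 1
        else:
            self.failures = 0                # a page was recognized; reset the count
        return self.failures >= self.threshold

monitor = PageRecognitionMonitor()
for result in [None, None, None, None, None]:   # five failed recognitions in a row
    triggered = monitor.report(result)
print(triggered)   # True: generate and output picture-book recommendation (S303)
```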
Embodiment 3:
Fig. 4 shows a schematic flowchart of the human-computer interaction method for an intelligent robot provided by this embodiment.
As shown in Fig. 4, the method provided by this embodiment obtains, in step S401, multi-modal interaction information about the current user. Specifically, in this embodiment, the multi-modal interaction information obtained in step S401 preferably includes picture-book image information.
After obtaining the multi-modal interaction information, the method detects, in step S402, according to the multi-modal interaction information, whether a valid picture-book page can be detected, or detects, during picture-book reading, that the last page of the picture book has been reached, i.e. that the current page is the last page of the picture book. If recognition of the current page fails repeatedly during picture-book reading, the method likewise determines that the preset active push condition is met.
The specific implementation principle and process of steps S401 and S402 are similar to the content disclosed for steps S301 and S302 above, so steps S401 and S402 are not repeated here.
As shown in Fig. 4, in this embodiment, if the preset active push condition is met, it indicates that the current user very probably does not like the current picture book, so the method generates and outputs a first query message in step S403. The method then continuously obtains, in step S404, feedback information from the current user in response to the first query message, and, in step S405, continues reading the current picture book or switches the picture book according to the feedback information.
For example, if recognition of the current page fails repeatedly during picture-book reading, the method carries out a voice dialogue with the current user and outputs voice information such as "Don't you like this book?" to the user. The current user can then input corresponding feedback information in response to this voice information. If the feedback information input by the current user is the voice information "I don't like it", the method switches the picture book so as to push another picture book to the current user; if the feedback information input by the current user is the voice information "I like it", the method continues reading the current picture book.
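A possible shape of the query-and-feedback exchange of steps S403 to S405 is sketched below; the dialogue hooks passed in as callables are hypothetical stand-ins for the robot's speech output, speech recognition and reading components.

```python
def ask_and_decide(listen, speak, switch_book, continue_reading):
    """Sketch of S403-S405: ask whether the user dislikes the current book,
    then switch or continue according to the spoken reply."""
    speak("Don't you like this book?")            # first query message (S403)
    reply = listen()                              # feedback information (S404)
    if reply and "don't like" in reply.lower():
        switch_book()                             # push another picture book (S405)
    else:
        continue_reading()                        # keep reading the current book (S405)

# Example run with canned stand-ins:
ask_and_decide(
    listen=lambda: "I don't like it",
    speak=lambda text: print("robot:", text),
    switch_book=lambda: print("switching to another picture book"),
    continue_reading=lambda: print("continuing the current picture book"),
)
```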
Embodiment 4:
Fig. 5 shows a schematic flowchart of the human-computer interaction method for an intelligent robot provided by this embodiment.
As shown in Fig. 5, the human-computer interaction method for an intelligent robot provided by this embodiment obtains, in step S501, multi-modal interaction information about the current user. Specifically, in this embodiment, the multi-modal interaction information obtained in step S501 preferably includes picture-book image information.
The method then judges, in step S502, according to the multi-modal interaction information obtained in step S501, the stay duration of the current user on the current page. If the stay duration of the current user on the current page exceeds a second preset duration, it means that the current user is probably not interested in the current picture book. Therefore, the method determines, in step S503, that the preset active push condition is met, and thus generates corresponding picture-book recommendation information according to the multi-modal interaction information obtained in step S501 and outputs it.
It should be noted that the present invention does not limit the specific value of the second preset duration; in different embodiments of the present invention, the second preset duration may be set to different reasonable values according to actual needs.
Of course, in other embodiments of the present invention, according to actual needs, the method may also, after outputting the picture-book recommendation information, obtain feedback information from the current user on the picture-book recommendation information, and push a corresponding picture book or continue the current operation according to the feedback information. The principle and process by which the method obtains the feedback information and pushes a corresponding picture book or continues the current operation according to it are similar to the content described for steps S104 and S105 above, so they are not repeated here.
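The dwell-time check of steps S502 and S503 could be tracked as in the following sketch; the value of the second preset duration and the page-identifier interface are assumptions made only for illustration.

```python
import time

SECOND_PRESET_DURATION = 60.0   # seconds; an assumed value, configurable in practice

class DwellTimer:
    """Tracks how long the current user has stayed on the same page (S502)."""

    def __init__(self):
        self.current_page = None
        self.since = time.monotonic()

    def update(self, page_id):
        """Feed the identifier of the recognized page; returns True when the
        stay duration exceeds the second preset duration (S503)."""
        if page_id != self.current_page:          # the user turned to a new page
            self.current_page = page_id
            self.since = time.monotonic()
            return False
        return time.monotonic() - self.since > SECOND_PRESET_DURATION
```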
Embodiment 5:
Fig. 6 shows a schematic flowchart of the human-computer interaction method for an intelligent robot provided by this embodiment.
As shown in Fig. 6, the human-computer interaction method for an intelligent robot provided by this embodiment obtains, in step S601, multi-modal interaction information about the current user. In this embodiment, the multi-modal interaction information obtained in step S601 preferably includes image information and/or voice information about the current user.
The method preferably determines, in step S602, emotion information of the current user according to the multi-modal interaction information. In this embodiment, optionally, the method may determine the face position of the current user from the image information of the user by means of face recognition, further recognize the emotion characterized by the face, and thereby obtain the emotion information of the current user.
Of course, in other embodiments of the present invention, the method may also use other reasonable methods to determine the emotion information of the current user according to the actual situation, and the invention is not limited in this respect. For example, in one embodiment of the present invention, the method may also determine the emotion information of the current user by recognizing the voiceprint information of the user, or determine it by combining multiple approaches.
As shown in Fig. 6, after obtaining the emotion information of the current user, the method judges, in step S603, whether the emotion information obtained in step S602 belongs to a preset positive emotion or a preset negative emotion.
If the emotion information of the current user belongs to preset positive emotion information or preset negative emotion information, the method determines, in step S604, that the preset active push condition is met, and then generates picture-book recommendation information according to the emotion information of the user.
For example, if it is recognized that the user is particularly happy or particularly sad when reading a certain page, the method pushes corresponding picture-book recommendation information to the current user by way of active push.
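One way to picture the emotion-based trigger of steps S602 to S604 is sketched below; the emotion label sets and the classify_emotion stub are assumptions, since the patent only requires preset positive and negative emotions without fixing how they are recognized.

```python
PRESET_POSITIVE = {"happy", "excited"}       # assumed label sets standing in for the
PRESET_NEGATIVE = {"sad", "angry", "bored"}  # preset positive/negative emotions

def classify_emotion(face_image):
    """Hypothetical stand-in for the face-based emotion recognizer of S602
    (face detection followed by expression classification)."""
    return "sad"

def emotion_triggers_push(face_image):
    """S603/S604: the push condition is met when the recognized emotion falls
    into either preset set; the label is returned so the recommendation can
    be generated according to it."""
    emotion = classify_emotion(face_image)
    triggered = emotion in PRESET_POSITIVE or emotion in PRESET_NEGATIVE
    return triggered, emotion

triggered, emotion = emotion_triggers_push(face_image=None)
print(triggered, emotion)   # True "sad" -> recommend, e.g., a comforting picture book
```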
Embodiment 6:
Fig. 7 shows a schematic flowchart of the human-computer interaction method for an intelligent robot provided by this embodiment.
As shown in Fig. 7, the human-computer interaction method for an intelligent robot provided by this embodiment obtains, in step S701, multi-modal interaction information about the current user. The method then determines, in step S702, the interaction intention of the current user according to the multi-modal interaction information.
In this embodiment, the method preferably determines the interaction intention of the user by using a preset intention map. The interaction intention can be regarded as the plan by which, during human-computer interaction, the robot tries to understand, from its own perspective, the purpose the user expects to achieve under a certain theme or topic. Since the content involved in an interaction topic is relatively broad, the method needs the intention map to mine and determine the information the user needs to obtain from the robot (i.e. the information the robot needs to feed back to the user in the subsequent human-computer interaction process).
Specifically, in this embodiment, when determining the interaction intention of the user according to the interaction topic, the method may first determine the node corresponding to the interaction topic in the preset intention map, then determine, in the preset intention map, the nodes connected to that node, taking the node corresponding to the interaction topic as the start node and the connected nodes as terminal nodes, and thus determine the user interaction intention according to the terminal nodes.
Since there may be multiple terminal nodes, the nodes connected to the start node that the method determines may also be multiple. In view of this, in this embodiment, the method may first determine multiple candidate user intentions according to the multiple nodes connected to the start node, then rank these candidate user intentions by confidence, and determine the required user intention according to the ranking result.
Specifically, in this embodiment, the method ranks the candidate user intentions according to the weight of each connection in the preset intention map, and selects the candidate user intention with the largest weight as the final required user intention.
Of course, in other embodiments of the present invention, the method may also use other reasonable methods to determine the user intention according to the multi-modal input information, and the invention is not limited in this respect.
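The start-node/terminal-node lookup and weight-based ranking described above can be illustrated with the following sketch; the toy intention map, its topics, intentions and weights are all invented for illustration.

```python
# A toy intention map: edges from a topic node to candidate-intention nodes,
# each with a weight. Topics, intentions and weights are illustrative only.
INTENTION_MAP = {
    "picture_book": [
        ("listen_to_story", 0.8),
        ("look_at_pictures", 0.5),
        ("ask_about_character", 0.3),
    ],
    "animals": [
        ("learn_animal_sounds", 0.7),
        ("listen_to_story", 0.4),
    ],
}

def determine_intention(topic):
    """S702 as described in this embodiment: take the node for the interaction
    topic as the start node, collect the connected terminal nodes as candidate
    user intentions, rank them by edge weight and return the top one."""
    candidates = INTENTION_MAP.get(topic, [])
    if not candidates:
        return None
    ranked = sorted(candidates, key=lambda edge: edge[1], reverse=True)
    return ranked[0][0]          # the candidate intention with the largest weight

print(determine_intention("picture_book"))   # -> "listen_to_story"
```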
Then, the method judges, in step S703, whether the interaction intention meets the preset active push condition. If it is met, the method preferably generates, in step S704, corresponding picture-book recommendation information according to the interaction intention.
For example, when the user inputs voice interaction information such as "Can you read book XX?" to the intelligent robot, the method determines the interaction intention of the current user by performing interaction intention recognition on the voice interaction information. Based on the interaction intention, the method then queries a picture-book knowledge graph to determine whether the relevant content of book XX can be pushed. If the relevant content of book XX can be pushed, the method generates and outputs corresponding prompt information so as to prompt the current user to fetch the book, and then narrates the content of the book to the user, thereby realizing the push of the picture book.
It should be noted that, in other embodiments of the present invention, a new human-computer interaction method may also be obtained by combining one or more of the above embodiments, so as to realize the active push of picture-book stories.
Since the human-computer interaction method for an intelligent robot provided by this embodiment is implemented in a computer system, the computer system may, for example, be provided in the control core processor of the robot. For example, the method described herein may be implemented as software with control logic and executed by the CPU in the robot operating system. The functions described herein may be implemented as a set of program instructions stored in a non-transitory tangible computer-readable medium.
When implemented in this manner, the computer program comprises a set of instructions which, when run by a computer, cause the computer to execute a method implementing the above functions. The programmable logic may be temporarily or permanently installed in a non-transitory tangible computer-readable medium, such as a read-only memory chip, a computer memory, a disk or another storage medium. Besides being realized in software, the logic described herein may be embodied using discrete components, an integrated circuit, programmable logic used in combination with a programmable logic device (such as a field programmable gate array (FPGA) or a microprocessor), or any other device comprising any combination thereof. All such embodiments are intended to fall within the scope of the present invention.
As can be seen from the foregoing description, the human-computer interaction method for an intelligent robot provided by the present invention can actively judge whether a picture-book story needs to be pushed to the user, thereby avoiding the poor user experience caused by existing devices that merely read aloud the recognized picture-book images. The method determines the actual state of the user according to the obtained multi-modal interaction information about the user, and then adjusts the picture-book reading state of the intelligent robot according to that actual state (for example, page-turning behavior, emotional state and interaction intention), so that the intelligent robot is more lively in its expression and better understands the actual needs of the user, which in turn improves the user experience and user stickiness of the intelligent robot.
Meanwhile the present invention also provides a kind of human-computer interaction devices for intelligent robot.Wherein, Fig. 8 shows this The structural schematic diagram of the human-computer interaction device in embodiment.
As shown in figure 8, human-computer interaction device provided by the present embodiment preferably includes: interactive information acquisition module 801, It draws this recommendation information generation module 802 and draws this pushing module 803.Wherein, interactive information obtains module 801 and closes for obtaining In the multi-modal interactive information of active user,
It draws this recommendation information generation module 802 to connect with interactive information acquisition module 801, can be obtained according to interactive information Modulus block 801 transmits the multi-modal interactive information come to determine whether meeting default active push condition.Wherein, if met Default active push condition, corresponding draw originally can then be generated according to multi-modal interactive information by drawing this recommendation information generation module 802 Recommendation information simultaneously exports.
In the present embodiment, this pushing module 803 is drawn as apolegamy building, is connected with this recommendation information generation module 802 is drawn It connects.Active user can be obtained for the feedback information for drawing this recommendation information by drawing this pushing module 803, and according to the feedback Information push accordingly draws this or continues current operation.
It should be pointed out that the present embodiment in, interactive information obtain module 801, draw this recommendation information generation module 802 with And it draws this pushing module 803 and realizes its respectively disclosed in the principle of function and process and above-mentioned steps S101 to step S105 Content is similar, therefore no longer obtains module 801 to interactive information herein, draws this recommendation information generation module 802 and draw this push The particular content of module 803 is repeated.
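As a rough structural sketch only, the cooperation of modules 801, 802 and 803 might be modeled as below; the class names, data fields and the example recommendation are assumptions, not the disclosed module implementation.

```python
class InteractionInfoModule:
    """Counterpart of module 801: obtains multi-modal interaction information."""
    def acquire(self):
        return {"book_finished": True, "new_page_seen": False}

class RecommendationModule:
    """Counterpart of module 802: checks the push condition and builds the recommendation."""
    def process(self, info):
        if info["book_finished"] and not info["new_page_seen"]:
            return "The Wilful Little Lion"
        return None

class PushModule:
    """Counterpart of the optional module 803: reacts to the user's feedback."""
    def handle(self, recommendation, feedback):
        if recommendation and feedback == "yes":
            return "pushing picture book: " + recommendation
        return "continuing current operation"

# Wiring the three modules together as in Fig. 8:
info = InteractionInfoModule().acquire()
recommendation = RecommendationModule().process(info)
print(PushModule().handle(recommendation, feedback="yes"))
```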
The present invention also provides a children's smart device, which comprises a processor and a storage device, wherein the storage device stores a program, and the processor is able to execute the program in the storage device so as to implement any of the methods described above.
The present invention also provides a program product which stores program code that, when executed by an operating system, realizes the human-computer interaction method for an intelligent robot described above. In addition, the present invention also provides a human-computer interaction system for an intelligent robot, the system being equipped with an operating system capable of loading and executing the above program product.
Specifically, as shown in Fig. 9, in this embodiment the human-computer interaction system for an intelligent robot comprises a children's device 901 and a cloud server 902. The children's device 901 and the cloud server 902 can cooperatively execute the program code realizing the aforementioned human-computer interaction method for an intelligent robot, and thereby push corresponding picture-book stories to the user.
Specifically, in this embodiment, the children's device 901 is used to obtain multi-modal interaction information about the current user and transmit the multi-modal interaction information to the cloud server 902. For example, the children's device 901 can determine the reading state of the current user 903 by obtaining images related to the picture book 904; meanwhile, the children's device 901 can also obtain the corresponding picture-book content information by obtaining images related to the picture book 904, and generate and output corresponding picture-book voice information according to that content information, thereby realizing the picture-book reading function.
The cloud server 902 can judge, according to the multi-modal interaction information transmitted by the children's device 901, whether the preset active push condition is met. If the preset active push condition is met, the cloud server 902 generates corresponding picture-book recommendation information according to the multi-modal interaction information and transmits the picture-book recommendation information to the children's device 901. After receiving the picture-book recommendation information, the children's device 901 can output it to the current user 903.
In this embodiment, the human-computer interaction system uses the powerful data processing capability of the cloud server 902 to quickly determine whether the preset active push condition is met and to generate the corresponding picture-book recommendation information. This also lowers the requirement on the data processing capability of the children's device 901, which not only improves interaction efficiency but also effectively reduces the size and cost of the children's device 901.
It should be noted that, in different embodiments of the present invention, part of the data processing functions of the human-computer interaction system may optionally be transferred to the children's device 901, and the invention is not limited in this respect.
In different embodiments of the present invention, the children's device 901 may be a smart device comprising input/output modules that support perception, control and the like, such as a tablet computer, a robot, a mobile phone, a story machine or a picture-book reading robot; it can tell stories to children, answer children's questions in real time and has rich expressive power.
It should be understood that the disclosed embodiments of the present invention are not limited to the specific structures or processing steps disclosed herein, but extend to equivalents of these features as understood by those of ordinary skill in the relevant art. It should also be understood that the terms used herein are only for describing specific embodiments and are not intended to be limiting.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Therefore, the phrases "one embodiment" or "an embodiment" appearing in various places throughout the specification do not necessarily all refer to the same embodiment.
Although the above examples are used to illustrate the principle of the present invention in one or more applications, it is obvious to those skilled in the art that various modifications in form, usage details and implementation can be made without creative labor and without departing from the principles and ideas of the present invention. Therefore, the present invention is defined by the appended claims.

Claims (12)

1. A human-computer interaction method for an intelligent robot, characterized in that, in a picture-book reading mode, the method comprises:
Step 1: obtaining multi-modal interaction information about a current user;
Step 2: judging, according to the multi-modal interaction information, whether a preset active push condition is met, and if it is met, executing Step 3;
Step 3: generating corresponding picture-book recommendation information according to the multi-modal interaction information and outputting it.
2. The human-computer interaction method according to claim 1, characterized in that, in Step 2, if no new picture-book information input by the current user can be obtained within a first preset duration after a picture book has been read to the end, it is determined that the preset active push condition is met.
3. The human-computer interaction method according to claim 1 or 2, characterized in that, in Step 2, if recognition of the current page fails repeatedly during picture-book reading, it is determined that the preset active push condition is met.
4. The human-computer interaction method according to claim 1, characterized in that, if recognition of the current page fails repeatedly during picture-book reading, a first query message is generated in Step 3, feedback information from the current user in response to the first query message is obtained, and, according to the feedback information, reading of the current picture book is continued or the picture book is switched.
5. The method according to any one of claims 1 to 4, characterized in that, in Step 2, if it is detected that the duration for which the current user stays on the current page exceeds a second preset duration, it is determined that the preset active push condition is met.
6. The method according to any one of claims 1 to 5, characterized in that, in Step 2, emotion information of the current user is determined according to the multi-modal interaction information, wherein, if the emotion information of the current user belongs to a preset positive emotion or a preset negative emotion, it is determined that the preset active push condition is met, so that in Step 3 the picture-book recommendation information is generated according to the emotion information of the user.
7. The method according to any one of claims 1 to 6, characterized in that, in Step 2, an interaction intention of the current user is determined according to the multi-modal interaction information, and whether the preset active push condition is met is judged according to the interaction intention; if it is met, the corresponding picture-book recommendation information is generated in Step 3 according to the interaction intention.
8. The method according to any one of claims 1 to 7, characterized in that the method further comprises:
Step 4: obtaining feedback information from the current user on the picture-book recommendation information, and, according to the feedback information, pushing a corresponding picture book or continuing the current operation.
9. A program product storing program code executable to perform the method steps according to any one of claims 1 to 8.
10. A human-computer interaction system for an intelligent robot, characterized in that the system is equipped with an operating system capable of loading and executing the program product according to claim 9.
11. A human-computer interaction device for an intelligent robot, characterized in that the device comprises:
an interaction information acquisition module, configured to obtain multi-modal interaction information about a current user; and
a picture-book recommendation information generation module, configured to judge, according to the multi-modal interaction information, whether a preset active push condition is met, and, if it is met, to generate corresponding picture-book recommendation information according to the multi-modal interaction information and output it.
12. A children's smart device, characterized in that the device comprises a processor and a storage device, wherein the storage device stores a program, and the processor is configured to execute the program in the storage device to implement the method according to any one of claims 1 to 8.
CN201811632185.2A 2018-12-29 2018-12-29 Intelligent robot-oriented man-machine interaction method and device Active CN109857929B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811632185.2A CN109857929B (en) 2018-12-29 2018-12-29 Intelligent robot-oriented man-machine interaction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811632185.2A CN109857929B (en) 2018-12-29 2018-12-29 Intelligent robot-oriented man-machine interaction method and device

Publications (2)

Publication Number Publication Date
CN109857929A true CN109857929A (en) 2019-06-07
CN109857929B CN109857929B (en) 2021-06-15

Family

ID=66893117

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811632185.2A Active CN109857929B (en) 2018-12-29 2018-12-29 Intelligent robot-oriented man-machine interaction method and device

Country Status (1)

Country Link
CN (1) CN109857929B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110929143A (en) * 2019-10-12 2020-03-27 安徽奇智科技有限公司 Method and system for identifying picture book and electronic equipment
CN110941774A (en) * 2019-12-05 2020-03-31 深圳前海达闼云端智能科技有限公司 Service recommendation method
CN111028290A (en) * 2019-11-26 2020-04-17 北京光年无限科技有限公司 Graph processing method and device for picture book reading robot
CN111723653A (en) * 2020-05-12 2020-09-29 北京光年无限科技有限公司 Drawing book reading method and device based on artificial intelligence

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060257830A1 (en) * 2005-05-13 2006-11-16 Chyi-Yeu Lin Spelling robot
CN102830902A (en) * 2012-06-29 2012-12-19 宇龙计算机通信科技(深圳)有限公司 Method and system for automatically scrolling page
CN105511608A (en) * 2015-11-30 2016-04-20 北京光年无限科技有限公司 Intelligent robot based interaction method and device, and intelligent robot
CN105894873A (en) * 2016-06-01 2016-08-24 北京光年无限科技有限公司 Child teaching method and device orienting to intelligent robot
CN106598241A (en) * 2016-12-06 2017-04-26 北京光年无限科技有限公司 Interactive data processing method and device for intelligent robot
CN107506377A (en) * 2017-07-20 2017-12-22 南开大学 Interactive picture-book generation system based on a recommendation system
CN107783650A (en) * 2017-09-18 2018-03-09 北京光年无限科技有限公司 A kind of man-machine interaction method and device based on virtual robot

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110929143A (en) * 2019-10-12 2020-03-27 安徽奇智科技有限公司 Method and system for identifying picture book and electronic equipment
CN111028290A (en) * 2019-11-26 2020-04-17 北京光年无限科技有限公司 Graph processing method and device for picture book reading robot
CN111028290B (en) * 2019-11-26 2024-03-08 北京光年无限科技有限公司 Graphic processing method and device for drawing book reading robot
CN110941774A (en) * 2019-12-05 2020-03-31 深圳前海达闼云端智能科技有限公司 Service recommendation method
CN111723653A (en) * 2020-05-12 2020-09-29 北京光年无限科技有限公司 Drawing book reading method and device based on artificial intelligence
CN111723653B (en) * 2020-05-12 2023-09-26 北京光年无限科技有限公司 Method and device for reading drawing book based on artificial intelligence

Also Published As

Publication number Publication date
CN109857929B (en) 2021-06-15

Similar Documents

Publication Publication Date Title
CN109857929A (en) 2019-06-07 Human-computer interaction method and device for an intelligent robot
CN108000526B (en) Dialogue interaction method and system for intelligent robot
CN107894833B (en) Multi-modal interaction processing method and system based on virtual human
US20190187782A1 (en) Method of implementing virtual reality system, and virtual reality device
CN107728780B (en) Human-computer interaction method and device based on virtual robot
CN108108340B (en) Dialogue interaction method and system for intelligent robot
CN107278302B (en) Robot interaction method and interaction robot
CN105740948B (en) A kind of exchange method and device towards intelligent robot
KR102558437B1 (en) Method For Processing of Question and answer and electronic device supporting the same
CN107340865A (en) Multi-modal virtual robot exchange method and system
CN108664472B (en) Natural language processing method, device and equipment
CN109176535B (en) Interaction method and system based on intelligent robot
CN107632706B (en) Application data processing method and system of multi-modal virtual human
CN107704169B (en) Virtual human state management method and system
CN105843118B (en) A kind of robot interactive method and robot system
CN105446491B (en) A kind of exchange method and device based on intelligent robot
CN108460324A (en) A method of child's mood for identification
CN109858391A (en) It is a kind of for drawing the man-machine interaction method and device of robot
CN111125657B (en) Control method and device for student to use electronic equipment and electronic equipment
CN109543578A (en) Smart machine control method, device and storage medium
CN107103906A (en) It is a kind of to wake up method, smart machine and medium that smart machine carries out speech recognition
CN106126636B (en) A kind of man-machine interaction method and device towards intelligent robot
CN112739507B (en) Interactive communication realization method, device and storage medium
CN113703585A (en) Interaction method, interaction device, electronic equipment and storage medium
CN108388399B (en) Virtual idol state management method and system

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant