CN109857929B - Intelligent robot-oriented man-machine interaction method and device - Google Patents


Info

Publication number
CN109857929B
CN109857929B CN201811632185.2A
Authority
CN
China
Prior art keywords
information
user
interaction
preset
intention
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811632185.2A
Other languages
Chinese (zh)
Other versions
CN109857929A (en)
Inventor
贾志强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Guangnian Wuxian Technology Co Ltd
Original Assignee
Beijing Guangnian Wuxian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Guangnian Wuxian Technology Co Ltd filed Critical Beijing Guangnian Wuxian Technology Co Ltd
Priority to CN201811632185.2A priority Critical patent/CN109857929B/en
Publication of CN109857929A publication Critical patent/CN109857929A/en
Application granted granted Critical
Publication of CN109857929B publication Critical patent/CN109857929B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

A man-machine interaction method for an intelligent robot comprises the following steps in a picture book reading mode: step one, obtaining multi-modal interaction information about a current user; step two, judging whether a preset active push condition is met according to the multi-modal interaction information, and if so, executing step three; and step three, generating and outputting corresponding picture book recommendation information according to the multi-modal interaction information. The method can actively judge whether a picture book story needs to be pushed to the user, so the poor user experience caused by existing robots that can only passively read whichever picture book they recognize is avoided. The intelligent robot behaves more vividly, understands the actual needs of the user, and thereby improves the user experience and user stickiness of the intelligent robot.

Description

Intelligent robot-oriented man-machine interaction method and device
Technical Field
The invention relates to the technical field of robots, in particular to a man-machine interaction method and device for an intelligent robot.
Background
With the continuous development of science and technology, and the introduction of information technology, computer technology and artificial intelligence technology, robotics research has gradually moved beyond the industrial field into medical care, health care, family, entertainment, the service industry and other fields. People's requirements for robots have likewise risen from simple, repetitive mechanical actions to intelligent robots capable of anthropomorphic question answering, autonomy and interaction with other robots, and human-computer interaction has become an important factor in the development of intelligent robots. Therefore, improving the interaction capability of intelligent robots and enhancing their human-likeness and intelligence are important problems to be solved urgently at present.
Disclosure of Invention
In order to solve the above problem, the present invention provides a human-computer interaction method for an intelligent robot, wherein in a picture book reading mode, the method comprises:
step one, obtaining multi-modal interaction information about a current user;
step two, judging whether a preset active push condition is met according to the multi-modal interaction information, and if so, executing step three;
and step three, generating and outputting corresponding picture book recommendation information according to the multi-modal interaction information.
According to an embodiment of the present invention, in the second step, if new picture book information input by the current user cannot be obtained within a first preset time period after the reading of a picture book is completed, it is determined that the preset active push condition is satisfied.
According to an embodiment of the present invention, in the second step, if current page recognition fails multiple times during picture book reading, it is determined that the preset active push condition is satisfied.
According to an embodiment of the present invention, if current page recognition fails multiple times during picture book reading, in the third step, first query information is generated, feedback information of the current user for the first query information is obtained, and reading of the current picture book is continued or the picture book is switched according to the feedback information.
According to an embodiment of the present invention, in the second step, if it is detected that the time the current user stays on the current page exceeds a second preset time period, it is determined that the preset active push condition is satisfied.
According to an embodiment of the present invention, in the second step, emotion information of the current user is determined according to the multi-modal interaction information, wherein if the emotion information of the current user belongs to a preset positive emotion or a preset negative emotion, it is determined that the preset active push condition is satisfied, so that in the third step the picture book recommendation information is generated according to the emotion information of the user.
According to an embodiment of the present invention, in the second step, the interaction intention of the current user is determined according to the multi-modal interaction information, whether the preset active push condition is met is judged according to the interaction intention, and if so, corresponding picture book recommendation information is generated according to the interaction intention in the third step.
According to an embodiment of the invention, the method further comprises:
and step four, obtaining feedback information of the current user for the picture book recommendation information, and pushing a corresponding picture book or continuing the current operation according to the feedback information.
The invention also provides a readable storage medium having stored thereon program code executable to perform the method steps of any of the above.
The invention also provides a man-machine interaction system facing the intelligent robot, which is characterized in that the system is provided with an operating system, and the operating system can load and execute the program codes on the readable storage medium.
The invention also provides a human-computer interaction device for the intelligent robot, which comprises:
the interaction information acquisition module is used for obtaining multi-modal interaction information about the current user;
and the picture book recommendation information generation module is used for judging whether a preset active push condition is met according to the multi-modal interaction information, and if so, generating and outputting corresponding picture book recommendation information according to the multi-modal interaction information.
According to an embodiment of the present invention, if new picture book information input by the current user cannot be acquired within a first preset time after a picture book is read, the picture book recommendation information generation module is configured to determine that a preset active push condition is satisfied.
According to an embodiment of the present invention, if multiple failures of current page recognition are detected in a picture book reading process, the picture book recommendation information generation module is configured to determine that the preset active push condition is satisfied.
According to an embodiment of the present invention, if multiple failures of current page recognition are detected during picture book reading, the picture book recommendation information generation module is configured to generate first query information, obtain feedback information of the current user for the first query information, and continue reading of the current picture book or switch the picture book according to the feedback information.
According to an embodiment of the present invention, if it is detected that the time the current user stays on the current page exceeds a second preset time period, the picture book recommendation information generation module is configured to determine that the preset active push condition is satisfied.
According to an embodiment of the invention, the picture book recommendation information generation module is configured to determine emotion information of the current user according to the multi-modal interaction information, wherein if the emotion information of the current user belongs to a preset positive emotion or a preset negative emotion, it is determined that the preset active push condition is satisfied, so as to generate the picture book recommendation information according to the emotion information of the user.
According to an embodiment of the invention, the picture book recommendation information generation module is configured to determine the interaction intention of the current user according to the multi-modal interaction information, judge whether the preset active push condition is met according to the interaction intention, and generate corresponding picture book recommendation information according to the interaction intention if it is met.
According to an embodiment of the invention, the device further comprises:
and the picture book pushing module is used for obtaining feedback information of the current user for the picture book recommendation information, and pushing a corresponding picture book or continuing the current operation according to the feedback information.
The invention also provides a children-specific intelligent device, which comprises a processor and a storage device, wherein the storage device stores programs, and the processor is used for executing the programs in the storage device to realize the method.
The man-machine interaction method for the intelligent robot provided by the invention can actively judge whether a picture book needs to be pushed to the user, so the poor user experience caused by robots that can only read whichever picture book they recognize is avoided. The method determines the actual state of the user (such as the page-turning situation, emotional state and interaction intention) according to the obtained multi-modal interaction information about the user, and adjusts the picture book reading state of the intelligent robot accordingly, so that the intelligent robot behaves more vividly and understands the actual needs of the user, which improves the user experience and user stickiness of the intelligent robot.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following briefly introduces the drawings required in the description of the embodiments or the prior art:
FIG. 1 is a flow chart illustrating an implementation of a human-machine interaction method for an intelligent robot according to an embodiment of the present invention;
fig. 2 to 7 are schematic implementation flow diagrams of a human-machine interaction method for an intelligent robot according to different embodiments of the present invention;
FIG. 8 is a schematic structural diagram of a human-computer interaction device for an intelligent robot according to one embodiment of the invention;
FIG. 9 is a flowchart illustrating an implementation of a human-computer interaction method for an intelligent robot according to an embodiment of the invention.
Detailed Description
The following detailed description of the embodiments of the present invention will be provided with reference to the drawings and examples, so that how to apply the technical means to solve the technical problems and achieve the technical effects can be fully understood and implemented. It should be noted that, as long as there is no conflict, the embodiments and the features of the embodiments of the present invention may be combined with each other, and the technical solutions formed are within the scope of the present invention.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without some of these specific details or with other methods described herein.
Additionally, the steps illustrated in the flow charts of the figures may be performed in a computer system such as a set of computer-executable instructions and, although a logical order is illustrated in the flow charts, in some cases, the steps illustrated or described may be performed in an order different than here.
With the development of artificial intelligence, more and more picture book reading robots are entering the children's early-education market. These robots recognize the objects on a card and respond with voice broadcast or screen display (for example, vehicles of various shapes, musical instruments, and animals and plants), so as to promote children's cognitive ability.
However, an existing picture book reading robot can only read aloud the picture book image it recognizes and cannot adjust its reading state according to the actual state of the user. The resulting interaction is too stiff, which is not conducive to popularization and use.
Aiming at the above problems in the prior art, the invention provides a novel human-computer interaction method for an intelligent robot, which realizes active pushing of picture books during human-computer interaction, thereby improving the user experience of the device.
Fig. 1 shows a schematic implementation flow diagram of the human-computer interaction method for the intelligent robot.
As shown in fig. 1, in the implementation process, first, multi-modal interaction information about a current user is obtained in step S101. It should be noted that, in different embodiments of the present invention, the multi-modal interaction information acquired in step S101 in the picture book reading mode may contain interaction information in different reasonable forms according to the actual situation. For example, in an embodiment of the present invention, the multi-modal interaction information acquired in step S101 may be image information or voice information containing the user, image information containing the state of the picture book, or information transmitted from a mobile client through the Internet of Things.
After obtaining the multi-modal interaction information about the current user, the method determines in step S102 whether a preset active push condition is satisfied according to the multi-modal interaction information obtained in step S101. If the multi-modal interaction information indicates that the current interaction scene meets the preset active push condition, then in step S103 the method generates corresponding picture book recommendation information according to the multi-modal interaction information about the user acquired in step S101, and outputs the picture book recommendation information to the current user.
As shown in fig. 1, optionally, after the generated picture book recommendation information is output to the current user, in step S104 the method continues to obtain feedback information input by the current user for the picture book recommendation information. After obtaining the feedback information, the method pushes the corresponding picture book or continues the current operation according to the feedback information in step S105.
For example, if the user is not interested in the picture book recommendation information output in step S103, the method may obtain no valid feedback information, or may obtain information indicating that the user wishes to continue reading the current picture book, in step S104; the method then continues the current operation in step S105.
If the user is interested in the actively pushed picture book recommendation information (e.g., a recommendation of the story "any lions") output in step S103, the method obtains positive feedback information in step S104 (e.g., voice input such as "OK, I want to listen to this story" or "tell it quickly"), and then pushes the picture book corresponding to the recommendation information to the current user in step S105.
It should be noted that, in some embodiments of the present invention, the feedback information of the current user obtained in step S104 may also be, for example, "I want to listen to 'any lions'"; the method then re-determines the picture book that needs to be pushed (i.e., the story "any lions") according to this feedback information in step S105, and outputs the content of that picture book to the current user (e.g., playing the story audio of the "any lions" picture book to the current user).
It should be noted that, in this embodiment, after the intelligent robot is powered on, the method may also directly determine that the preset active push condition is satisfied, so as to actively generate and output corresponding picture book recommendation information according to the obtained multi-modal interaction information.
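The overall flow of steps S101 to S105 can be sketched as follows. This is an illustrative Python sketch, not taken from the patent text; every callable name is a hypothetical placeholder for a capability of the robot, and the concrete push condition of S102 is supplied per embodiment (timeout, recognition failures, dwell time, emotion, or intention).

```python
def interaction_round(get_interaction_info, push_condition_met,
                      make_recommendation, get_feedback,
                      push_book, continue_current):
    """One round of the S101-S105 flow; all callables are
    hypothetical placeholders for the robot's actual capabilities."""
    info = get_interaction_info()                    # S101: multi-modal info
    if push_condition_met(info):                     # S102: active push check
        recommendation = make_recommendation(info)   # S103: generate & output
        feedback = get_feedback(recommendation)      # S104: user feedback
        if feedback == "accept":                     # S105: push or continue
            return push_book(recommendation)
    return continue_current()
```

If the push condition is not met, or the user declines the recommendation, the round falls through to continuing the current operation, matching the behavior described for step S105.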
In order to more clearly illustrate the implementation principle, implementation process and advantages of the human-computer interaction method for the intelligent robot provided by the invention, the method is further described below in combination with different embodiments.
Example one:
fig. 2 shows a flow chart of an implementation of the human-computer interaction method for the intelligent robot provided by the embodiment.
As shown in fig. 2, in this embodiment, after reading of the current picture book is completed, the method obtains the multi-modal interaction information about the current user in step S201. The principle and process are similar to those described for step S101, so the relevant content of step S201 is not repeated here.
After obtaining the multi-modal interaction information about the current user, in this embodiment the method determines in step S202 whether new picture book information input by the current user (such as a picture book cover or a new picture book inner page) is obtained within a first preset time period after the current picture book has been completely read.
Specifically, in this embodiment, after the current picture book is read, the method preferably starts timing and continuously obtains picture book cover or picture book inner page identification images.
It should be noted that the specific value of the first preset time period may be configured to be different reasonable values according to actual needs, and the specific value of the first preset time period is not limited in the present invention.
In this embodiment, if the timed duration reaches the first preset duration and no new picture book information has been obtained, the method determines that the preset active push condition is satisfied, and then executes steps S203 to S205. The implementation principle and process of steps S203 to S205 are similar to those described for steps S103 to S105, and are therefore not repeated here.
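The push condition of this embodiment can be sketched as a simple idle-timeout check. This is an illustrative sketch with hypothetical parameter names; the patent does not fix a concrete value for the first preset duration.

```python
def idle_push_condition(finish_time, saw_new_book, first_preset_s, now):
    """Example one sketch: the push condition holds when no new picture
    book information arrives within the first preset duration after
    reading finishes. All names are illustrative."""
    return (not saw_new_book) and (now - finish_time >= first_preset_s)
```

A caller would evaluate this condition periodically after reading completes, stopping as soon as a new cover or inner page is recognized.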
Example two:
fig. 3 shows a flow chart of an implementation of the human-computer interaction method for the intelligent robot provided by the embodiment.
As shown in fig. 3, in the present embodiment, the method obtains multi-modal interaction information about the current user in step S301. Specifically, in this embodiment, the multi-modal interaction information acquired in step S301 preferably includes picture book image information.
After obtaining the multi-modal interaction information, the method detects in step S302 whether a valid picture book inner page can be detected according to the multi-modal interaction information. If current page recognition fails multiple times during picture book reading, the method judges that the preset active push condition is met.
For example, if the current user keeps turning pages continuously, the book does not stay open at any one page for a sufficient time, so the method cannot recognize the current page from the obtained picture book image, causing current page recognition to fail. When the number of recognition failures reaches a designated number of times (which may be configured to different reasonable values according to actual needs, and is not limited here), the method determines in step S302 that the preset active push condition is satisfied.
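The failure-counting logic just described can be sketched as a small monitor class. This is an illustrative sketch; the threshold default of three is an assumption, since the patent leaves the designated number of times configurable.

```python
class PageRecognitionMonitor:
    """Example two sketch: count consecutive current-page recognition
    failures; the threshold stands in for the patent's configurable
    'designated number of times'."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def report(self, page_recognized):
        """Record one recognition attempt; return True when the
        preset active push condition is met."""
        if page_recognized:
            # A successful recognition resets the failure counter.
            self.failures = 0
            return False
        self.failures += 1
        return self.failures >= self.max_failures
```

Resetting on success models the idea that only sustained failure (e.g., continuous page flipping) should trigger an active push.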
As shown in fig. 3, in this embodiment, if the preset active push condition is satisfied, in step S303 the method generates and outputs corresponding picture book recommendation information according to the obtained multi-modal interaction information.
Preferably, after outputting the picture book recommendation information to the current user, the method continues to obtain feedback information of the current user for the recommendation information in step S304, and in step S305 pushes a corresponding picture book to the current user or continues the current operation according to the feedback information.
In this embodiment, the implementation principle and implementation process of the steps S303 to S305 are similar to those of the steps S103 to S105, and therefore detailed descriptions of the steps S303 to S305 are not repeated herein.
Example three:
fig. 4 shows a flow chart of implementation of the human-computer interaction method for the intelligent robot provided by the embodiment.
As shown in fig. 4, the method provided by the present embodiment obtains multi-modal interaction information about the current user in step S401. Specifically, in this embodiment, the multi-modal interaction information acquired in step S401 preferably includes picture book image information.
After obtaining the multi-modal interaction information, the method detects in step S402 whether a valid picture book page can be detected according to the multi-modal interaction information, or whether the last page of the picture book has been reached during reading (i.e., the current page is the last page of the picture book). If current page recognition fails multiple times during picture book reading, the method likewise judges that the preset active push condition is met.
The specific implementation principle and implementation process of the steps S401 and S402 are similar to those disclosed in the steps S301 and S302, and therefore the steps S401 and S402 are not described herein again.
As shown in fig. 4, in the present embodiment, if the preset active push condition is satisfied, it indicates that the current user may dislike the current picture book, so the method generates and outputs first query information in step S403. Subsequently, the method obtains the feedback information of the current user for the first query information in step S404, and continues reading the current picture book or switches to another picture book according to the feedback information in step S405.
For example, if current page recognition fails several times during picture book reading, the method may carry out voice interaction with the current user, outputting voice information such as "do you dislike this book?" to the user. The current user then inputs corresponding feedback information for this voice information. If the feedback input by the current user is voice information such as "I dislike it", the method switches the picture book and pushes other picture books to the current user; if the feedback input is voice information such as "I like it", the method continues reading the current picture book.
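The query-and-branch behavior of steps S403 to S405 can be sketched as follows. This is an illustrative sketch; the callables and the exact reply strings are hypothetical stand-ins for the robot's voice interface.

```python
def resolve_reading_after_failures(ask_user, switch_book, continue_reading):
    """Example three sketch: after repeated recognition failures, ask
    whether the user dislikes the current book, then act on the reply.
    All callables and reply strings are illustrative placeholders."""
    reply = ask_user("Do you dislike this book?")  # first query information
    if reply == "I dislike it":
        return switch_book()                       # push other picture books
    return continue_reading()                      # keep reading current book
```

A real implementation would classify the user's free-form voice reply rather than match an exact string.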
Example four:
fig. 5 shows a flow chart of an implementation of the human-computer interaction method for the intelligent robot provided by the embodiment.
As shown in fig. 5, the man-machine interaction method for the intelligent robot provided by this embodiment obtains multi-modal interaction information about the current user in step S501. Specifically, in this embodiment, the multi-modal interaction information acquired in step S501 preferably includes picture book image information.
Subsequently, in step S502 the method determines the time the current user stays on the current page according to the multi-modal interaction information obtained in step S501. If the stay time of the current user on the current page exceeds a second preset time period, it indicates that the current user is probably not interested in the current picture book. Therefore, the method determines in step S503 that the preset active push condition is satisfied, so as to generate and output corresponding picture book recommendation information according to the multi-modal interaction information acquired in step S501.
It should be noted that, the present invention does not limit the specific value of the second preset time period, and in different embodiments of the present invention, the second preset time period may be configured to be different reasonable values according to actual needs.
Of course, in other embodiments of the present invention, according to actual needs, the method may further obtain feedback information of the current user for the picture book recommendation information after it is output, and push the corresponding picture book or continue the current operation according to the feedback information. The principle and process of obtaining the feedback information and of pushing the corresponding picture book or continuing the current operation are similar to those described in steps S104 and S105, and are therefore not repeated here.
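The dwell-time condition of this embodiment reduces to a single comparison. This is an illustrative sketch with hypothetical parameter names; the second preset duration is configurable and not fixed by the patent.

```python
def dwell_push_condition(page_shown_at, second_preset_s, now):
    """Example four sketch: the push condition holds when the user
    stays on the current page longer than the second preset duration.
    All parameter names are illustrative."""
    return now - page_shown_at > second_preset_s
```

The timestamp `page_shown_at` would be refreshed each time a new current page is recognized, so the dwell clock restarts on every page turn.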
Example five:
fig. 6 shows a flow chart of an implementation of the human-computer interaction method for the intelligent robot provided by the embodiment.
As shown in fig. 6, the man-machine interaction method for the intelligent robot provided by this embodiment obtains multi-modal interaction information about the current user in step S601. In this embodiment, the multi-modal interaction information acquired in step S601 preferably includes image information and/or voice information about the current user.
In step S602 the method preferably determines the emotion information of the current user based on the multi-modal interaction information. In this embodiment, optionally, the method may locate the face of the current user in the image information through face recognition, and then recognize the emotion represented by the face, thereby obtaining the emotion information of the current user.
Of course, in other embodiments of the present invention, the method may also determine the emotion information of the current user in other reasonable manners according to the actual situation, and the invention is not limited thereto. For example, in one embodiment of the present invention, the method may determine the emotion information of the current user by recognizing the user's voiceprint information, or by combining multiple approaches.
As shown in fig. 6, after obtaining the emotion information of the current user, the method determines in step S603 whether the emotion information of the current user obtained in step S602 belongs to a preset positive emotion or a preset negative emotion.
If the emotion information of the current user belongs to the preset positive emotion or the preset negative emotion, the method determines in step S604 that the preset active push condition is met, and then generates the picture book recommendation information according to the emotion information of the user.
For example, if it is recognized that the user is happy or sad while reading a certain page, the method pushes corresponding picture book recommendation information to the current user in an active pushing manner.
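The emotion-triggered condition can be sketched as membership in two preset sets. This is an illustrative sketch: the specific emotion labels and the recommendation wording are assumptions, since the patent leaves the preset positive and negative emotions as configuration choices.

```python
# Illustrative preset emotion sets; the actual presets are
# configuration choices not fixed by the patent text.
PRESET_POSITIVE = {"happy", "excited"}
PRESET_NEGATIVE = {"sad", "afraid"}

def emotion_push_condition(emotion):
    """Example five sketch: push when the recognized emotion falls
    into either preset set."""
    return emotion in PRESET_POSITIVE or emotion in PRESET_NEGATIVE

def recommend_for_emotion(emotion):
    """Hypothetical mapping from recognized emotion to a
    recommendation, generating different suggestions per emotion."""
    if emotion in PRESET_NEGATIVE:
        return "a comforting picture book"
    return "another picture book like this one"
```

A neutral emotion falls into neither set, so no active push is triggered.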
Example six:
fig. 7 shows a flow chart of an implementation of the human-computer interaction method for the intelligent robot provided by the embodiment.
As shown in fig. 7, the man-machine interaction method for the intelligent robot provided by this embodiment obtains multi-modal interaction information about the current user in step S701. Subsequently, the method determines the interaction intention of the current user according to the multi-modal interaction information in step S702.
In this embodiment, the method preferably determines the interaction intention of the user by using a preset intention map. The interaction intention can be regarded as the robot's understanding, from its own perspective, of what the user expects to achieve under a certain theme or topic in the human-computer interaction process. Because the content related to an interaction topic is broad, the method uses the intention map to mine and determine the information the user needs to obtain from the robot in the subsequent human-computer interaction (i.e., the information the robot needs to feed back to the user).
Specifically, in this embodiment, when determining the interaction intention of the user according to the interaction topic, the method first determines the node corresponding to the interaction topic in the preset intention map, then determines the nodes connected to it, i.e., the terminal nodes of the edges that take the topic node as the initial node, and determines the user interaction intention according to these terminal nodes.
Since there may be multiple nodes connected to the initial node, the method may determine multiple terminal nodes. In this situation, in this embodiment, the method first determines a plurality of candidate user intentions according to the nodes connected to the initial node, then ranks the candidate user intentions by confidence, and determines the required user intention according to the ranking result.
Specifically, in this embodiment, the method ranks the candidate user intentions according to the weight of each node connecting line in the preset intention picture, and selects the candidate user intention with the largest weight as the final required user intention.
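The node-and-edge selection described above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation: the map contents, weights, and the names `INTENTION_MAP` and `pick_intention` are all assumptions.

```python
from typing import Optional

# Hypothetical intention map, modeled as a weighted adjacency list:
# initial node (interaction topic) -> [(terminal node, edge weight)].
# All topics, intentions, and weights here are illustrative.
INTENTION_MAP = {
    "picture book": [
        ("recommend a new picture book", 0.7),
        ("explain the current page", 0.2),
        ("ask a quiz question", 0.1),
    ],
}

def pick_intention(topic: str) -> Optional[str]:
    """Rank the candidate intentions by edge weight and return the top one."""
    candidates = INTENTION_MAP.get(topic)
    if not candidates:
        return None
    # Confidence ranking: sort by weight, descending, keep the best candidate.
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    return ranked[0][0]
```

Under this sketch, `pick_intention("picture book")` returns the highest-weight candidate, and an unknown topic yields no intention at all.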
Of course, in other embodiments of the invention, the method may also use other reasonable ways to determine the user intent from the multi-modal input information, and the invention is not limited thereto.
Subsequently, the method determines in step S703 whether the interaction intention satisfies the preset active push condition. If so, the method preferably generates corresponding picture book recommendation information according to the interaction intention in step S704.
For example, when the user inputs voice interaction information such as "can you read the XX book" to the intelligent robot, the method determines the interaction intention of the current user by performing interaction intention recognition on the voice interaction information. Then, based on the interaction intention, the method queries the picture-book knowledge graph to determine whether the related content of the XX book can be pushed. If so, the method generates and outputs corresponding indication information to prompt the current user to take out the book, and then reads the related content of the book to the user, thereby achieving active pushing of the picture book.
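The example flow above (intention recognition followed by a knowledge-graph check) can be mocked as follows. This is a hedged toy sketch: the phrase pattern, the book title, and the names `BOOK_KNOWLEDGE`, `recognize_intention`, and `active_push` are assumptions, and the "knowledge graph" is a plain dictionary standing in for a real query service.

```python
from typing import Optional

# Mocked picture-book knowledge graph; a real system would query an
# actual knowledge-graph service instead of this dictionary.
BOOK_KNOWLEDGE = {
    "The Very Hungry Caterpillar": {"pushable": True},
}

def recognize_intention(utterance: str) -> Optional[str]:
    """Toy intention recognizer: extract the book title from a read request."""
    marker = "can you read "
    if utterance.lower().startswith(marker):
        return utterance[len(marker):].rstrip("?").strip()
    return None

def active_push(utterance: str) -> Optional[str]:
    """Return an indication prompt if the preset active push condition is met."""
    title = recognize_intention(utterance)
    if title and BOOK_KNOWLEDGE.get(title, {}).get("pushable"):
        return f"Please take out '{title}' and we can read it together."
    return None
```

If the recognized book has pushable content, the robot emits the prompt; otherwise it stays silent and continues the current operation.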
It should be noted that in other embodiments of the present invention, the method may also combine one or more of the above embodiments to obtain a new human-computer interaction method, so as to realize active pushing of picture books.
The human-computer interaction method for the intelligent robot provided by this embodiment is described as being implemented in a computer system. The computer system may be provided, for example, in a control core processor of the robot. For example, the method described herein may be implemented as software executed by a CPU in a robot operating system. The functionality described herein may be implemented as a set of program instructions stored in a non-transitory tangible computer-readable medium.
When implemented in this manner, the computer program comprises a set of instructions which, when executed by a computer, cause the computer to perform a method capable of carrying out the functions described above. The program logic may be installed, temporarily or permanently, in a non-transitory tangible computer-readable medium, such as a read-only memory chip, computer memory, a disk, or another storage medium. In addition to being implemented in software, the logic described herein may be embodied using discrete components, integrated circuits, programmable logic used in conjunction with a programmable logic device such as a field-programmable gate array (FPGA) or a microprocessor, or any other device, including any combination thereof. All such embodiments are intended to fall within the scope of the present invention.
It can be seen from the above description that the human-computer interaction method for the intelligent robot provided by the invention can actively judge whether a picture book story needs to be pushed to the user, thereby avoiding the poor user experience caused by a robot that can only passively read the picture book it has recognized. The method determines the actual state of the user according to the obtained multi-modal interaction information about the user, and further adjusts the picture book reading state of the intelligent robot according to the user's actual state (such as the page-turning situation, emotional state, and interaction intention), so that the intelligent robot behaves more vividly and understands the actual needs of the user, thereby improving the user experience and user stickiness of the intelligent robot.
Meanwhile, the invention also provides a human-computer interaction device for the intelligent robot. Fig. 8 shows a schematic structural diagram of the human-computer interaction device in this embodiment.
As shown in Fig. 8, the human-computer interaction device provided by this embodiment preferably includes: an interaction information acquisition module 801, a picture book recommendation information generation module 802, and a picture book pushing module 803. The interaction information acquisition module 801 is used for obtaining multi-modal interaction information about the current user.
The picture book recommendation information generation module 802 is connected to the interaction information acquisition module 801 and determines whether a preset active push condition is satisfied according to the multi-modal interaction information transmitted by the interaction information acquisition module 801. If the preset active push condition is satisfied, the picture book recommendation information generation module 802 generates and outputs corresponding picture book recommendation information according to the multi-modal interaction information.
In this embodiment, the picture book pushing module 803 is an optional module and is connected to the picture book recommendation information generation module 802. The picture book pushing module 803 can obtain feedback information of the current user for the picture book recommendation information and, according to the feedback information, push a corresponding picture book or continue the current operation.
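The three-module structure of Fig. 8 can be sketched as plain Python classes. This is a structural illustration only: the class names, the failure threshold, and the returned strings are assumptions, not the patent's implementation.

```python
from typing import Optional

class InteractionInfoAcquisitionModule:          # plays the role of module 801
    def acquire(self) -> dict:
        # In a real device this would bundle voice, vision, and touch input.
        return {"page_recognition_failures": 3}

class RecommendationGenerationModule:            # plays the role of module 802
    MAX_FAILURES = 2  # illustrative threshold for "multiple failures"

    def generate(self, info: dict) -> Optional[str]:
        # One preset active push condition: repeated page-recognition failures.
        if info.get("page_recognition_failures", 0) > self.MAX_FAILURES:
            return "Shall we switch to another picture book?"
        return None

class PushModule:                                # optional module 803
    def handle_feedback(self, feedback: str) -> str:
        # Push a new book or continue, depending on the user's feedback.
        return "push new picture book" if feedback == "yes" else "continue current operation"
```

Chaining `acquire → generate → handle_feedback` mirrors the data flow between modules 801, 802, and 803 described above.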
It should be noted that in this embodiment, the principles and processes by which the interaction information acquisition module 801, the picture book recommendation information generation module 802, and the picture book pushing module 803 implement their respective functions are similar to those disclosed in steps S101 to S105 above, and their specific contents are therefore not described here again.
The invention also provides a child-specific smart device, which comprises a processor and a storage device, wherein the storage device stores a program, and the processor can execute the program in the storage device to implement the method described above.
The present invention also provides a readable storage medium storing program code that, when executed by an operating system, can implement the human-computer interaction method for an intelligent robot described above. In addition, the invention also provides a human-computer interaction system for the intelligent robot, which is equipped with an operating system capable of loading and executing the program code on the storage medium.
Specifically, as shown in Fig. 9, in this embodiment the human-computer interaction system for the intelligent robot comprises a child-specific device 901 and a cloud server 902. The child-specific device 901, in cooperation with the cloud server 902, can execute program code that implements the aforementioned human-computer interaction method for the intelligent robot, and then push a corresponding picture book story to the user.
Specifically, in this embodiment, the child-specific device 901 is configured to obtain multi-modal interaction information about the current user and transmit it to the cloud server 902. For example, the child-specific device 901 may determine the reading state of the current user 903 by capturing images of the picture book 904; meanwhile, the child-specific device 901 may also obtain corresponding picture book content information from those images, and generate and output corresponding picture book voice information according to the content information, thereby implementing the picture book reading function.
The cloud server 902 determines whether the preset active push condition is satisfied according to the multi-modal interaction information transmitted by the child-specific device 901. If the preset active push condition is satisfied, the cloud server 902 generates corresponding picture book recommendation information according to the multi-modal interaction information and transmits it to the child-specific device 901. After receiving the picture book recommendation information, the child-specific device 901 outputs it to the current user 903.
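The device/cloud split described above can be sketched as two cooperating classes: the device collects interaction information and renders output, while the cloud runs the push decision. The class names, the idle-time condition, and the returned strings are illustrative assumptions.

```python
from typing import Optional

class CloudServer:                                # plays the role of 902
    IDLE_LIMIT_SECONDS = 30  # illustrative threshold

    def decide(self, info: dict) -> Optional[str]:
        # Example preset active push condition: the user has been idle too long.
        if info.get("idle_seconds", 0) > self.IDLE_LIMIT_SECONDS:
            return "How about we read a new story?"
        return None

class ChildDevice:                                # plays the role of 901
    def __init__(self, cloud: CloudServer):
        self.cloud = cloud

    def step(self, info: dict) -> str:
        # The heavy decision logic is offloaded to the cloud server;
        # the device only forwards interaction info and renders the result.
        recommendation = self.cloud.decide(info)
        return recommendation if recommendation else "<continue reading aloud>"
```

Keeping `decide` on the cloud side is what lets the device stay small and cheap, as the next paragraph explains.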
In this embodiment, the human-computer interaction system utilizes the powerful data processing capability of the cloud server 902 to quickly determine whether the preset active push condition is satisfied and to generate the corresponding picture book recommendation information, which reduces the requirement on the data processing capability of the child-specific device 901, improves interaction efficiency, and effectively reduces the volume and cost of the child-specific device 901.
It is noted that, in different embodiments of the present invention, part of the data processing functions of the human-computer interaction system may optionally also be implemented by the child-specific device 901, and the present invention is not limited thereto.
In different embodiments of the present invention, the child-specific device 901 may be a smart device with input/output modules supporting sensing, control, and the like, such as a tablet computer, a robot, a mobile phone, a story machine, or a picture book reading robot, which can tell stories to a child, answer questions posed by the child in real time, and exhibit rich behavior.
It is to be understood that the disclosed embodiments of the invention are not limited to the particular structures or process steps disclosed herein, but extend to equivalents thereof as would be understood by those skilled in the relevant art. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Thus, the appearances of the phrase "one embodiment" or "an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment.
While the above examples are illustrative of the principles of the present invention in one or more applications, it will be apparent to those of ordinary skill in the art that various changes in form, usage and details of implementation can be made without departing from the principles and concepts of the invention. Accordingly, the invention is defined by the appended claims.

Claims (10)

1. A human-computer interaction method for an intelligent robot, wherein in a picture book reading mode, the method comprises:
step one, obtaining multi-modal interaction information about a current user;
step two, judging whether a preset active push condition is met according to the multi-modal interaction information, and if so, executing step three; wherein in the second step, whether the preset active push condition is met is judged by at least one of the following operations:
if recognition of the current page fails multiple times during picture book reading, judging that the preset active push condition is met;
determining the interaction intention of the current user according to the multi-modal interaction information by using a preset intention map, and judging whether the preset active push condition is met according to the interaction intention;
wherein the process of determining the interaction intention of the current user according to the multi-modal interaction information by using the preset intention map comprises:
determining a node corresponding to a current interaction topic in the preset intention map, determining, with the node corresponding to the interaction topic as an initial node, the terminal nodes connected to the initial node in the preset intention map, and determining the user interaction intention according to a terminal node;
when there are a plurality of terminal nodes, determining a plurality of candidate user intentions according to the plurality of terminal nodes connected to the initial node, then ranking the determined candidate user intentions by confidence, and determining the required user intention according to the ranking result;
and step three, generating and outputting corresponding picture book recommendation information according to the multi-modal interaction information.
2. The human-computer interaction method according to claim 1, wherein in the second step, it is further judged whether the preset active push condition is met by: if no new picture book information input by the current user can be acquired within a first preset duration after the reading of a picture book is finished, judging that the preset active push condition is met.
3. The human-computer interaction method according to claim 1, wherein if multiple failures of current page recognition are detected during picture book reading, first query information is generated in the third step, feedback information of the current user for the first query information is obtained, and the reading of the current picture book is continued or the picture book is switched according to the feedback information.
4. The method according to any one of claims 1 to 3, wherein in the second step, it is further judged whether the preset active push condition is met by: if it is detected that the time the current user stays on the current page exceeds a second preset duration, judging that the preset active push condition is met.
5. The method according to claim 4, wherein in the second step, it is further judged whether the preset active push condition is met by: determining emotion information of the current user according to the multi-modal interaction information, wherein if the emotion information of the current user belongs to a preset positive emotion or a preset negative emotion, it is judged that the preset active push condition is met;
and in the third step, the picture book recommendation information is generated according to the emotion information of the user.
6. The method according to claim 5, wherein the method further comprises:
step four, acquiring feedback information of the current user for the picture book recommendation information, and pushing a corresponding picture book or continuing the current operation according to the feedback information.
7. A readable storage medium having stored thereon program code executable to perform the method steps of any of claims 1-6.
8. A human-computer interaction system oriented towards an intelligent robot, characterized in that the system is equipped with an operating system capable of loading and executing program code on a readable storage medium according to claim 7.
9. A human-computer interaction device for an intelligent robot, the device comprising:
the interaction information acquisition module is used for acquiring multi-modal interaction information about the current user;
the picture book recommendation information generation module is used for judging whether a preset active push condition is met according to the multi-modal interaction information, and if so, generating and outputting corresponding picture book recommendation information according to the multi-modal interaction information;
wherein the picture book recommendation information generation module is configured to judge whether the preset active push condition is met through at least one of the following operations:
if recognition of the current page fails multiple times during picture book reading, judging that the preset active push condition is met;
determining the interaction intention of the current user according to the multi-modal interaction information by using a preset intention map, and judging whether the preset active push condition is met according to the interaction intention;
wherein the picture book recommendation information generation module determines the interaction intention of the current user by using the preset intention map by performing the following operations:
determining a node corresponding to a current interaction topic in the preset intention map, determining, with the node corresponding to the interaction topic as an initial node, the terminal nodes connected to the initial node in the preset intention map, and determining the user interaction intention according to a terminal node;
and when there are a plurality of terminal nodes, determining a plurality of candidate user intentions according to the plurality of terminal nodes connected to the initial node, further ranking the determined candidate user intentions by confidence, and determining the required user intention according to the ranking result.
10. A child-specific smart device, comprising a processor and a storage device, wherein the storage device stores a program, and the processor is configured to execute the program stored in the storage device to implement the method according to any one of claims 1 to 6.
CN201811632185.2A 2018-12-29 2018-12-29 Intelligent robot-oriented man-machine interaction method and device Active CN109857929B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811632185.2A CN109857929B (en) 2018-12-29 2018-12-29 Intelligent robot-oriented man-machine interaction method and device

Publications (2)

Publication Number Publication Date
CN109857929A CN109857929A (en) 2019-06-07
CN109857929B true CN109857929B (en) 2021-06-15

Family

ID=66893117

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110929143A (en) * 2019-10-12 2020-03-27 安徽奇智科技有限公司 Method and system for identifying picture book and electronic equipment
CN111028290B (en) * 2019-11-26 2024-03-08 北京光年无限科技有限公司 Graphic processing method and device for drawing book reading robot
CN110941774A (en) * 2019-12-05 2020-03-31 深圳前海达闼云端智能科技有限公司 Service recommendation method
CN111723653B (en) * 2020-05-12 2023-09-26 北京光年无限科技有限公司 Method and device for reading drawing book based on artificial intelligence

Citations (6)

Publication number Priority date Publication date Assignee Title
CN102830902A (en) * 2012-06-29 2012-12-19 宇龙计算机通信科技(深圳)有限公司 Method and system for automatically scrolling page
CN105511608A (en) * 2015-11-30 2016-04-20 北京光年无限科技有限公司 Intelligent robot based interaction method and device, and intelligent robot
CN105894873A (en) * 2016-06-01 2016-08-24 北京光年无限科技有限公司 Child teaching method and device orienting to intelligent robot
CN106598241A (en) * 2016-12-06 2017-04-26 北京光年无限科技有限公司 Interactive data processing method and device for intelligent robot
CN107506377A (en) * 2017-07-20 2017-12-22 南开大学 This generation system is painted in interaction based on commending system
CN107783650A (en) * 2017-09-18 2018-03-09 北京光年无限科技有限公司 A kind of man-machine interaction method and device based on virtual robot

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20060257830A1 (en) * 2005-05-13 2006-11-16 Chyi-Yeu Lin Spelling robot



Similar Documents

Publication Publication Date Title
CN109857929B (en) Intelligent robot-oriented man-machine interaction method and device
CN107728780B (en) Human-computer interaction method and device based on virtual robot
US11302302B2 (en) Method, apparatus, device and storage medium for switching voice role
CN108108340B (en) Dialogue interaction method and system for intelligent robot
KR102270394B1 (en) Method, terminal, and storage medium for recognizing an image
KR102411766B1 (en) Method for activating voice recognition servive and electronic device for the same
CN106847274B (en) Man-machine interaction method and device for intelligent robot
CN106294854B (en) Man-machine interaction method and device for intelligent robot
CN108664472B (en) Natural language processing method, device and equipment
CN107016070B (en) Man-machine conversation method and device for intelligent robot
CN110598576A (en) Sign language interaction method and device and computer medium
CN109858391A (en) It is a kind of for drawing the man-machine interaction method and device of robot
US20190333514A1 (en) Method and apparatus for dialoguing based on a mood of a user
KR102367862B1 (en) Device and method for supporting daily tasks for ADHD childrendaily task performing support apparatus for ADHD children and method therefor
CN104252287A (en) Interactive device and method for improving expressive ability on basis of same
CN110825164A (en) Interaction method and system based on wearable intelligent equipment special for children
CN111538456A (en) Human-computer interaction method, device, terminal and storage medium based on virtual image
CN112912955B (en) Electronic device and system for providing speech recognition based services
CN111968641A (en) Voice assistant wake-up control method and device, storage medium and electronic equipment
CN111933137B (en) Voice wake-up test method and device, computer readable medium and electronic equipment
CN106599179B (en) Man-machine conversation control method and device integrating knowledge graph and memory graph
EP3677392B1 (en) Robot and method of controlling the same
CN110335599B (en) Voice control method, system, equipment and computer readable storage medium
CN109087644B (en) Electronic equipment, voice assistant interaction method thereof and device with storage function
CN116578686A (en) Session processing method, device, equipment and storage medium based on man-machine interaction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant