CN110480656B - Accompanying robot, accompanying robot control method and accompanying robot control device - Google Patents

Accompanying robot, accompanying robot control method and accompanying robot control device

Info

Publication number
CN110480656B
CN110480656B (application number CN201910848250.3A)
Authority
CN
China
Prior art keywords
instruction
information
prediction
biological characteristic
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910848250.3A
Other languages
Chinese (zh)
Other versions
CN110480656A (en)
Inventor
张腾宇
张静莎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Research Center for Rehabilitation Technical Aids
Original Assignee
National Research Center for Rehabilitation Technical Aids
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Research Center for Rehabilitation Technical Aids
Priority to CN201910848250.3A
Publication of CN110480656A
Application granted
Publication of CN110480656B

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 Manipulators not otherwise provided for
    • B25J11/008 Manipulators for service tasks
    • B25J11/0085 Cleaning
    • B25J13/00 Controls for manipulators
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)

Abstract

The application provides an accompanying robot, and a control method and a control device for the accompanying robot. The accompanying robot comprises a biological characteristic acquisition module and a processor. The biological characteristic acquisition module is used for acquiring biological characteristic information of a user at preset intervals and transmitting the biological characteristic information to the processor. The processor is used for receiving the biological characteristic information transmitted by the biological characteristic acquisition module and predicting the emotion of the user based on the biological characteristic information; inputting the predicted emotion into a pre-trained first instruction prediction model, determining a control instruction matched with the biological characteristic information, and controlling the accompanying robot to perform an operation corresponding to the control instruction. Such an accompanying robot improves the accuracy of predicting the user's demand instructions.

Description

Accompanying robot, accompanying robot control method and accompanying robot control device
Technical Field
The application relates to the technical field of automatic control, in particular to an accompanying robot, and a control method and device of the accompanying robot.
Background
With the aging of the Chinese population and the increase in the number of only children, many adult children cannot accompany their elderly parents at all times, so the proportion of empty-nest elderly among the aged is increasing day by day. The mental health of the elderly has become an important social problem. According to statistics, over 70 percent of the elderly show symptoms of psychological loneliness, especially those who are frail, disabled, living alone, of advanced age or widowed.
In order to address the emotional companionship and daily-care needs of the elderly, it has been proposed in recent years to use robots instead of humans to accompany the elderly. When an existing accompanying robot attends to the elderly, it generally can only execute operations according to instructions input by the elderly user. However, because the services required under different moods may differ, providing services solely according to instructions, without considering the user's mood changes, cannot meet the emotional companionship needs of the elderly.
Disclosure of Invention
In view of the above, an object of the present application is to provide an accompanying robot, an accompanying robot control method and an accompanying robot control device, so as to provide a service reflecting real emotional needs of a user and improve the precision of user demand instruction prediction.
In a first aspect, an embodiment of the application provides an accompanying robot, which includes a biological feature acquisition module and a processor;
the biological characteristic acquisition module is used for acquiring biological characteristic information of a user every preset time length and transmitting the biological characteristic information to the processor;
the processor is used for receiving the biological characteristic information transmitted by the biological characteristic acquisition module and predicting the emotion of the user based on the biological characteristic information; and inputting the predicted emotion into a pre-trained first instruction prediction model, determining a control instruction matched with the biological characteristic information, and controlling the accompanying robot to perform an operation corresponding to the control instruction.
In one possible design, the biometric acquisition module includes at least one of:
the system comprises a sound acquisition module and an image acquisition module;
for the case that the biometric acquisition module comprises a sound acquisition module, the biometric information comprises sound information;
for the case where the biometric acquisition module comprises an image acquisition module, the biometric information comprises facial image information.
In one possible design, the processor, when predicting the emotion of the user based on the biometric information, is specifically configured to:
extracting voice features from the sound information, wherein the voice features comprise short-time energy, short-time zero-crossing rate, fundamental tone frequency, formant features, speech rate and Mel cepstrum coefficients;
inputting the voice features into a voice recognition submodel to obtain a first score indicating that the voice features belong to any one preset emotion;
inputting the sound information into a semantic recognition submodel, and extracting semantic keywords from the sound information;
determining, based on the semantic keywords, a second score indicating that the sound information belongs to any one preset emotion;
inputting the facial image information into a facial recognition submodel, and determining a third score indicating that the facial image information belongs to any one preset emotion;
and carrying out weighted summation on the first score, the second score and the third score according to preset weights, and determining the emotion of the user based on the summed scores.
In one possible design, the accompanying robot further includes: a receiving module and a storage module;
the receiving module is used for receiving an instruction of a user and transmitting the instruction to the processor;
the processor is further configured to: when the instruction transmitted by the receiving module is received, controlling the biological characteristic acquisition module to acquire the biological characteristic information of the user; determining behavior information of the user based on the biological characteristic information, and transmitting the instruction, the time for receiving the instruction, and the behavior information to the storage module; the behavior information of the user is used for representing the activity state of the user;
the storage module is used for storing the instruction, the time for receiving the instruction and the behavior information.
In one possible design, the processor is further configured to train the first instruction prediction model by:
obtaining at least one historical emotion prediction result of the user and the instruction associated with each historical emotion prediction result;
inputting the historical emotion prediction result into an instruction prediction model to be trained to obtain a prediction instruction corresponding to the historical emotion prediction result;
performing a current round of training on the instruction prediction model to be trained based on the prediction instruction and the instruction associated with the historical emotion prediction result;
and obtaining the first instruction prediction model through multi-round training of the instruction prediction model.
In one possible design, the processor, after receiving the biometric information, is further configured to:
determining behavior information of the user based on the biometric information;
inputting the behavior information of the user and the time for receiving the biological characteristic information into a pre-trained second instruction prediction model, determining a control instruction matched with the biological characteristic information, and controlling the accompanying robot to perform an operation corresponding to the control instruction.
In one possible design, the processor trains the second instruction prediction model according to the following method:
acquiring an instruction stored by the storage module, the time of receiving the instruction and the behavior information corresponding to the instruction;
inputting the time of receiving the instruction and the behavior information corresponding to the instruction into a prediction model to be trained, and outputting a prediction instruction;
performing a current round of training on the prediction model based on the instruction obtained from the storage module and the prediction instruction;
and obtaining the second instruction prediction model through multi-round training of the prediction model.
In one possible design, the accompanying robot further includes: a speech synthesis module;
the speech synthesis module is used for extracting audio features of a template voice and, when receiving a speech playing instruction issued by the processor, producing speech based on the audio features.
In one possible design, the accompanying robot further includes: an alarm module;
and the alarm module is used for comparing the behavior information with the abnormal behavior information stored in the storage module, and sending alarm information to the pre-bound equipment when the comparison is successful.
In one possible design, the accompanying robot further includes: the device comprises a distance detection module and a moving module;
the distance detection module is used for detecting the distance between the accompanying robot and the user and sending the distance to the moving module;
and the moving module is used for controlling the accompanying robot to move towards the position where the user is located when the distance is detected to be greater than the preset distance.
In one possible design, the accompanying robot further includes: a cleaning module;
and the cleaning module is used for controlling a cleaning robot connected with the accompanying robot to clean when receiving a cleaning instruction sent by the processor.
In one possible design, infrared positioning devices are arranged on the main bodies of the accompanying robot and the cleaning robot;
and after the cleaning robot finishes cleaning, the cleaning robot re-establishes connection with the accompanying robot based on the infrared positioning devices.
In a second aspect, an embodiment of the present application further provides a method for controlling an accompanying robot, including:
receiving biometric information of a user;
predicting an emotion of the user based on the biometric information;
inputting the predicted emotion into a first instruction prediction model trained in advance, and determining a control instruction matched with the biological characteristic information;
and controlling the accompanying robot to perform an operation corresponding to the control instruction.
In a third aspect, an embodiment of the present application further provides a control device for an accompanying robot, including:
the receiving module is used for receiving the biological characteristic information of the user;
a prediction module for predicting an emotion of the user based on the biometric information;
the determining module is used for inputting the predicted emotion into a first instruction prediction model trained in advance and determining a control instruction matched with the biological characteristic information;
and the control module is used for controlling the accompanying robot to perform operation corresponding to the control instruction.
In a fourth aspect, an embodiment of the present application further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is operating, the machine-readable instructions when executed by the processor performing the steps of the second aspect.
In a fifth aspect, the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and the computer program is executed by a processor to perform the steps in the second aspect.
According to the accompanying robot, the accompanying robot control method and the accompanying robot control device provided by the application, the biological feature information of a user is collected through the biological feature collection module arranged on the accompanying robot; the processor arranged on the robot predicts the emotion of the user according to the biological feature information; and the predicted emotion is input into the pre-trained first instruction prediction model to determine the control instruction matched with the biological feature information. In this way, the determined control instruction reflects the real emotional needs of the user, and the accuracy of user demand instruction prediction is improved.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 shows a schematic architecture diagram of an accompanying robot provided in an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating a processing flow of the processor after receiving the biometric information transmitted by the biometric acquisition module according to the embodiment of the present application;
FIG. 3 is a flow chart illustrating a method for predicting a user's emotion according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating a first instruction prediction model training method provided by an embodiment of the present application;
fig. 5 shows a schematic structural diagram of another possible accompanying robot provided in the embodiment of the present application;
FIG. 6 is a flowchart illustrating a second instruction prediction model training method provided by an embodiment of the present application;
fig. 7 shows a schematic flow chart of a control method of an accompanying robot provided in an embodiment of the present application;
fig. 8 is a schematic diagram illustrating an architecture of a control device of an accompanying robot according to an embodiment of the present application;
fig. 9 shows a schematic structural diagram of an electronic device 900 provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
For the understanding of the present embodiment, a detailed description will first be given of an accompanying robot disclosed in the embodiments of the present application.
Referring to fig. 1, which is an architecture schematic diagram of an accompanying robot provided in an embodiment of the present application, the accompanying robot includes a biometric acquisition module and a processor.
Specifically, the biometric feature acquisition module is configured to acquire biometric feature information of the user every preset time period, and then transmit the acquired biometric feature information to the processor.
The biological characteristic acquisition module can comprise at least one of a sound acquisition module and an image acquisition module.
When the biometric feature capture module includes a voice capture module, the biometric feature information includes voice information, and when the biometric feature capture module includes an image capture module, the biometric feature information includes facial image information.
After receiving the biometric information transmitted by the biometric acquisition module, the processor may perform the processing procedure as described in fig. 2, including the following steps:
Step 201, predicting the emotion of the user based on the biometric information.
In one possible embodiment, the processor, in predicting the emotion of the user based on the biometric information, may perform a method as shown in fig. 3, including:
and 301, extracting the voice features in the voice information.
The speech features in the sound information include short-term energy, short-term zero-crossing rate, fundamental tone frequency, formant features, speech rate, mel cepstrum coefficient, and a specific method for extracting the speech features in the sound information will not be described herein.
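By way of a non-limiting illustration, the following is a minimal sketch of how such frame-level features could be computed, assuming 16 kHz mono audio and the open-source librosa library (the embodiment does not prescribe any particular toolkit); formant and speech-rate features are omitted for brevity.

```python
# A minimal sketch of the feature-extraction step, assuming librosa is available.
import numpy as np
import librosa

def extract_speech_features(wav_path, frame_length=400, hop_length=160):
    y, sr = librosa.load(wav_path, sr=16000)

    # Short-time energy: sum of squared samples per frame.
    frames = librosa.util.frame(y, frame_length=frame_length, hop_length=hop_length)
    short_time_energy = np.sum(frames ** 2, axis=0)

    # Short-time zero-crossing rate.
    zcr = librosa.feature.zero_crossing_rate(
        y, frame_length=frame_length, hop_length=hop_length)[0]

    # Fundamental (pitch) frequency via probabilistic YIN; NaN for unvoiced frames.
    f0, _, _ = librosa.pyin(y, fmin=50, fmax=500, sr=sr)

    # Mel-frequency cepstral coefficients.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

    # Pool frame-level features into one fixed-length vector for the submodel.
    return np.concatenate([
        [short_time_energy.mean(), short_time_energy.std()],
        [zcr.mean(), zcr.std()],
        [np.nanmean(f0), np.nanstd(f0)],
        mfcc.mean(axis=1), mfcc.std(axis=1),
    ])
```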
Step 302, inputting the voice characteristics to the voice recognition submodel to obtain a first score of the voice characteristics belonging to any one preset emotion.
In one possible embodiment, the preset emotions may include happy, sad, dysphoric and angry. After a speech feature is input into the speech recognition submodel, a first score can be obtained indicating that the speech feature belongs to any one of the preset emotions.
Illustratively, after a speech feature is input into the speech recognition submodel, the first scores obtained may be 80 for happy, 30 for sad, 20 for dysphoric, and 10 for angry.
Step 303, inputting the sound information into the semantic recognition submodel, and extracting semantic keywords from the sound information.
In one possible embodiment of the present application, after the sound information is input into the semantic recognition submodel, the semantic recognition submodel may convert the sound information into corresponding text information, segment the text information by using an N-gram in the semantic recognition submodel to obtain a word set corresponding to the text information, compare the word set with keywords pre-stored in a database, and determine the successfully matched words as semantic keywords.
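As an illustrative sketch only, the keyword-matching part of this step could look as follows; the upstream speech-to-text conversion and the keyword table below are assumptions of the example and are not specified by the embodiment.

```python
# Minimal sketch of the keyword matching described above. The transcript is
# assumed to come from an upstream speech-to-text step (not shown), and the
# keyword table is an illustrative placeholder, not content of the patent.
EMOTION_KEYWORDS = {
    "happy":     {"great", "wonderful", "thank you"},
    "sad":       {"lonely", "miss you", "tired"},
    "dysphoric": {"annoying", "noisy", "leave me alone"},
    "angry":     {"stop it", "go away"},
}

def extract_semantic_keywords(transcript):
    """Return the pre-stored keywords that appear in the transcript
    (a stand-in for N-gram segmentation plus database comparison)."""
    return {kw for kws in EMOTION_KEYWORDS.values() for kw in kws if kw in transcript}

def second_score(transcript):
    """Second score: number of matched keywords per preset emotion."""
    return {emotion: sum(kw in transcript for kw in kws)
            for emotion, kws in EMOTION_KEYWORDS.items()}
```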
Step 304, determining, based on the semantic keywords, a second score indicating that the sound information belongs to any one preset emotion.
Step 305, inputting the facial image information into the facial recognition submodel, and determining a third score indicating that the facial image information belongs to any one preset emotion.
Specifically, before the facial image information is input into the facial recognition submodel, image features of the facial image information, such as geometric features, deformation features and motion features of each facial part, may be extracted, and the image features may then be input into the facial recognition submodel to determine the third score indicating that the facial image information belongs to any one of the preset emotions.
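Purely as an illustration of the geometric features mentioned above, the sketch below computes a few normalised distances from a 68-point facial landmark array; the landmark detector itself is assumed to be provided by an upstream component and is not part of the embodiment.

```python
# Minimal sketch of geometric facial features, assuming an (68, 2) landmark
# array in the common iBUG 68-point convention from any upstream detector.
import numpy as np

def facial_geometric_features(landmarks):
    """landmarks: np.ndarray of shape (68, 2) with (x, y) per point."""
    def dist(a, b):
        return np.linalg.norm(landmarks[a] - landmarks[b])

    face_width = dist(0, 16) + 1e-6             # jaw corner to corner, for normalisation
    mouth_width    = dist(48, 54) / face_width  # smile / frown extent
    mouth_openness = dist(51, 57) / face_width  # surprise / speech
    eye_openness   = (dist(37, 41) + dist(44, 46)) / (2 * face_width)
    brow_raise     = (dist(19, 37) + dist(24, 44)) / (2 * face_width)
    return np.array([mouth_width, mouth_openness, eye_openness, brow_raise])
```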
It should be noted that steps 301 to 302, steps 303 to 304, and step 305 need not be executed in any particular order.
Step 306, carrying out weighted summation on the first score, the second score and the third score according to preset weights, and determining the emotion of the user based on the summed scores.
Wherein, when determining the emotion of the user based on the summed score, the emotion having the highest score may be determined as the emotion of the user.
In one possible application scenario, the accompanying robot may only collect voice information of the user or only collect facial image information of the user, and when predicting the emotion of the user based on the biometric information, a score corresponding to the information that is not collected may be determined to be 0.
For example, if only the voice information of the user is collected, the third score is determined to be 0, and when the score of the emotion of the user is determined, the first score and the second score are weighted and summed according to a preset weight.
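A minimal sketch of the weighted fusion in step 306 is given below; the weight values are illustrative placeholders, since the embodiment only states that the weights are preset.

```python
# Illustrative sketch of step 306. The weights are placeholders; a score dict
# for a missing modality is simply all zeros, as described above.
PRESET_WEIGHTS = (0.3, 0.3, 0.4)   # weights for the first, second and third scores

def fuse_scores(first, second, third, weights=PRESET_WEIGHTS):
    """first/second/third: dicts mapping each preset emotion to a score.
    Returns the predicted emotion and the weighted sums."""
    w1, w2, w3 = weights
    summed = {e: w1 * first.get(e, 0) + w2 * second.get(e, 0) + w3 * third.get(e, 0)
              for e in first}
    # The emotion with the highest weighted sum is taken as the user's emotion.
    return max(summed, key=summed.get), summed

# Example: only sound information was collected, so the third score is all zeros.
emotion, scores = fuse_scores(
    {"happy": 80, "sad": 30, "dysphoric": 20, "angry": 10},
    {"happy": 1, "sad": 0, "dysphoric": 0, "angry": 0},
    {"happy": 0, "sad": 0, "dysphoric": 0, "angry": 0},
)
```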
In an example of the present application, the face recognition sub-model, the semantic recognition sub-model, and the voice recognition sub-model may be one of a support vector machine, a convolutional neural network, and a K-nearest neighbor classification model.
Step 202, inputting the predicted emotion into a first instruction prediction model trained in advance, and determining a control instruction matched with the biological characteristic information.
Step 203, controlling the accompanying robot to perform the operation corresponding to the control instruction.
In one possible embodiment, the processor may be trained according to the method shown in fig. 4 when training the first instruction prediction model, and the method includes the following steps:
step 401, obtaining at least one historical emotion prediction result of the user and an instruction associated with each historical emotion prediction result.
In one possible design, the accompanying robot further comprises a storage module, historical emotion prediction results and instructions associated with each historical emotion prediction result are stored in the storage module, and the processor can obtain the historical emotion prediction results and the instructions associated with each historical emotion prediction result from the storage module when training the instruction prediction model.
The historical emotion prediction results and their associated instructions may be obtained as follows: the biological characteristic acquisition module acquires biological characteristic information of the user at preset intervals and transmits it to the processor; the processor predicts the emotion of the user based on the biological characteristic information; if an instruction input by the user is received before the biological characteristic information transmitted by the acquisition module is received again, the instruction input by the user is determined as the instruction associated with the predicted emotion; and the predicted emotion and the instruction input by the user are then stored in the storage module.
Step 402, inputting the historical emotion prediction result into the instruction prediction model to be trained to obtain a prediction instruction corresponding to the historical emotion prediction result.
Step 403, performing the current round of training on the instruction prediction model to be trained based on the prediction instruction and the instruction associated with the historical emotion prediction result.
In a specific implementation, the cross entropy loss in the training process of the current round can be determined based on the prediction instruction and the instruction associated with the historical emotion prediction result, and then the model parameters of the instruction prediction model in the training process of the current round are adjusted based on the cross entropy loss.
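For illustration only, one such training round could be implemented as sketched below, assuming PyTorch and a small feed-forward classifier; the architecture, optimiser and hyper-parameters are assumptions of the example, as the embodiment only specifies the cross-entropy loss and multi-round training.

```python
# Minimal sketch of one training round of the first instruction prediction model.
import torch
import torch.nn as nn

NUM_EMOTIONS, NUM_INSTRUCTIONS = 4, 10   # illustrative sizes

model = nn.Sequential(
    nn.Linear(NUM_EMOTIONS, 32),
    nn.ReLU(),
    nn.Linear(32, NUM_INSTRUCTIONS),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_round(emotion_batch, instruction_batch):
    """emotion_batch: (B, NUM_EMOTIONS) float one-hot historical emotion predictions.
    instruction_batch: (B,) long indices of the instructions the user actually gave."""
    logits = model(emotion_batch)              # predicted instruction scores
    loss = loss_fn(logits, instruction_batch)  # cross-entropy loss of this round
    optimizer.zero_grad()
    loss.backward()                            # adjust model parameters
    optimizer.step()
    return loss.item()

# Example round with a batch of two samples (emotion one-hot, chosen instruction index).
emotions = torch.eye(NUM_EMOTIONS)[[0, 2]]
instructions = torch.tensor([3, 7])
loss_value = train_round(emotions, instructions)
```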
Step 404, obtaining the first instruction prediction model through multiple rounds of training of the instruction prediction model.
In the process, the sample data in the training process of the first instruction prediction model is derived from the instruction input by the user and the predicted emotion of the user in each instruction input process, so that the first instruction prediction model trained by the method conforms to the behavior characteristics of the current user and can provide personalized service for the user.
In one possible application scenario, if the user includes multiple persons, the processor may identify the multiple users according to the biometric information, and store the emotion prediction results of the different users and instructions associated with each emotion prediction result in different storage modules. When the processor trains the first instruction prediction model, different first instruction prediction models can be obtained through training according to the emotion prediction results stored in different storage modules and the instruction associated with each emotion prediction result. When the processor receives the biological characteristic information, the user can be identified based on the biological characteristic information, then a first instruction prediction model corresponding to the user is determined, and the control instruction is predicted based on the first instruction prediction model.
In the solution provided in this embodiment, the biometric information of the user is collected by the biometric collection module once every preset time period, and in another possible implementation, the biometric collection module may further collect the biometric information of the user after the accompanying robot receives an instruction input by the user.
Referring to fig. 5, a schematic structural diagram of another possible accompanying robot provided in the embodiment of the present application is shown, where the accompanying robot may further include a receiving module and a storing module;
the receiving module is used for receiving the instruction of a user and transmitting the received instruction to the processor;
the processor can be further used for controlling the biological characteristic acquisition module to acquire the biological characteristic information of the user when receiving the instruction transmitted by the receiving module, determining the behavior information of the user based on the biological characteristic information, and transmitting the received instruction, the instruction receiving time and the determined behavior information to the storage module;
and the storage module is used for storing the received instruction transmitted by the processor, the time for receiving the instruction and the determined behavior information.
The behavior information of the user is used for representing the activity state of the user, for example, the behavior information may include sitting still, sleeping, getting up, walking, falling, and the like.
In a possible implementation manner, after the processor receives the biological characteristic information, the processor may further determine behavior information of the user based on the biological characteristic information, then input the behavior information of the user and the time for receiving the biological characteristic information into a second instruction prediction model trained in advance, determine a control instruction matched with the biological characteristic information, and control the accompanying robot to perform an operation corresponding to the control instruction.
In one possible application scenario, if the user includes multiple persons, the processor may identify the multiple users according to the biometric information, and send the behavior information of different users and the time of receiving the biometric information to different storage modules. The processor can likewise train different second instruction prediction models according to the behavior information, the reception time and the associated instructions stored in the different storage modules; when biometric information is received, the corresponding second instruction prediction model is determined according to the biometric information, and the control instruction is then predicted.
In one possible design, the accompanying robot may further include an alarm module, and the alarm module may compare the behavior information with the abnormal behavior information stored in the storage module, and send alarm information to the pre-bound device when the comparison is successful.
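A minimal sketch of this comparison is shown below; the abnormal-behaviour set and the send_alarm callback are hypothetical stand-ins for the storage module contents and the channel to the pre-bound device.

```python
# Illustrative sketch of the alarm check. ABNORMAL_BEHAVIORS stands in for the
# abnormal behaviour information stored in the storage module, and send_alarm
# is a hypothetical callback that pushes a message to the pre-bound device.
ABNORMAL_BEHAVIORS = {"falling"}

def check_and_alarm(behavior, send_alarm):
    """Compare the detected behaviour with the stored abnormal behaviours and
    raise an alarm when the comparison succeeds."""
    if behavior in ABNORMAL_BEHAVIORS:
        send_alarm(f"Abnormal behaviour detected: {behavior}")
        return True
    return False
```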
The method for training the second instruction prediction model may refer to the method shown in fig. 6, and includes the following steps:
Step 601, acquiring the instruction stored in the storage module, the time of receiving the instruction, and the behavior information corresponding to the instruction.
Step 602, inputting the time of receiving the instruction and the behavior information corresponding to the instruction into the prediction model to be trained, and outputting a prediction instruction.
Step 603, performing the current round of training on the prediction model based on the instruction acquired from the storage module and the prediction instruction.
When the current round of training is performed on the prediction model based on the prediction instruction and the instruction acquired from the storage module, the cross entropy in the current round of training process may be determined based on the prediction instruction and the instruction acquired from the storage module, and then the model parameters of the prediction model in the current round of training process may be adjusted based on the cross entropy.
Step 604, obtaining the second instruction prediction model through multiple rounds of training of the prediction model.
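As a sketch of how the inputs of steps 601 and 602 might be encoded for the prediction model, the example below uses a cyclical encoding of the reception time and a one-hot encoding of the behaviour information; this encoding scheme is an assumption of the example, not a requirement of the embodiment.

```python
# Illustrative encoding of the second model's inputs: the reception time and
# the behaviour information. The behaviour vocabulary is a placeholder.
import math

BEHAVIORS = ["sitting still", "sleeping", "getting up", "walking", "falling"]

def encode_input(hour_of_day, behavior):
    """Return [sin(t), cos(t)] for the time of day plus a one-hot behaviour code."""
    angle = 2.0 * math.pi * hour_of_day / 24.0
    time_features = [math.sin(angle), math.cos(angle)]
    behavior_features = [1.0 if b == behavior else 0.0 for b in BEHAVIORS]
    return time_features + behavior_features

# Example: an instruction received at 07:00 while the user was getting up.
features = encode_input(7, "getting up")
```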
In one possible design, the accompanying robot may further include a speech synthesis module. The speech synthesis module may be configured to extract audio features of a template voice and, upon receiving a speech playing instruction issued by the processor, produce speech based on the extracted audio features of the template voice.
The template voice can be input manually or imported from an external device through an external interface of the accompanying robot.
In one possible design, the accompanying robot may further include a distance detection module, a moving module; the distance detection module can be used for detecting the distance between the accompanying robot and the user and sending the distance to the mobile module, and the mobile module is used for controlling the accompanying robot to move towards the position where the user is located when the distance sent by the detection module is detected to be greater than the preset distance.
When detecting the distance between the accompanying robot and the user, the distance detection module can capture images containing the user through a camera installed on the accompanying robot, determine the position of the user in the images based on image segmentation, and then measure the distance between the accompanying robot and the user by infrared ranging using an infrared device installed on the accompanying robot.
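The following sketch illustrates the resulting follow-the-user behaviour; the camera-based segmentation result, the infrared range reading and the move_towards drive command are assumed to be provided by other components and are hypothetical names.

```python
# Illustrative sketch of the moving module's decision. PRESET_DISTANCE_M is a
# placeholder threshold; user_position comes from image segmentation and
# distance_m from the infrared ranging described above.
PRESET_DISTANCE_M = 1.5

def follow_user(user_position, distance_m, move_towards):
    """Command the accompanying robot to move towards the user's position
    whenever the measured distance exceeds the preset distance."""
    if distance_m > PRESET_DISTANCE_M:
        move_towards(user_position)
        return True
    return False
```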
In one possible design, the accompanying robot may further include a cleaning module, and the cleaning module is configured to control the cleaning robot connected to the accompanying robot to perform cleaning when receiving the cleaning instruction sent by the processor.
The cleaning robot can be connected with the accompanying robot in an embedded manner. Infrared positioning devices can be arranged on the main bodies of the accompanying robot and the cleaning robot, and after the cleaning robot finishes cleaning, it re-establishes connection with the accompanying robot based on the infrared positioning devices.
In a possible design, the accompanying robot can also be connected with external devices that detect the user's heart rate, blood pressure and the like; when an abnormality in the heart rate, the blood pressure or the like is detected, alarm information is sent to the pre-bound device through the alarm module.
Based on the same concept, the present application further provides a control method for an accompanying robot, and referring to fig. 7, a flow diagram of the control method for the accompanying robot provided by the embodiment of the present application is shown, and the method includes the following steps:
step 701, receiving the biological characteristic information of the user.
Step 702, predicting the emotion of the user based on the biometric information.
Step 703, inputting the predicted emotion into the pre-trained first instruction prediction model, and determining the control instruction matched with the biometric information.
Step 704, controlling the accompanying robot to perform the operation corresponding to the control instruction.
The present application further provides a control device of an accompanying robot, as shown in fig. 8, an architecture schematic diagram of the control device of the accompanying robot provided in the embodiment of the present application includes a receiving module 801, a predicting module 802, a determining module 803, and a control module 804, specifically:
a receiving module 801, configured to receive biometric information of a user;
a prediction module 802 for predicting an emotion of the user based on the biometric information;
a determining module 803, configured to input the predicted emotion into a pre-trained first instruction prediction model, and determine a control instruction matching the biometric information;
and the control module 804 is used for controlling the accompanying robot to perform an operation corresponding to the control instruction.
According to the accompanying robot, the accompanying robot control method and the accompanying robot control device provided by the application, the biological feature information of a user is collected through the biological feature collection module arranged on the accompanying robot; the processor arranged on the robot predicts the emotion of the user according to the biological feature information; and the predicted emotion is input into the pre-trained first instruction prediction model to determine the control instruction matched with the biological feature information. In this way, the determined control instruction reflects the real emotional needs of the user, and the accuracy of user demand instruction prediction is improved.
Based on the same technical concept, the embodiment of the application also provides the electronic equipment. Referring to fig. 9, a schematic structural diagram of an electronic device 900 provided in the embodiment of the present application includes a processor 901, a memory 902, and a bus 903. The memory 902 is used for storing execution instructions, and includes a memory 9021 and an external memory 9022; the memory 9021 is also referred to as an internal memory, and is configured to temporarily store operation data in the processor 901 and data exchanged with an external memory 9022 such as a hard disk, the processor 901 exchanges data with the external memory 9022 through the memory 9021, and when the electronic device 900 is operated, the processor 901 communicates with the memory 902 through the bus 903, so that the processor 901 executes the following instructions:
receiving biometric information of a user;
predicting an emotion of the user based on the biometric information;
inputting the predicted emotion into a first instruction prediction model trained in advance, and determining a control instruction matched with the biological characteristic information;
and controlling the accompanying robot to perform an operation corresponding to the control instruction.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the control method of the accompanying robot in any one of the above embodiments is performed.
Specifically, the storage medium can be a general-purpose storage medium, such as a removable disk, a hard disk, or the like, and when the computer program on the storage medium is executed, the steps of the accompanying robot control method can be executed, so that the accuracy of the demand instruction prediction of the user is improved.
The computer program product for performing the control method of the accompanying robot provided in the embodiment of the present application includes a computer-readable storage medium storing a nonvolatile program code executable by a processor, where instructions included in the program code may be used to execute the method described in the foregoing method embodiment, and specific implementation may refer to the method embodiment, and is not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present application, and are used for illustrating the technical solutions of the present application, but not limiting the same, and the scope of the present application is not limited thereto, and although the present application is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope disclosed in the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the exemplary embodiments of the present application, and are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (7)

1. An accompanying robot, characterized in that the accompanying robot comprises a biological characteristic acquisition module and a processor;
the biological characteristic acquisition module is used for acquiring biological characteristic information of a user every preset time and transmitting the biological characteristic information to the processor;
the processor is used for receiving the biological characteristic information transmitted by the biological characteristic acquisition module and predicting the emotion of the user based on the biological characteristic information; inputting the predicted emotion into a pre-trained first instruction prediction model, determining a control instruction matched with the biological characteristic information, and controlling the accompanying robot to perform an operation corresponding to the control instruction;
the accompanying robot further includes: a receiving module and a storage module;
the receiving module is used for receiving an instruction of a user and transmitting the instruction to the processor;
the processor is further configured to: when the instruction transmitted by the receiving module is received, controlling the biological characteristic acquisition module to acquire the biological characteristic information of the user; determining behavior information of the user based on the biological characteristic information, and transmitting the instruction, the time for receiving the instruction, and the behavior information to the storage module; the behavior information of the user is used for representing the activity state of the user;
the storage module is used for storing the instruction, the time for receiving the instruction and the behavior information;
the processor is further configured to train the first instruction prediction model by:
obtaining at least one historical emotion prediction result of the user and the instruction associated with each historical emotion prediction result;
inputting the historical emotion prediction result into an instruction prediction model to be trained to obtain a prediction instruction corresponding to the historical emotion prediction result;
performing the current round of training on the instruction prediction model to be trained based on the prediction instruction and the instruction associated with the historical emotion prediction result, wherein the cross entropy loss in the current round of training is determined based on the prediction instruction and the instruction associated with the historical emotion prediction result, and then the model parameters of the instruction prediction model in the current round of training are adjusted based on the cross entropy loss;
obtaining the first instruction prediction model through multi-round training of the instruction prediction model;
the processor, after receiving the biometric information, is further configured to:
determining behavior information of the user based on the biometric information;
inputting the behavior information of the user and the time for receiving the biological characteristic information into a pre-trained second instruction prediction model, determining a control instruction matched with the biological characteristic information, and controlling the accompanying robot to perform an operation corresponding to the control instruction.
2. The accompanying robot as recited in claim 1, the biometric acquisition module comprising at least one of:
the system comprises a sound acquisition module and an image acquisition module;
for the case that the biometric acquisition module comprises a sound acquisition module, the biometric information comprises sound information;
for the case where the biometric acquisition module comprises an image acquisition module, the biometric information comprises facial image information.
3. The accompanying robot as recited in claim 2, the processor, when predicting the emotion of the user based on the biometric information, being specifically configured to:
extracting voice features from the sound information, wherein the voice features comprise short-time energy, short-time zero-crossing rate, fundamental tone frequency, formant features, speech rate and Mel cepstrum coefficients;
inputting the voice features into a voice recognition submodel to obtain a first score indicating that the voice features belong to any one preset emotion;
inputting the sound information into a semantic recognition submodel, and extracting semantic keywords from the sound information;
determining, based on the semantic keywords, a second score indicating that the sound information belongs to any one preset emotion;
inputting the facial image information into a facial recognition submodel, and determining a third score indicating that the facial image information belongs to any one preset emotion;
and carrying out weighted summation on the first score, the second score and the third score according to preset weights, and determining the emotion of the user based on the summed scores.
4. The accompanying robot as recited in claim 1, wherein the processor trains the second instruction prediction model according to the following method:
acquiring an instruction stored by the storage module, time for receiving the instruction and behavior information corresponding to the instruction;
inputting the time for receiving the instruction and behavior information corresponding to the instruction into a prediction model to be trained, and outputting to obtain a prediction instruction;
performing a current round of training on the prediction model based on the instruction obtained from the storage module and the prediction instruction;
and obtaining the second instruction prediction model through multi-round training of the prediction model.
5. The accompanying robot as recited in claim 1, further comprising: an alarm module;
and the alarm module is used for comparing the behavior information with the abnormal behavior information stored in the storage module, and sending alarm information to the pre-bound equipment when the comparison is successful.
6. A method for controlling an accompanying robot, comprising:
receiving biometric information of a user;
predicting an emotion of the user based on the biometric information;
inputting the predicted emotion into a first instruction prediction model trained in advance, and determining a control instruction matched with the biological characteristic information;
controlling the accompanying robot to perform an operation corresponding to the control instruction;
obtaining at least one historical emotion prediction result of the user and the instruction associated with each historical emotion prediction result;
inputting the historical emotion prediction result into an instruction prediction model to be trained to obtain a prediction instruction corresponding to the historical emotion prediction result;
performing the current round of training on the instruction prediction model to be trained based on the prediction instruction and the instruction associated with the historical emotion prediction result, wherein the cross entropy loss in the current round of training is determined based on the prediction instruction and the instruction associated with the historical emotion prediction result, and then the model parameters of the instruction prediction model in the current round of training are adjusted based on the cross entropy loss;
obtaining the first instruction prediction model through multi-round training of the instruction prediction model; determining behavior information of the user based on the biometric information;
inputting the behavior information of the user and the time for receiving the biological characteristic information into a pre-trained second instruction prediction model, determining a control instruction matched with the biological characteristic information, and controlling the accompanying robot to perform an operation corresponding to the control instruction.
7. A control device for an accompanying robot, comprising:
the receiving module is used for receiving the biological characteristic information of the user;
a prediction module for predicting an emotion of the user based on the biometric information;
the determining module is used for inputting the predicted emotion into a first instruction prediction model trained in advance and determining a control instruction matched with the biological characteristic information;
the control module is used for controlling the accompanying robot to perform operation corresponding to the control instruction;
the control device is further configured to: obtain at least one historical emotion prediction result of the user and the instruction associated with each historical emotion prediction result;
inputting the historical emotion prediction result into an instruction prediction model to be trained to obtain a prediction instruction corresponding to the historical emotion prediction result;
performing the current round of training on the instruction prediction model to be trained based on the prediction instruction and the instruction associated with the historical emotion prediction result, wherein the cross entropy loss in the current round of training is determined based on the prediction instruction and the instruction associated with the historical emotion prediction result, and then the model parameters of the instruction prediction model in the current round of training are adjusted based on the cross entropy loss;
obtaining the first instruction prediction model through multi-round training of the instruction prediction model;
determining behavior information of the user based on the biometric information;
inputting the behavior information of the user and the time for receiving the biological characteristic information into a pre-trained second instruction prediction model, determining a control instruction matched with the biological characteristic information, and controlling the accompanying robot to perform an operation corresponding to the control instruction.
CN201910848250.3A 2019-09-09 2019-09-09 Accompanying robot, accompanying robot control method and accompanying robot control device Active CN110480656B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910848250.3A CN110480656B (en) 2019-09-09 2019-09-09 Accompanying robot, accompanying robot control method and accompanying robot control device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910848250.3A CN110480656B (en) 2019-09-09 2019-09-09 Accompanying robot, accompanying robot control method and accompanying robot control device

Publications (2)

Publication Number Publication Date
CN110480656A CN110480656A (en) 2019-11-22
CN110480656B true CN110480656B (en) 2021-09-28

Family

ID=68557031

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910848250.3A Active CN110480656B (en) 2019-09-09 2019-09-09 Accompanying robot, accompanying robot control method and accompanying robot control device

Country Status (1)

Country Link
CN (1) CN110480656B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111312221B (en) * 2020-01-20 2022-07-22 宁波舜韵电子有限公司 Intelligent range hood based on voice control
CN112060080A (en) * 2020-07-31 2020-12-11 深圳市优必选科技股份有限公司 Robot control method and device, terminal equipment and storage medium
CN113273930A (en) * 2021-06-04 2021-08-20 李侃 Floor sweeping robot integrating intelligent rescue function and control method thereof
CN113246156A (en) * 2021-07-13 2021-08-13 武汉理工大学 Child accompanying robot based on intelligent emotion recognition and control method

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100499770B1 (en) * 2004-12-30 2005-07-07 주식회사 아이오. 테크 Network based robot control system
CN101604204A (en) * 2009-07-09 2009-12-16 北京科技大学 Distributed cognitive technology for intelligent emotional robot
CN105739688A (en) * 2016-01-21 2016-07-06 北京光年无限科技有限公司 Man-machine interaction method and device based on emotion system, and man-machine interaction system
CN107103269A (en) * 2016-02-23 2017-08-29 芋头科技(杭州)有限公司 Expression feedback method and intelligent robot
CN106182032A (en) * 2016-08-24 2016-12-07 陈中流 Accompanying robot
CN108877840A (en) * 2018-06-29 2018-11-23 重庆柚瓣家科技有限公司 Emotion identification method and system based on nonlinear characteristic
CN109571494A (en) * 2018-11-23 2019-04-05 北京工业大学 Emotion identification method, apparatus and pet robot
CN109767791A (en) * 2019-03-21 2019-05-17 中国—东盟信息港股份有限公司 Voice emotion recognition and application system for call center conversations

Also Published As

Publication number Publication date
CN110480656A (en) 2019-11-22

Similar Documents

Publication Publication Date Title
CN110480656B (en) Accompanying robot, accompanying robot control method and accompanying robot control device
US10516938B2 (en) System and method for assessing speaker spatial orientation
TWI403304B (en) Method and mobile device for awareness of linguistic ability
Busso et al. Iterative feature normalization scheme for automatic emotion detection from speech
CN108197115A (en) Intelligent interactive method, device, computer equipment and computer readable storage medium
US11837249B2 (en) Visually presenting auditory information
WO2017112813A1 (en) Multi-lingual virtual personal assistant
JP7063779B2 (en) Speech dialogue system, speech dialogue method, program, learning model generator and learning model generation method
WO2017100334A1 (en) Vpa with integrated object recognition and facial expression recognition
CN111696559B (en) Providing emotion management assistance
JP2004310034A (en) Interactive agent system
KR102314213B1 (en) System and Method for detecting MCI based in AI
CN111475206B (en) Method and apparatus for waking up wearable device
US10789961B2 (en) Apparatus and method for predicting/recognizing occurrence of personal concerned context
CN109036395A (en) Personalized speaker control method, system, intelligent sound box and storage medium
CN112102850A (en) Processing method, device and medium for emotion recognition and electronic equipment
CN114708869A (en) Voice interaction method and device and electric appliance
Usman et al. Heart rate detection and classification from speech spectral features using machine learning
CN109074809B (en) Information processing apparatus, information processing method, and computer-readable storage medium
JP4631464B2 (en) Physical condition determination device and program thereof
KR20220005232A (en) Method, apparatur, computer program and computer readable recording medium for providing telemedicine service based on speech recognition
KR20230154380A (en) System and method for providing heath-care services fitting to emotion states of users by behavioral and speaking patterns-based emotion recognition results
CN114492579A (en) Emotion recognition method, camera device, emotion recognition device and storage device
Zubiaga et al. Mental Health Monitoring from Speech and Language
CN112308379A (en) Service order evaluation method, device, equipment and storage medium for home care

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant