WO2019132063A1 - Robot service learning system and method therefor - Google Patents

Robot service learning system and method therefor

Info

Publication number
WO2019132063A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
robot
data
reaction
affinity
Prior art date
Application number
PCT/KR2017/015622
Other languages
English (en)
Korean (ko)
Inventor
송세경
이상묵
조덕현
Original Assignee
(주)퓨처로봇
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)퓨처로봇
Publication of WO2019132063A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
      • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
        • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
          • B25J11/00: Manipulators not otherwise provided for
            • B25J11/0005: Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
          • B25J9/00: Programme-controlled manipulators
            • B25J9/16: Programme controls
              • B25J9/1602: Programme controls characterised by the control system, structure, architecture
              • B25J9/1628: Programme controls characterised by the control loop
                • B25J9/163: Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
              • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N20/00: Machine learning
          • G06N99/00: Subject matter not provided for in other groups of this subclass
      • G10: MUSICAL INSTRUMENTS; ACOUSTICS
        • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
          • G10L15/00: Speech recognition
            • G10L15/02: Feature extraction for speech recognition; Selection of recognition unit

Definitions

  • The present invention relates to a robot service learning system based on interaction between a user and a robot, and to a method thereof.
  • Intelligent service robots provide various services to people in the home and in a range of industrial fields, combine organically with the social networks of the information age, and serve as a human-friendly interface through which home appliances can be controlled remotely.
  • Unlike other robots, service robots perform most of their tasks while interacting with people.
  • A housekeeping robot, for example, performs the routine housekeeping tasks assigned to it, processes tasks requested by a person in real time, and then reports the result to the person or delivers the processed output. Producing a service robot therefore requires not only general robot manufacturing techniques, such as walking or moving, but also techniques for handling interaction according to a person's situation or emotion.
  • An object of the present invention is to provide an enhanced user-customized robot service by understanding the user's emotions and reactions through interaction between the user and the robot, and by adjusting and learning robot behavior probabilities suited to that user.
  • According to one embodiment, a robot service learning system includes: a recognition data collection unit for collecting user recognition data indicating the user's emotion; a reaction data selection instruction unit for selecting robot reaction data corresponding to the user recognition data from a robot reaction database and instructing its execution; and a machine learning unit for analyzing the user's interest, intimacy, and affinity toward the robot from user reaction data indicating the user's reaction to the executed robot reaction data, and adjusting the selection probability of the robot reaction data. A sketch of this loop follows.
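  • As a non-authoritative illustration of the loop described above, the following minimal Python sketch pairs each of the three units with a method. All names here (RobotServiceLearner, reaction_db, prob, the sensors interface) are hypothetical, and the scoring and selection details are illustrative assumptions rather than the claimed implementation.

    import random

    class RobotServiceLearner:
        def __init__(self, reaction_db):
            # reaction_db: {emotion: [{"action": str, "prob": float}, ...]}
            self.reaction_db = reaction_db

        def collect_recognition_data(self, sensors):
            # Recognition data collection unit: gather cues that indicate
            # the user's emotion (expression, voice tone, motion).
            return {"expression": sensors.expression(),
                    "voice_tone": sensors.voice_tone(),
                    "motion": sensors.motion()}

        def select_reaction(self, emotion):
            # Reaction data selection instruction unit: pick a reaction for
            # the recognized emotion, weighted by its learned probability.
            candidates = self.reaction_db[emotion]
            weights = [c["prob"] for c in candidates]
            return random.choices(candidates, weights=weights, k=1)[0]

        def learn(self, reaction, feedback_score):
            # Machine learning unit: nudge the executed reaction's selection
            # probability up or down according to the user's reaction.
            reaction["prob"] = max(0.01, reaction["prob"] + 0.1 * feedback_score)

  • In this sketch, a positive feedback_score makes the same reaction more likely to be chosen next time, which is the selection probability adjustment elaborated below.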
  • The user recognition data may include at least one of the user's facial expression, voice content, voice tone, and motion.
  • The user reaction data may include implicit feedback data, including at least one of the user's facial expression, voice content, voice tone, breathing volume, pulse, and line of sight; and explicit feedback data indicating approval or disapproval and an evaluation value received directly from the user.
  • The system may further include a user identification unit for identifying the user through the user recognition data and classifying the user as a registered or unregistered user, and a reaction data collection unit for collecting the user reaction data.
  • The reaction data selection instruction unit may compare the user recognition data with previously stored recognition reference data and, if the user recognition data does not fall within the recognition reference data, instruct the robot to re-induce a user reaction that does.
  • The machine learning unit may include an interest and intimacy analysis unit for analyzing the user's interest in and intimacy with the robot based on the user's reaction time to the executed robot reaction data and the accumulated interaction time between the robot and the user;
  • an affinity analysis unit for assigning an affinity score to the robot reaction data according to a predetermined affinity criterion and labeling the robot reaction data with that score;
  • an analysis result storage unit for classifying the interest and intimacy analysis results according to registered and unregistered users, and storing them;
  • a reaction data selection probability adjustment unit for adjusting the expression intensity and frequency of the robot reaction according to the interest and intimacy analysis results and the affinity score, and adjusting the selection probability of the robot reaction; and
  • a reaction data setting unit for setting, with respect to the user reaction data, the robot reaction data whose selection probability has been adjusted through the reaction data selection probability adjustment unit.
  • The reaction data selection probability adjustment unit may gradually increase the expression intensity and frequency of robot reaction data for which the interest, intimacy, and affinity results are higher, and gradually decrease the expression intensity and frequency of robot reaction data for which they are lower.
  • According to another embodiment, a robot service learning method includes: a recognition data collection step of collecting user recognition data indicating the user's emotion; a reaction data selection instruction step of selecting robot reaction data corresponding to the user recognition data from the robot reaction database and instructing its execution; and a machine learning step of analyzing the user's interest, intimacy, and affinity toward the robot from user reaction data indicating the user's reaction to the executed robot reaction data, and adjusting the selection probability of the robot reaction data.
  • The user reaction data may include implicit feedback data, including at least one of the user's facial expression, voice content, voice tone, breathing volume, pulse, and line of sight; and explicit feedback data indicating approval or disapproval and an evaluation value received directly from the user.
  • The method may further include a user identification step of identifying the user through the user recognition data and classifying the user as registered or unregistered,
  • and a reaction data collection step of collecting the user reaction data after the reaction data selection instruction step.
  • The reaction data selection instruction step may include comparing the user recognition data with previously stored recognition reference data and, if the user recognition data does not fall within the recognition reference data, instructing the robot to re-induce a user reaction that does.
  • The machine learning step may include an interest and intimacy analysis step of analyzing the user's interest in and intimacy with the robot based on the user's reaction time to the executed robot reaction data and the accumulated interaction time between the robot and the user;
  • an affinity analysis step of assigning an affinity score to the robot reaction data according to a predetermined affinity criterion and labeling the robot reaction data with that score;
  • an analysis result storage step of classifying the interest and intimacy analysis results according to registered and unregistered users, and storing them;
  • a reaction data selection probability adjustment step of adjusting the expression intensity and frequency of the robot reaction according to the interest and intimacy analysis results and the affinity score, and adjusting the selection probability of the robot reaction; and
  • a reaction data setting step of setting, with respect to the user reaction data, the robot reaction data whose selection probability has been adjusted through the reaction data selection probability adjustment step.
  • The reaction data selection probability adjustment step may gradually increase the expression intensity and frequency of robot reaction data for which the interest, intimacy, and affinity results are higher, and gradually decrease the expression intensity and frequency of robot reaction data for which they are lower.
  • According to the present invention, the user's emotions and reactions can be understood through interaction between the user and the robot, and robot behavior probabilities suited to the user can be adjusted and learned, thereby providing an enhanced user-customized robot service.
  • FIG. 1 is a view showing a robot to which a robot service learning system according to an embodiment of the present invention is applied.
  • FIG. 2 is a block diagram illustrating a configuration of a robot service learning system according to an embodiment of the present invention.
  • FIG. 3 is a block diagram showing a configuration of a machine learning unit according to an embodiment of the present invention.
  • FIG. 4 is a flowchart illustrating a robot service learning method according to an embodiment of the present invention.
  • FIG. 5 is a flowchart illustrating a machine learning step according to an embodiment of the present invention.
  • FIG. 6 is a flowchart illustrating a detailed configuration of a robot service learning method according to an embodiment of the present invention.
  • FIG. 1 is a view showing a robot to which a robot service learning system according to an embodiment of the present invention is applied.
  • The robot 10 may include a head unit 11 and a body unit 12.
  • The head unit 11 may include a head display unit 11a, a head driving unit 11b, a user recognition sensor unit 11c, and a user detection sensor unit 11d.
  • The head display unit 11a is installed on the face of the robot 10 to graphically display an avatar face and express an appropriate avatar expression according to the state of the robot 10 or the reaction of the user 1. When the robot 10 outputs a voice, the avatar's mouth shape can also be animated to match the voice.
  • The head driving unit 11b is a means for driving the neck joint connecting the head unit 11 and the body unit 12.
  • The head driving unit 11b may be driven to realize an appropriate gesture according to the reaction of the robot 10.
  • The user recognition sensor unit 11c can sense facial expression information, voice information, and the like of the user 1.
  • The user recognition sensor unit 11c may include hardware means, such as a camera sensor and a microphone, and software means for classifying the data sensed by the hardware means into various types of user recognition data according to predetermined characteristics.
  • The user detection sensor unit 11d can detect whether the user 1 is approaching, as well as the user's motion (action).
  • The user detection sensor unit 11d may include hardware means, such as a proximity sensor and a camera sensor, and software means for classifying the data sensed by the hardware means into various types of user recognition data according to predetermined characteristics.
  • The user recognition data classified through the user recognition sensor unit 11c and the user detection sensor unit 11d is collected and analyzed by the robot service learning system 100 of the present embodiment, and based on the analysis results, reinforcement learning can be performed to provide an optimal robot service according to the user's situation and emotions.
  • The body unit 12 may include at least one of an arm driving unit 12a, a waist driving unit 12b, an obstacle detection sensor unit 12c, a moving unit 12d, and a touch screen 12e.
  • The arm driving unit 12a is a means for driving the arm joints of the robot 10 and may be driven to realize an appropriate gesture corresponding to the reaction of the robot 10.
  • The waist driving unit 12b is a means for driving the waist joint of the robot 10 and may be driven to realize an appropriate gesture corresponding to the reaction of the robot 10.
  • The obstacle detection sensor unit 12c may include an ultrasonic sensor for sensing obstacles while the robot 10 is moving.
  • The moving unit 12d may move or turn toward the user 1 upon detecting the user 1, or move to avoid an obstacle upon detecting one.
  • The moving unit 12d may include a driving motor, wheels, and the like.
  • The touch screen 12e may be installed in front of the arm driving unit 12a or on the body unit 12, and may be used to retrieve or input information needed from the user 1 or to output various information.
  • The configuration of the robot 10 described above is merely one example to which the robot service learning system 100 of the present embodiment can be applied, and it can be changed into various structures and forms.
  • FIG. 2 is a block diagram illustrating the configuration of a robot service learning system according to an embodiment of the present invention.
  • FIG. 3 is a block diagram illustrating a configuration of a machine learning unit according to an embodiment of the present invention.
  • The robot service learning system 100 based on interaction between a user and a robot according to the present invention includes a recognition data collection unit 110, a user identification unit 120, a reaction data selection instruction unit 130, a reaction data collection unit 140, and a machine learning unit 150.
  • The recognition data collection unit 110 may collect user recognition data indicating the user's emotion.
  • The user recognition data may include at least one of the user's facial expression, voice content, voice tone, and motion, and may be collected from the user recognition sensor unit 11c and the user detection sensor unit 11d of the robot 10.
  • The user identification unit 120 identifies the user from the user recognition data collected through the recognition data collection unit 110 and classifies the user as known or unknown, so that registered and unregistered users can be distinguished.
  • A registered user means a user registered in advance to receive the service of the robot 10.
  • An unregistered user means a user who is receiving the service of the robot 10 for the first time, or a user not registered in the user database.
  • The user database may be constructed separately in memory for providing the service of the robot 10.
  • The user identification unit 120 may manage an unregistered user as a registered user once the robot service learning process of the present embodiment has been performed.
  • The reaction data selection instruction unit 130 can select robot reaction data corresponding to the user recognition data from the robot reaction database (not shown) and instruct its execution. For example, when it is determined from the user's facial expression, voice content, voice tone, motion, and so on contained in the user recognition data that the user is currently pleased, robot reaction data appropriate to that state is selected, and the robot 10 is instructed to execute it.
  • To this end, the user identification unit 120 may first identify whether the subject of the current user recognition data is a registered or unregistered user. For a registered user, learned robot reaction data is already set according to that user's identified situation and emotion, so the set robot reaction data is selected and the robot 10 executes a specific action or reaction accordingly. That is, the reaction of the robot 10 for each situation or emotional state is set per registered user.
  • For an unregistered user, common robot reaction data can be selected according to the unidentified user's situation and emotion, and the robot 10 can be instructed to execute actions or reactions accordingly. That is, for unregistered users, reactions of the robot 10 that are commonly applicable to a given situation or emotional state are set. For example, the robot 10 can be instructed to select a commonly set reaction for the user's situation or emotion of 'pleasure'.
  • The reaction data selection instruction unit 130 compares the user recognition data with previously stored recognition reference data and, when the user recognition data does not fall within the recognition reference data, can instruct the robot 10 to re-induce a user reaction that does. That is, the reaction data selection instruction unit 130 holds recognition criteria set in advance for the user's emotional states and compares the collected user recognition data against them to judge, classify, or identify an emotional state. If the data cannot be identified (that is, it is not suitable as user recognition data), the robot 10 can be instructed to re-induce the user's facial expression, voice, motion, and so on so that the user's emotional state can be clearly recognized. For example, if recognition data for the user's facial expression has been collected but a smiling expression cannot be clearly distinguished from an angry one in that data, the robot 10 can prompt the user to make the expression once again. A sketch of this check follows.
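  • The following sketch illustrates that check under assumed names; the confidence threshold, the toy similarity measure, and the robot.say prompt are all illustrative assumptions, not the patented method.

    RECOGNITION_THRESHOLD = 0.7  # hypothetical confidence cutoff

    def similarity(data: dict, reference: dict) -> float:
        # Toy measure: fraction of reference cues present in the data.
        matches = sum(1 for k, v in reference.items() if data.get(k) == v)
        return matches / max(1, len(reference))

    def classify_emotion(recognition_data: dict, reference_data: dict):
        # Return (emotion, confidence) for the best-matching reference.
        best_emotion, best_score = None, 0.0
        for emotion, reference in reference_data.items():
            score = similarity(recognition_data, reference)
            if score > best_score:
                best_emotion, best_score = emotion, score
        return best_emotion, best_score

    def handle_recognition(robot, recognition_data, reference_data):
        emotion, confidence = classify_emotion(recognition_data, reference_data)
        if confidence < RECOGNITION_THRESHOLD:
            # Not identifiable as user recognition data: ask the robot to
            # re-induce a clearer user reaction.
            robot.say("Could you show me that expression once more?")
            return None
        return emotion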
  • The reaction data collection unit 140 may collect user reaction data indicating the user's reaction to the execution of the robot reaction data.
  • The user reaction data refers to data on how the user responds when the robot 10 has reacted. This data can be classified into implicit feedback data and explicit feedback data.
  • The implicit feedback data may include at least one of the user's facial expression, voice content, voice tone, breathing volume, pulse, and line of sight. Feedback data such as the facial expression, voice content, and voice tone can be collected from the user recognition sensor unit 11c and the user detection sensor unit 11d, while feedback data such as the user's breathing volume and pulse can be collected in association with a wearable device worn by the user or a health application on the user's portable terminal.
  • When the robot 10 of the present embodiment is a home robot used only by a small number of specific individuals,
  • information registered in the individual user's account, and schedule information obtained through the SNS and schedule applications installed on that user's portable terminal, may additionally be collected as implicit feedback data.
  • This additional implicit feedback data may also be converted into numerical values that affect the selection, or selection probability, of the robot reaction data.
  • The explicit feedback data indicates approval or disapproval and refers to an evaluation value input directly by the user.
  • For example, selection buttons indicating the positive and negative states 'like' and 'dislike' can be shown to the user through the touch screen 12e, and a positive or negative evaluation value can be input by receiving a single button press, as in the sketch below.
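  • A minimal sketch of this explicit feedback path, assuming a simple button-to-value mapping (the +1/-1 values are illustrative):

    def explicit_feedback_value(button: str) -> int:
        # Convert a touch-screen button press into an evaluation value.
        return {"like": +1, "dislike": -1}.get(button, 0)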
  • The machine learning unit 150 may adjust the selection probability of the robot reaction data by analyzing, from the user reaction data, the interest (or concentration), intimacy, and affinity of the user 1 toward the robot 10.
  • Referring to FIG. 3, the machine learning unit 150 may include an interest and intimacy analysis unit 151, an affinity analysis unit 152, an analysis result storage unit 153, a reaction data selection probability adjustment unit 154, and a reaction data setting unit 155.
  • The interest and intimacy analysis unit 151 analyzes the user's interest (or concentration) and intimacy based on the reaction time of the user 1 to the executed robot reaction data and the accumulated interaction time between the robot 10 and the user 1.
  • The interest can be calculated as a value for each factor by using reference time tables for the reaction time of the user 1 and for the cumulative interaction time between the robot 10 and the user 1.
  • The intimacy may be weighted according to the numerical range of the interest (or concentration) and calculated to a predetermined level according to the cumulative weight. The intimacy level can also be changed by the affinity score, not only by the interest (or concentration). For example, the intimacy for certain user reaction data may be weighted according to the amount of increase or decrease each time the affinity score for that reaction data rises or falls by a predetermined unit, and the level may be changed according to the accumulated weight, as in the sketch below.
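  • The sketch below shows one way such a reference time table, weighting, and leveling could be arranged; every table entry, coefficient, and threshold is an assumption made for illustration.

    INTEREST_TABLE = [(1.0, 5), (3.0, 3), (6.0, 1)]      # (max reaction time s, score)
    INTIMACY_LEVELS = [(20.0, 3), (10.0, 2), (3.0, 1)]   # (min cumulative weight, level)

    def interest_score(reaction_time_s: float) -> int:
        # Faster reactions map to higher interest via the reference table.
        for max_t, score in INTEREST_TABLE:
            if reaction_time_s <= max_t:
                return score
        return 0

    def intimacy_weight(interest: int, affinity_delta: int) -> float:
        # The weight grows with the interest range and with each unit
        # increase or decrease of the affinity score.
        return 0.5 * interest + 0.2 * affinity_delta

    def intimacy_level(cumulative_weight: float) -> int:
        # The accumulated weight is mapped to a discrete intimacy level.
        for min_w, level in INTIMACY_LEVELS:
            if cumulative_weight >= min_w:
                return level
        return 0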
  • The affinity analysis unit 152 may assign an affinity score to the robot reaction data according to a predetermined affinity criterion and label the robot reaction data with that score. More specifically, the affinity analysis unit 152 determines whether the user's facial expression, voice content, voice tone, breathing volume, pulse, and gaze included in the user reaction data contain positive or negative elements, and matches and stores an affinity score with each item of robot reaction data that induced the user reaction, as sketched below.
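  • A sketch of such scoring and labeling, with stand-in cue sets in place of a real affinity criterion:

    POSITIVE_CUES = {"smile", "calm_tone", "steady_gaze"}
    NEGATIVE_CUES = {"frown", "raised_voice", "averted_gaze"}

    def affinity_score(user_reaction_cues: set) -> int:
        # Positive elements raise the score; negative elements lower it.
        return (len(user_reaction_cues & POSITIVE_CUES)
                - len(user_reaction_cues & NEGATIVE_CUES))

    def label_reaction(robot_reaction: dict, user_reaction_cues: set) -> None:
        # Match and store the score with the reaction that induced it.
        robot_reaction["affinity"] = affinity_score(user_reaction_cues)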
  • The analysis result storage unit 153 may classify and store the interest and intimacy analysis results separately for registered and unregistered users. More specifically, for a registered user, the interest and intimacy analysis values can be stored per robot reaction data in a robot reaction database managed for that user, so that a persona of the robot 10 can be formed according to the user's interest and intimacy levels.
  • The affinity score may be stored as a label together with the stored robot reaction data.
  • For unregistered users, robot reaction data with high interest and intimacy levels can be stored as a common robot behavior model. The interest and intimacy analysis results can be used to control the selection probability of the robot reaction data.
  • The reaction data selection probability adjustment unit 154 may adjust the expression intensity and frequency of the robot reaction according to the interest and intimacy analysis results and the affinity score, and may adjust the selection probability of the robot reaction. More specifically, it can increase the expression intensity and frequency of robot reaction data with high interest, intimacy, and affinity, and reduce the expression intensity and frequency of robot reaction data with low interest, intimacy, and affinity.
  • For example, when interaction between the user 1 and the robot 10 is performed continuously and, as the interaction progresses over time, the interest, intimacy, and affinity for robot reactions related to 'enjoyment' gradually increase, the expression intensity of the robot's 'enjoyment' reactions can be gradually raised, their expression frequency gradually increased, and their selection probability gradually increased.
  • Increasing the selection probability of a robot reaction means that a specific reaction of the robot 10 becomes more likely to be selected for the user's specific situation and emotional state; this probability can be updated through machine learning as the interaction between the user 1 and the robot 10 is repeatedly performed.
  • Conversely, if the interest, intimacy, and affinity gradually decrease, the expression intensity of the robot's 'pleasure' reactions can be gradually lowered, their expression frequency gradually decreased, and their selection probability lowered.
  • Lowering the selection probability of a robot reaction means that a specific reaction of the robot 10 becomes less likely to be selected for the user's specific situation and emotional state; this probability can likewise be updated through machine learning as the interaction between the user 1 and the robot 10 is repeated.
  • In addition, the reaction data selection probability adjustment unit 154 may adjust the selection probability of each reaction of the robot 10 so that the interest and intimacy between the user 1 and the robot 10 are maintained at or above a predetermined threshold. That is, the interaction between the user 1 and the robot 10 is managed along a time axis, the interest and intimacy levels are monitored for falling below the threshold, and if a value does fall below it, the expression intensity and frequency of the robot reaction are adjusted so that the levels are restored to the threshold or above. A sketch of this rule follows.
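  • A sketch of this adjustment rule with an engagement floor; the step sizes, multipliers, and threshold are illustrative assumptions, not values from the disclosure.

    INTEREST_FLOOR = 1.0  # hypothetical threshold

    def adjust_reaction(reaction: dict, interest: float,
                        intimacy: float, affinity: int) -> None:
        # Raise intensity, frequency, and selection probability for
        # well-received reactions; lower them for poorly received ones.
        up = (interest + intimacy + affinity) > 0
        step = 0.1 if up else -0.1
        reaction["intensity"] = max(0.1, reaction["intensity"] + step)
        reaction["frequency"] = max(0.1, reaction["frequency"] + step)
        reaction["prob"] = max(0.01, reaction["prob"] * (1.1 if up else 0.9))
        if interest < INTEREST_FLOOR:
            # Interest fell below the threshold: push expression intensity
            # and frequency back up to recover engagement.
            reaction["intensity"] += 0.2
            reaction["frequency"] += 0.2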
  • The reaction data setting unit 155 may set, for each item of user reaction data, the robot reaction data whose selection probability has been adjusted through the reaction data selection probability adjustment unit 154. That is, at least one item of robot reaction data is set for any given user reaction data, and as the selection probabilities of the multiple robot reaction data items continuously change, the data can be set so that the most appropriate robot reaction is selected for the user reaction, as sketched below.
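  • A sketch of that per-state setting: the adjusted candidate probabilities are renormalized so that the best-fitting reaction becomes the most likely pick (the table layout is an assumption).

    def set_reactions(table: dict, user_state: str, candidates: list) -> None:
        # Renormalize the adjusted probabilities for this user state and
        # keep the candidates ordered by selection probability.
        total = sum(c["prob"] for c in candidates) or 1.0
        for c in candidates:
            c["prob"] = c["prob"] / total
        table[user_state] = sorted(candidates, key=lambda c: -c["prob"])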
  • FIG. 4 is a flowchart illustrating a robot service learning method according to an embodiment of the present invention.
  • FIG. 5 is a flowchart illustrating a machine learning step according to an embodiment of the present invention.
  • FIG. 6 is a flowchart showing a detailed configuration of the robot service learning method.
  • The robot service learning method (S100) based on interaction between a user and a robot according to the present invention includes a recognition data collection step (S110), a user identification step (S120), a reaction data selection instruction step (S130), a reaction data collection step (S140), and a machine learning step (S150).
  • In the recognition data collection step (S110), user recognition data indicating the user's emotions may be collected.
  • The user recognition data may include at least one of the user's facial expression, voice content, voice tone, and motion, and may be collected from the user recognition sensor unit 11c and the user detection sensor unit 11d of the robot 10.
  • In the user identification step (S120), the user is identified from the user recognition data collected in the recognition data collection step (S110) and classified as known or unknown, so that registered and unregistered users can be distinguished.
  • A registered user means a user registered in advance to receive the service of the robot 10.
  • An unregistered user means a user who is receiving the service of the robot 10 for the first time, or a user not registered in the user database.
  • The user database may be constructed separately in memory for providing the service of the robot 10.
  • An unregistered user can be managed as a registered user once the robot service learning method (S100) of the present embodiment has been performed.
  • In the reaction data selection instruction step (S130), robot reaction data corresponding to the user recognition data may be selected from the robot reaction database and instructed to be executed. For example, when it is determined from the user's facial expression, voice content, voice tone, motion, and so on contained in the user recognition data that the user is currently pleased, robot reaction data appropriate to that state is selected, and the robot 10 is instructed to execute it.
  • Before this, the user identification step (S120) may be performed to identify whether the subject of the current user recognition data is a registered or unregistered user. For a registered user, learned robot reaction data is already set according to that user's identified situation and emotion, so the set robot reaction data is selected and the robot 10 executes a specific action or reaction accordingly. That is, the reaction of the robot 10 for each situation or emotional state is set per registered user.
  • For an unregistered user, common robot reaction data can be selected according to the unidentified user's situation and emotion, and the robot 10 can be instructed to execute actions or reactions accordingly. That is, for unregistered users, reactions of the robot 10 that are commonly applicable to a given situation or emotional state are set. For example, the robot 10 can be instructed to select a commonly set reaction for the user's situation or emotion of 'pleasure'.
  • In the reaction data selection instruction step (S130), the user recognition data is compared with previously stored recognition reference data and, when the user recognition data does not fall within the recognition reference data, the robot 10 can be instructed to re-induce a user reaction that does. That is, in the reaction data selection instruction step (S130), recognition criteria are set in advance for the user's emotional states, and the collected user recognition data is compared against them to judge, classify, or identify an emotional state. If the data cannot be identified (that is, it is not suitable as user recognition data), the robot 10 can be instructed to re-induce the user's facial expression, voice, motion, and so on so that the user's emotional state can be clearly recognized. For example, if recognition data for the user's facial expression has been collected but a smiling expression cannot be clearly distinguished from an angry one in that data, the robot 10 can prompt the user to make the expression once again.
  • In the reaction data collection step (S140), user reaction data indicating the user's reaction to the execution of the robot reaction data can be collected.
  • The user reaction data refers to data on how the user responds when the robot 10 has reacted. This data can be classified into implicit feedback data and explicit feedback data.
  • The implicit feedback data may include at least one of the user's facial expression, voice content, voice tone, breathing volume, pulse, and line of sight. Feedback data such as the facial expression, voice content, and voice tone can be collected from the user recognition sensor unit 11c and the user detection sensor unit 11d, while feedback data such as the user's breathing volume and pulse can be collected in association with a wearable device worn by the user or a health application on the user's portable terminal.
  • When the robot 10 of the present embodiment is a home robot used only by a small number of specific individuals,
  • information registered in the individual user's account, and schedule information obtained through the SNS and schedule applications installed on that user's portable terminal, may additionally be collected as implicit feedback data.
  • This additional implicit feedback data may also be converted into numerical values that affect the selection, or selection probability, of the robot reaction data.
  • The explicit feedback data indicates approval or disapproval and refers to an evaluation value input directly by the user.
  • For example, selection buttons indicating the positive and negative states 'like' and 'dislike' can be shown to the user through the touch screen 12e, and a positive or negative evaluation value can be input by receiving a single button press.
  • In the machine learning step (S150), the interest (or concentration), intimacy, and affinity of the user 1 toward the robot 10 can be analyzed from the user reaction data, and the selection probability of the robot reaction data can be adjusted.
  • The machine learning step (S150) may include an interest and intimacy analysis step (S151), an affinity analysis step (S152), an analysis result storage step (S153), a reaction data selection probability adjustment step (S154), and a reaction data setting step (S155).
  • In the interest and intimacy analysis step (S151), the user's interest (or concentration) and intimacy are analyzed based on the reaction time of the user 1 to the executed robot reaction data and the accumulated interaction time between the robot 10 and the user 1.
  • The interest can be calculated as a value for each factor by using reference time tables for the reaction time of the user 1 and for the cumulative interaction time between the robot 10 and the user 1.
  • The intimacy may be weighted according to the numerical range of the interest (or concentration) and calculated to a predetermined level according to the cumulative weight. The intimacy level can also be changed by the affinity score, not only by the interest (or concentration). For example, the intimacy for certain user reaction data may be weighted according to the amount of increase or decrease each time the affinity score for that reaction data rises or falls by a predetermined unit, and the level may be changed according to the accumulated weight.
  • In the affinity analysis step (S152), an affinity score may be assigned to the robot reaction data according to a predetermined affinity criterion, and the robot reaction data may be labeled with that score. More specifically, in the affinity analysis step (S152), it is determined whether the user's facial expression, voice content, voice tone, breathing volume, pulse, and gaze included in the user reaction data contain positive or negative elements, and an affinity score is matched and stored with each item of robot reaction data that induced the user reaction.
  • In the analysis result storage step (S153), the interest and intimacy analysis values can be classified and stored separately for registered and unregistered users. More specifically, for a registered user, the interest and intimacy analysis values can be stored per robot reaction data in a robot reaction database managed for that user, so that a persona of the robot 10 can be formed according to the user's interest and intimacy levels.
  • The affinity score may be stored as a label together with the stored robot reaction data.
  • For unregistered users, robot reaction data with high interest and intimacy levels can be stored as a common robot behavior model. The interest and intimacy analysis results can be used to control the selection probability of the robot reaction data.
  • In the reaction data selection probability adjustment step (S154), the expression intensity and frequency of the robot reaction may be adjusted according to the interest and intimacy analysis results and the affinity score, and the selection probability of the robot reaction may be adjusted. More specifically, the expression intensity and frequency of robot reaction data with high interest, intimacy, and affinity can be increased, and those of robot reaction data with low interest, intimacy, and affinity can be reduced.
  • For example, when interaction between the user 1 and the robot 10 is performed continuously and, as the interaction progresses over time, the interest, intimacy, and affinity for robot reactions related to 'enjoyment' gradually increase, the expression intensity of the robot's 'enjoyment' reactions can be gradually raised, their expression frequency gradually increased, and their selection probability gradually increased.
  • Increasing the selection probability of a robot reaction means that a specific reaction of the robot 10 becomes more likely to be selected for the user's specific situation and emotional state; this probability can be updated through machine learning as the interaction between the user 1 and the robot 10 is repeatedly performed.
  • Conversely, if the interest, intimacy, and affinity gradually decrease, the expression intensity of the robot's 'pleasure' reactions can be gradually lowered, their expression frequency gradually decreased, and their selection probability lowered.
  • Lowering the selection probability of a robot reaction means that a specific reaction of the robot 10 becomes less likely to be selected for the user's specific situation and emotional state; this probability can likewise be updated through machine learning as the interaction between the user 1 and the robot 10 is repeated.
  • In addition, the selection probability of each reaction of the robot 10 may be adjusted so that the interest and intimacy between the user 1 and the robot 10 are maintained at or above a predetermined threshold. That is, the interaction between the user 1 and the robot 10 is managed along a time axis, the interest and intimacy levels are monitored for falling below the threshold, and if a value does fall below it, the expression intensity and frequency of the robot reaction are adjusted so that the levels are restored to the threshold or above.
  • In the reaction data setting step (S155), the robot reaction data whose selection probability has been adjusted through the reaction data selection probability adjustment step (S154) can be set for each item of user reaction data. That is, at least one item of robot reaction data is set for any given user reaction data, and as the selection probabilities of the multiple robot reaction data items continuously change, the data can be set so that the most appropriate robot reaction is selected for the user reaction.
  • The robot service learning system and method described above are only embodiments of the present invention, and the present invention is not limited to them; it will be understood by those of ordinary skill in the art that various changes in form and detail may be made without departing from the spirit and scope of the present invention.
  • 11a: head display unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The present invention relates to a robot service learning system and a method thereof. The technical problem to be solved is to understand a user's emotions and reactions through interaction between the user and a robot, and to adjust and learn robot behavior probabilities suited to the user, so as to provide an enhanced user-customized robot service. According to one embodiment, a robot service learning system comprises: a recognition data collection unit configured to collect user recognition data indicating a user's emotions; a reaction data selection instruction unit configured to select, from a robot reaction database, robot reaction data according to the user recognition data and to instruct its execution; and a machine learning unit configured to analyze the user's interest, intimacy, and affinity toward the robot from user reaction data indicating the user's reactions to the execution of the robot reaction data, and to adjust the selection probabilities of the robot reaction data.
PCT/KR2017/015622 2017-12-27 2017-12-28 Système d'apprentissage de service de robot et procédé associé WO2019132063A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2017-0181329 2017-12-27
KR1020170181329A KR102140292B1 (ko) 2017-12-27 2017-12-27 로봇 서비스 학습 시스템 및 그 방법

Publications (1)

Publication Number Publication Date
WO2019132063A1 true WO2019132063A1 (fr) 2019-07-04

Family

ID=67063881

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/015622 WO2019132063A1 (fr) 2017-12-27 2017-12-28 Système d'apprentissage de service de robot et procédé associé

Country Status (2)

Country Link
KR (1) KR102140292B1 (fr)
WO (1) WO2019132063A1 (fr)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102196490B1 (ko) * 2020-05-06 2020-12-30 주식회사 서큘러스 인터렉션 로봇 및 사용자 정서 케어를 위한 인터렉션 방법
US20220357721A1 (en) * 2021-05-10 2022-11-10 Bear Robotics, Inc. Method, system, and non-transitory computer-readable recording medium for controlling a robot
KR20230057034A (ko) * 2021-10-21 2023-04-28 삼성전자주식회사 로봇 및 이의 제어 방법
KR102506563B1 (ko) 2021-10-22 2023-03-08 (주)인티그리트 자율 주행 기기를 이용한 사용자 맞춤 정보 제공 시스템
WO2024147683A1 (fr) * 2023-01-07 2024-07-11 유한회사 닥터다비드 Système et procédé de traitement d'informations utilisant l'intelligence collective
KR102668933B1 (ko) * 2023-08-10 2024-05-28 (주)로보케어 맞춤형 서비스 제공 장치 및 방법

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100429976B1 (ko) * 2002-02-01 2004-05-03 엘지전자 주식회사 로봇의 행동학습방법
KR100919095B1 (ko) * 2008-01-18 2009-09-28 주식회사 케이티 사용자 자극행동에 따른 로봇 반응행동 수행 방법 및 그로봇
KR101336641B1 (ko) * 2012-02-14 2013-12-16 (주) 퓨처로봇 감성 교감 로봇 서비스 시스템 및 그 방법
KR101618512B1 (ko) * 2015-05-06 2016-05-09 서울시립대학교 산학협력단 가우시안 혼합모델을 이용한 화자 인식 시스템 및 추가 학습 발화 선택 방법
JP6150429B2 (ja) * 2013-09-27 2017-06-21 株式会社国際電気通信基礎技術研究所 ロボット制御システム、ロボット、出力制御プログラムおよび出力制御方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101016381B1 (ko) 2009-01-19 2011-02-21 한국과학기술원 사람과 상호작용이 가능한 감정 표현 로봇
KR101239274B1 (ko) * 2009-07-06 2013-03-06 한국전자통신연구원 상호작용성 로봇
KR20130047276A (ko) 2011-10-31 2013-05-08 주식회사 아이리버 로봇 상호 간의 감정표현방법
KR101772583B1 (ko) * 2012-12-13 2017-08-30 한국전자통신연구원 사용자 상호작용 서비스를 위한 로봇의 동작 방법

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100429976B1 (ko) * 2002-02-01 2004-05-03 엘지전자 주식회사 로봇의 행동학습방법
KR100919095B1 (ko) * 2008-01-18 2009-09-28 주식회사 케이티 사용자 자극행동에 따른 로봇 반응행동 수행 방법 및 그로봇
KR101336641B1 (ko) * 2012-02-14 2013-12-16 (주) 퓨처로봇 감성 교감 로봇 서비스 시스템 및 그 방법
JP6150429B2 (ja) * 2013-09-27 2017-06-21 株式会社国際電気通信基礎技術研究所 ロボット制御システム、ロボット、出力制御プログラムおよび出力制御方法
KR101618512B1 (ko) * 2015-05-06 2016-05-09 서울시립대학교 산학협력단 가우시안 혼합모델을 이용한 화자 인식 시스템 및 추가 학습 발화 선택 방법

Also Published As

Publication number Publication date
KR20190079255A (ko) 2019-07-05
KR102140292B1 (ko) 2020-07-31

Similar Documents

Publication Publication Date Title
WO2019132063A1 (fr) Système d'apprentissage de service de robot et procédé associé
CN110291478B (zh) 驾驶员监视和响应系统
WO2020213762A1 (fr) Dispositif électronique, procédé de fonctionnement de celui-ci et système comprenant une pluralité de dispositifs d'intelligence artificielle
EP0919906B1 (fr) Méthode de contrôle
WO2010126321A2 (fr) Appareil et procédé pour inférence d'intention utilisateur au moyen d'informations multimodes
WO2018082626A1 (fr) Procédé de mise en œuvre d'un système de réalité virtuelle et dispositif de réalité virtuelle
WO2023153604A1 (fr) Procédé de soins d'état psychologique basé sur des informations biométriques et appareil le mettant en oeuvre
WO2018117514A1 (fr) Robot d'aéroport et son procédé de mouvement
WO2020153785A1 (fr) Dispositif électronique et procédé pour fournir un objet graphique correspondant à des informations d'émotion en utilisant celui-ci
WO2019135534A1 (fr) Dispositif électronique et procédé permettant de commander ce dernier
WO2019125082A1 (fr) Dispositif et procédé de recommandation d'informations de contact
KR20160072621A (ko) 학습과 추론이 가능한 로봇 서비스 시스템
WO2019013456A1 (fr) Procédé et dispositif de suivi et de surveillance de crise d'épilepsie sur la base de vidéo
EP3652925A1 (fr) Dispositif et procédé de recommandation d'informations de contact
WO2020141907A1 (fr) Appareil de production d'image permettant de produire une image en fonction d'un mot clé et procédé de production d'image
WO2024101731A1 (fr) Procédé de réalisation d'une consultation personnalisée pour chaque type d'utilisateur à partir d'une intelligence artificielle
EP3685279A1 (fr) Procédé de recherche de contenu et dispositif électronique associé
WO2019221479A1 (fr) Climatiseur et son procédé de commande
WO2019151689A1 (fr) Dispositif électronique et procédé de commande associé
WO2013141667A1 (fr) Système produisant des informations quotidiennes sur la santé et procédé produisant des informations quotidiennes sur la santé
Su et al. An eye tracking system and its application in aids for people with severe disabilities
WO2019054715A1 (fr) Dispositif électronique et son procédé d'acquisition d'informations de rétroaction
US11548144B2 (en) Robot and controlling method thereof
WO2015178549A1 (fr) Procédé et appareil pour la fourniture d'un service de sécurité en utilisant un seuil de stimulation ou moins
CN113661036A (zh) 信息处理装置、信息处理方法以及程序

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17936906

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17936906

Country of ref document: EP

Kind code of ref document: A1