KR20170105768A - A system for user-robot interaction, and information processing method for the same

Info

Publication number
KR20170105768A
Authority
KR
South Korea
Prior art keywords
robot
information
user
situation
surrounding environment
Prior art date
Application number
KR1020160028824A
Other languages
Korean (ko)
Other versions
KR101842963B1 (en)
Inventor
최종석
임윤섭
윤상석
박성기
김창환
김동환
Original Assignee
한국과학기술연구원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국과학기술연구원
Priority to KR1020160028824A
Publication of KR20170105768A
Application granted
Publication of KR101842963B1

Classifications

    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661 Programme controls characterised by task planning, object-oriented languages
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B25J13/08 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J19/02 Sensing devices (accessories fitted to manipulators)
    • G05B17/02 Systems involving the use of models or simulators of said systems, electric

Abstract

The present invention relates to a system for user-robot interaction and an information processing method for the same. The system comprises: a comprehensive knowledge management unit containing instances of the information forming a context model, together with sensing information received from sensors; a situation inference unit that, in response to a received query, recognizes and infers the current surrounding environment and situation based on the comprehensive knowledge acquired through interaction with the comprehensive knowledge management unit; and a robot behavior determination unit that determines the next behavior of the robot based on the received inference. The robot can therefore respond adaptively to situations arising in a human living environment, realizing improved user-robot interaction in a real environment.

Description

SYSTEM AND INFORMATION PROCESSING METHOD FOR USER-ROBOT INTERACTION

The present invention relates to a system and an information processing method for user-robot interaction, and more particularly, to a system and an information processing method that can infer the current situation from interaction patterns, sensory information, and contextual knowledge.

In recent years, technologies that utilize robots directly or indirectly have been developed in various fields. Some robots perform tasks that are difficult for people, such as handling heavy mechanical parts in factories, while others interact directly with humans in daily life. However, most current robot technologies either repeat simple tasks or perform predetermined actions and interactions based on scenarios developed through considerable effort by robot developers. For example, a robot arm in a factory repeatedly performs the operations required for a production process according to pre-programmed code, and recently developed home artificial-intelligence robots likewise perform only a predetermined, limited set of operations. There is no logical inference or response when the interaction goes beyond the predetermined scenario, or when a new situation arises that is related to the previous situation and surrounding environment.

However, actual human-human interaction follows no scenario, and the events that occur over time are related to each other. Humans use this temporal relatedness, along with a variety of other information, in their interactions. Currently developed robots do not implement these capabilities and cannot actively cope with the various situations occurring in a real environment.

To solve the problems in the information processing of existing robots, which cannot actively cope with the various situations of an actual surrounding environment, the present invention proposes a technique for performing user-robot interaction more smoothly by inferring the current situation from comprehensive interaction knowledge. This comprehensive knowledge includes not only the various sensory inputs to the robot, but also a context model containing knowledge about the social and environmental context, and long-term memory stored in the robot.

A system for user-robot interaction according to an exemplary embodiment includes: a comprehensive knowledge management unit containing instances of the information constituting an ontology-based context model, which includes knowledge about the social and environmental context, together with sensory information received from sensors installed on the robot and in the surrounding environment; a robot behavior determination unit that transmits a query for recognizing and inferring the current surrounding environment and situation to a situation inference unit, and determines the next behavior of the robot based on the received inference; and a situation inference unit that, in response to the query received from the robot behavior determination unit, recognizes and infers the current surrounding environment and situation of the robot based on the comprehensive knowledge acquired from the comprehensive knowledge management unit, and transmits information about the inference result to the robot behavior determination unit.

The system for user-robot interaction according to an exemplary embodiment may further include a long-term memory storage unit that stores predefined information about the user and the surrounding environment, or information related to events occurring during user-robot interaction. The event information includes at least one of the user's reaction to the surrounding environment and situation, or the user's reaction when the robot performs a determined action, together with information about the results of reasoning based on those reactions.

In one embodiment, the long-term memory storage unit keeps, as long-term memory information, information about events that occur repeatedly at least a threshold number of times during a critical period among the events occurring during user-robot interaction, and deletes information about events repeated fewer than the threshold number of times during that period.

In one embodiment, the sensory information received from the sensors includes at least one of a voice signal, a video signal, a bio-signal of the user, the current position and location of the user and the robot, and the weather or temperature of the surrounding environment.

In one embodiment, knowledge of the social and environmental context of the context model may be updated periodically via a wired or wireless network.

In one embodiment, when the result of reasoning about the received query indicates a surrounding environment and situation that do not exist in the comprehensive knowledge management unit, the situation inference unit may search for a similar situation and a corresponding action through a wired or wireless network, or the robot behavior determination unit may cause the robot to display a message requesting the user to input behavior information.

An information processing method for user-robot interaction according to an embodiment of the present invention comprises: receiving sensory information from a sensor installed on the robot and a sensor installed in the surrounding environment; transmitting a query for recognizing and inferring the current surrounding environment and situation; recognizing and inferring the current surrounding environment and situation of the robot, in response to the transmitted query, based on comprehensive knowledge that includes instance information from an ontology-based context model containing knowledge about the social and environmental context, predefined information about the user and the surrounding environment or information about events that occurred during past user-robot interaction, and the sensory information received from the sensors; and determining the next behavior of the robot based on the information about the inference result.

In one embodiment, the information about events that occurred during past user-robot interaction may include at least one of the user's reaction to the surrounding environment or the user's reaction when the robot performed a determined action, together with information about the results of inference based on those reactions.

In one embodiment, the step of receiving sensory information includes receiving at least one of a voice signal, a video signal, a bio-signal of the user, the current position and location of the user and the robot, and the weather or temperature of the surrounding environment.

In one embodiment, knowledge of the social and environmental context of the context model may be updated periodically via a wired or wireless network.

In one embodiment, the step of recognizing and inferring the surrounding environment and situation may further include, when the received query concerns a surrounding environment and situation that do not exist in the comprehensive knowledge, retrieving a similar situation and a corresponding action through a wired or wireless network, or causing the robot to display a message requesting the user to input behavior information.

According to embodiments of the present invention, the robot can not only respond to an event by reasoning based on the social and environmental context and pre-input information, but also, by storing the user's reactions and instructions for that event, respond more actively when the same or a similar event occurs later. According to another embodiment, the context model can be updated periodically to cope flexibly with social and environmental contexts that change over time, and the robot can handle an unstored event by searching for it on its own or by requesting new input, thereby extending its knowledge structure for interaction. Accordingly, a robot according to an embodiment of the present invention is differentiated from conventional robots that merely repeat simple operations, realizing a more human-like interaction.

FIG. 1 is a block diagram illustrating a system for user-robot interaction according to one embodiment.
FIG. 2 is a block diagram illustrating the configuration of the context model of FIG. 1 and its update over a wired or wireless network according to an embodiment.
FIG. 3 is a flowchart showing the steps of an information processing method for user-robot interaction according to an embodiment.

Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.

FIG. 1 is a block diagram illustrating a system for user-robot interaction according to one embodiment.

In one embodiment, the comprehensive knowledge management unit 10 of FIG. 1 contains instances of the information constituting the context model 20, together with sensory information 30 received from sensors installed on the robot and in the surrounding environment. The instances stored in the comprehensive knowledge management unit 10 are based on recognized results. As described later, a situation inference unit 70 logically infers the current situation of the robot from the input data; it performs inference based on queries received from the robot behavior determination unit 50 to arrive at conclusions.

FIG. 2 illustrates the configuration of the context model 20 and its update over a wired or wireless network according to one embodiment. In the ontology-based context model 20, words related to a particular event are represented hierarchically, together with inference rules that can be extended. In one embodiment, such a context model may include the social context (e.g., interaction type, social role, relationship, location), the environmental context (e.g., device state, location), and general or personal behavioral styles (e.g., personal information, emotional state, personality, gestures). Using the context model, the situation inference unit 70, described later, enables the robot to recognize and infer the surrounding environment and situation.
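To make this structure concrete, the following is a minimal Python sketch of how an ontology-style context model with a concept hierarchy and stored instances might be represented; the class names and example hierarchy are our own illustrative assumptions, not taken from the patent.

    from dataclasses import dataclass

    @dataclass
    class Concept:
        """A node in the ontology hierarchy, e.g. SocialContext -> Relationship."""
        name: str
        parent: "Concept | None" = None

        def is_a(self, ancestor: "Concept") -> bool:
            # Walk up the parent chain to test whether this concept falls under `ancestor`.
            node = self
            while node is not None:
                if node is ancestor:
                    return True
                node = node.parent
            return False

    @dataclass
    class Instance:
        """A concrete fact stored in the comprehensive knowledge management unit."""
        concept: Concept
        value: object

    # Illustrative hierarchy: context splits into social and environmental branches.
    context       = Concept("Context")
    social        = Concept("SocialContext", context)
    environmental = Concept("EnvironmentalContext", context)
    relationship  = Concept("Relationship", social)
    location      = Concept("Location", environmental)

    # Example instances recognized from sensing and user input.
    facts = [Instance(location, "home"), Instance(relationship, ("user", "club member"))]
    print([f.concept.name for f in facts if f.concept.is_a(context)])
    # ['Location', 'Relationship']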

For example, if two or more people from a group the user belongs to visit the user's home in the evening, the robot may gather the place ('home'), the time ('evening'), and the personal information associated with each person, and extend this information according to the inference rules. For the case 'members of the user's group are at home in the evening,' the social and environmental context can be used to deduce that a group meeting is taking place at the user's home, and the next action can be determined accordingly.

As another example, if the user returns home at around 2 p.m. in mid-August, the robot collects the time information ('mid-August', 'around 2 p.m.') and, according to the social and environmental context, can infer that 'the user is likely to be very thirsty,' and can thus make a decision such as handing a drink to the user.
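Both examples reduce to applying hand-written inference rules over collected context facts. The sketch below is a hedged illustration of such forward rules; the rule bodies and fact keys are assumptions for exposition, not the patent's actual rule set.

    from datetime import datetime

    def infer_situation(facts: dict) -> list:
        """Apply simple social/environmental context rules to a dictionary of facts."""
        conclusions = []
        month, hour = facts["time"].month, facts["time"].hour
        # Rule: midsummer early afternoon + user just came home -> probably thirsty.
        if month == 8 and 13 <= hour <= 15 and facts.get("event") == "user_returned_home":
            conclusions.append("user_probably_thirsty")
        # Rule: two or more group members at the user's home in the evening -> group meeting.
        if facts.get("place") == "home" and hour >= 18 and facts.get("guests", 0) >= 2:
            conclusions.append("group_meeting_at_home")
        return conclusions

    print(infer_situation({"time": datetime(2016, 8, 15, 14, 0),
                           "event": "user_returned_home"}))
    # ['user_probably_thirsty']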

In one embodiment, in order to determine the surroundings and the situation of the robot, it is necessary to detect various sensory information (for example, speech recognition device, video camera, etc.) installed on the robot, 30) (for example, a voice signal, a video signal, a bio-signal, a current position and a location of a user and a robot, a weather or a temperature of the surrounding environment) are transmitted to the comprehensive knowledge management unit 10, (For example, the current position of the user and the robot displayed on the map, the circumference indicated by the icon), and the type of information that the user can recognize The weather of the environment, or the current temperature expressed in numbers, etc.). In one embodiment, sensory information received by sensors installed in the environment may be transmitted to the robot via a wired or wireless network. In some embodiments, some of the received sensory information (location, weather, temperature, etc.) may be displayed on a display (not shown) mounted on the robot, or a display such as a smartphone, laptop, It is possible to transmit and display the electronic device.
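A short sketch of this sensing-to-knowledge conversion follows; the field names and icon mapping are assumptions for illustration, since the patent does not fix a schema.

    def normalize_sensing(raw: dict) -> dict:
        """Convert raw sensor payloads into a user-recognizable knowledge form."""
        knowledge = {}
        if "gps" in raw:                    # (lat, lon) -> a marker shown on a map
            knowledge["position"] = {"display": "map_marker", "coords": raw["gps"]}
        if "temperature_c" in raw:          # raw float -> temperature expressed as a number
            knowledge["temperature"] = f"{raw['temperature_c']:.1f} C"
        if "weather_code" in raw:           # numeric code -> an icon the user recognizes
            icons = {0: "sunny", 1: "cloudy", 2: "rain"}
            knowledge["weather"] = icons.get(raw["weather_code"], "unknown")
        return knowledge

    print(normalize_sensing({"gps": (37.5, 127.0), "temperature_c": 31.2, "weather_code": 0}))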

In one embodiment, the knowledge about the social and environmental context in the context model 20 may be periodically updated via a wired or wireless network. Because the social and environmental context changes over time, with changes in the social environment and new language, reflecting these changes in user-robot interaction allows more adaptive and active behavior decisions. In one embodiment, the update over the network may be based on a search engine or a cloud service.

The long-term memory storage unit 90 of FIG. 1 stores predefined information about the user and the surrounding environment, and information about events occurring during user-robot interaction.

In one embodiment, the user may input into the long-term memory storage unit 90 personal information such as the user's name, physical information, family and friend relationships, workplace or school, mealtimes, bedtime and wake-up time, contact information of acquaintances, medical history, or the types of medication being taken. The robot can then draw on this personal information when recognizing and reasoning about a situation and determining its behavior. For example, a message inviting the user to take a post-meal medication can be shown on a display (not shown) mounted on the robot, based on the pre-entered mealtimes and medication types, or the route and traffic conditions to the workplace or school can be presented based on the recognized current location. In addition, the robot can enter a power-saving state after the user's bedtime and wake the user with an alarm at the wake-up time. In one embodiment, as described above, such information or alarms can be shown directly on the robot's display or transmitted over a wired or wireless network to an electronic device with a display, such as the user's smartphone, laptop, or desktop.

Various events occur during user-robot interaction according to an embodiment. Noteworthy data from these events are stored in long-term memory and can be used in situation reasoning, enabling interaction that is more responsive to the user's state.

In one embodiment, when a specific event occurs, the long-term memory storage unit 90 may store the user's reaction to the surrounding environment and situation under that event. For example, if the user comes home at around 2 p.m. in mid-August, drinks cold ice water, and turns on the air conditioning, this behavior can be saved as the user's reaction to the event. When the user later returns home on a summer afternoon, the robot can recommend cold water and determine its next action, such as operating the air conditioning, based on the user's behavior information stored in the long-term memory storage unit 90.

In another embodiment, the long-term memory storage unit 90 may store the user's reaction when the robot performs a specific action upon a specific event. For example, if in mid-August a user who has returned home at around 2 p.m. is offered cold ice water by the robot but declines it and drinks a cold beer instead, this behavior can be saved as the user's reaction to the event. When the user later returns home on a summer afternoon, the robot may decide to recommend a cold beer instead of cold ice water, based on the user's behavior information stored in the long-term memory storage unit 90.

In one embodiment, to manage the robot's memory effectively and to reduce the time required to retrieve and process stored information, the long-term memory storage unit 90 may keep, as long-term memory information, information about events that occur repeatedly at least a threshold number of times during a critical period among the events occurring during user-robot interaction, and may delete information about events repeated fewer than the threshold number of times during that period. For example, if no further group meeting takes place for one year after a group meeting at the user's home, the meeting may be recognized as a less important event and the information about it deleted from memory.
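The retention rule can be sketched as follows; the threshold value, period, and event schema are illustrative assumptions, since the patent leaves them unspecified.

    from datetime import datetime, timedelta

    def prune_long_term_memory(events: list, threshold: int = 3,
                               critical_period: timedelta = timedelta(days=365),
                               now: datetime = None) -> list:
        """Keep only events whose kind recurred at least `threshold` times
        within `critical_period`; everything else is deleted."""
        now = now or datetime.now()
        counts = {}
        for event in events:
            if now - event["timestamp"] <= critical_period:
                counts[event["kind"]] = counts.get(event["kind"], 0) + 1
        return [e for e in events if counts.get(e["kind"], 0) >= threshold]

    events = [{"kind": "returned_home_hot_day", "timestamp": datetime(2016, 8, d)}
              for d in (1, 8, 15)]
    events.append({"kind": "group_meeting", "timestamp": datetime(2016, 3, 10)})
    kept = prune_long_term_memory(events, now=datetime(2016, 9, 1))
    print({e["kind"] for e in kept})  # {'returned_home_hot_day'}; the one-off meeting is dropped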

The robot behavior determination unit 50 and the situation inference unit 70 interact with the comprehensive knowledge management unit 10 to actually infer the situation and determine the robot's behavior. In one embodiment, the robot behavior determination unit 50 transmits a query for recognizing and inferring the robot's current surrounding environment and situation to the situation inference unit 70, receives the inferred facts from the situation inference unit 70, and can then determine the robot's next action. In one embodiment, the robot behavior determination unit 50 accesses the comprehensive knowledge management unit 10 to construct a query about the surrounding environment and situation based on the sensory information 30 stored there, and transmits the query to the situation inference unit 70.

Upon receiving the query, the situation inference unit 70 accesses the comprehensive knowledge management unit 10 and infers the surrounding environment and situation based on the sensory information 30 received from the sensors installed on the robot and in the surrounding environment, and on the instances of the information constituting the context model 20. In this reasoning process, it uses the inference rules linking the social context (e.g., interaction type, social role, relationship, location), the environmental context (e.g., device state, location), and general or personal behavioral styles (e.g., personal information, emotional state, personality, gestures) of the context model.

The situation inference unit 70 then transmits the inferred facts to the robot behavior determination unit 50, and the robot behavior determination unit 50 can determine the robot's next behavior based on the received inference.
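The query/answer handshake between the two units might look like the sketch below. The class names mirror the reference numerals, but the dictionary lookup and action table are stand-ins for the patent's ontology-based reasoning, which is not specified at this level of detail.

    class SituationInferenceUnit:          # cf. reference numeral 70
        def __init__(self, comprehensive_knowledge: dict):
            self.knowledge = comprehensive_knowledge   # cf. numeral 10

        def answer(self, query: str) -> str:
            # Placeholder for ontology-based reasoning: look the query up in the knowledge.
            return self.knowledge.get(query, "unknown_situation")

    class RobotBehaviorDeterminationUnit:  # cf. reference numeral 50
        ACTIONS = {"group_meeting_at_home": "show_past_meeting_records",
                   "user_probably_thirsty": "offer_cold_drink"}

        def __init__(self, inference_unit: SituationInferenceUnit):
            self.inference_unit = inference_unit

        def decide(self, query: str) -> str:
            inferred = self.inference_unit.answer(query)  # send query, receive inference
            return self.ACTIONS.get(inferred, "ask_user_for_guidance")

    unit = RobotBehaviorDeterminationUnit(
        SituationInferenceUnit({"current_situation": "user_probably_thirsty"}))
    print(unit.decide("current_situation"))  # offer_cold_drink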

For example, as in the case above, if members of the user's group are at home in the evening, it can be deduced that 'there is a group meeting at the user's home,' and information about past group meetings can be loaded from the long-term memory storage unit 90 and shown on the display to jog the user's memory.

As another example, it can be deduced from the time information 'mid-August' and 'around 2 p.m.' that the user may be thirsty according to the social and environmental context; the robot then loads information about the user's favorite beverage from the long-term memory storage unit 90 and recommends the beverage, or operates the air conditioner to lower the room temperature.

In an embodiment, if the result of inferring about the query received by the situation inference unit 70 is a surrounding environment and situation that do not exist in the comprehensive knowledge management unit 10, similar situations and corresponding actions can be retrieved through the network. That is, when the surrounding environment and situation that the robot recognizes through sensory information about a specific event are not configured as instances in the context model 20, are not included in the information previously input by the user, and cannot be handled with the information stored in the robot, information about the event can be retrieved over a wired or wireless network using a communication device (not shown) included in the robot. If a social and environmental context exists for the same or a similar event, the search result may be downloaded to the long-term memory storage unit 90 and used for judging the event, inference, and behavior determination.

In another embodiment, if the situation inference unit 70 cannot resolve the received query, the robot behavior determination unit may cause the robot to display a message requesting the user to input behavior information. That is, when an event occurs that exists in neither the context model 20 nor the long-term memory storage unit 90, a message requesting additional input, so that the robot can recognize the surrounding environment and situation or decide its behavior for the event, may be shown on the robot's display or transmitted over a wired or wireless network to an electronic device with a display, such as the user's smartphone, laptop, or desktop. Likewise, such inputs and the resulting robot behavior can be stored in the long-term memory storage unit 90 and used for the same or similar events occurring later.
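The two-stage fallback (network search first, then a request to the user) can be sketched as follows; `search_network` and `display_message` are hypothetical stand-ins for the unspecified communication device and display components.

    def search_network(query: str):
        """Stand-in for the search-engine/cloud lookup; returns None when nothing is found."""
        return None

    def display_message(text: str):
        print(text)  # stand-in for the robot's display or a paired smartphone/laptop

    def handle_unknown_situation(query: str, long_term_memory: dict):
        """Fallback when inference finds no matching environment/situation."""
        result = search_network(query)
        if result is not None:
            long_term_memory[query] = result   # cache for the same or similar future events
            return result
        # Nothing similar found online: ask the user for a behavior instruction.
        display_message(f"No stored behavior for '{query}'. Please teach me what to do.")
        return None

    memory = {}
    handle_unknown_situation("stranger_at_door_at_night", memory)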

FIG. 3 illustrates each step of an information processing method for user-robot interaction according to an embodiment.

In one embodiment, the robot receives sensory information (e.g., voice signals, video signals, bio-signals, the current position and location of the user and the robot, and the weather or temperature of the surrounding environment) from sensors installed on the robot and in the surrounding environment (S100). In one embodiment, sensory information received by sensors installed in the environment may be transmitted to the robot via a wired or wireless network.

Next, the processor that determines the robot's behavior (for example, the robot behavior determination unit 50) transmits a query for recognizing and inferring the current surrounding environment and situation to an inference processor (for example, the situation inference unit 70) (S200).

Then, in response to the transmitted query, the current surrounding environment and situation of the robot are recognized and inferred (S300) based on comprehensive knowledge that includes instance information from the context model 20 containing knowledge about the social and environmental context, predefined information about the user and the surrounding environment or information about events that occurred during past user-robot interaction, and the sensory information received from the sensors, converted as described with reference to FIG. 1 into a form usable as comprehensive knowledge or recognizable by the user (e.g., the current position of the user and the robot displayed on a map, the weather of the surrounding environment indicated by an icon, or the current temperature expressed as a number).

The inference processor then transmits information about the inference result to the robot behavior determination processor, which determines the robot's next behavior based on the received inference (S400).
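Composing the four steps, one interaction cycle might look like the following sketch; the callables and fact keys are illustrative assumptions layered over the earlier examples, not the patent's own interfaces.

    def interaction_cycle(sensors: dict, comprehensive_knowledge: dict, decide) -> str:
        # S100: receive sensory information from robot-mounted and environment sensors.
        comprehensive_knowledge.update({name: read() for name, read in sensors.items()})
        # S200: the behavior-determining processor issues a query about the situation.
        query = "current_situation"
        # S300: recognize and infer against the comprehensive knowledge.
        inference = comprehensive_knowledge.get(query, "unknown_situation")
        # S400: determine the robot's next behavior from the inference result.
        return decide(inference)

    action = interaction_cycle(
        {"thermometer": lambda: 31.2},
        {"current_situation": "user_probably_thirsty"},
        lambda inferred: {"user_probably_thirsty": "offer_cold_drink"}.get(inferred, "ask_user"),
    )
    print(action)  # offer_cold_drink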

In one embodiment, the predefined information about the user and the surrounding environment in step S300 may include personal information such as the user's name, physical information, family and friend relationships, workplace or school, mealtimes, bedtime and wake-up time, contact information of acquaintances, medical history, or the types of medication being taken; the robot can draw on this personal information when recognizing and reasoning about the situation.

In one embodiment, the information about events that occurred during past user-robot interaction in step S300 may include the user's reaction to the surrounding environment and situation when a specific event occurred. In another embodiment, it may include the user's reaction to a specific action the robot performed when a specific event occurred.

To the steps (S100 to S400) of the information processing method for user-robot interaction according to the embodiment of FIG. 3, the features described with reference to FIGS. 1 and 2 can be applied in a similar manner: the long-term memory management scheme (storing only information about events repeated at least a threshold number of times within a critical period and deleting the rest), the periodic update of the context model over a network, and the handling of events not stored in the robot.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, the invention is not limited to the disclosed embodiments or constructions. It will be understood that the invention can be variously modified and changed without departing from its spirit and scope.

Through the user-robot interaction system and information processing method described above, a robot according to embodiments of the present invention can interact with the user more actively than a robot that only performs simple, repetitive operations according to pre-programmed code. Robots in various fields, such as home, medical, industrial, or recreational robots, can be utilized more fully by having a system that adapts better to diverse environments and situations.

10: comprehensive knowledge management unit
20: context model
30: sensory information
50: robot behavior determination unit
70: situation inference unit
90: long-term memory storage unit

Claims (11)

1. A system for user-robot interaction, comprising:
a comprehensive knowledge management unit containing instances of the information constituting a context model that includes knowledge about a social and environmental context, together with sensory information received from sensors installed on the robot and in a surrounding environment;
a robot behavior determination unit that transmits a query for recognizing and inferring a current surrounding environment and situation to a situation inference unit, and determines a next behavior of the robot based on information about an inference result received from the situation inference unit; and
a situation inference unit that, in response to the query received from the robot behavior determination unit, recognizes and infers the current surrounding environment and situation of the robot based on the comprehensive knowledge acquired from the comprehensive knowledge management unit, and transmits information about the inference result to the robot behavior determination unit.
2. The system according to claim 1, further comprising a long-term memory storage unit that stores predefined information about the user and the surrounding environment, or information about an event occurring during user-robot interaction,
wherein the information about the event includes at least one of a reaction of the user to the surrounding environment or a reaction of the user when the robot performs the determined behavior, together with information about a result of reasoning in the situation inference unit based on those reactions.
3. The system of claim 2, wherein the long-term memory storage unit keeps, as long-term memory information, information about an event occurring repeatedly at least a threshold number of times during a critical period among events occurring during the user-robot interaction, and deletes information about events repeated fewer than the threshold number of times during the critical period.
4. The system according to claim 1, wherein the sensory information received from the sensors includes at least one of a voice signal, a video signal, a bio-signal of the user, a current position and location of the user and the robot, and a weather or temperature of the surrounding environment.
5. The system according to claim 1, wherein the knowledge about the social and environmental context in the context model is periodically updated via a wired or wireless network.
6. The system according to claim 1, wherein, when the result of inferring about the query received by the situation inference unit is a surrounding environment and situation that do not exist in the comprehensive knowledge management unit, the situation inference unit searches for a similar situation and a corresponding action through a wired or wireless network, or the robot behavior determination unit causes the robot to display a message requesting the user to input behavior information.
7. An information processing method for user-robot interaction, comprising:
receiving sensory information from a sensor installed on the robot and a sensor installed in a surrounding environment;
transmitting a query for recognizing and inferring a current surrounding environment and situation;
recognizing and inferring the current surrounding environment and situation of the robot, in response to the transmitted query, based on comprehensive knowledge that includes instance information from a context model containing knowledge about a social and environmental context, predefined information about the user and the surrounding environment or information about an event that occurred during past user-robot interaction, and the sensory information received from the sensors; and
determining a next behavior of the robot based on the result of the inference.
8. The method of claim 7, wherein the information about an event that occurred during past user-robot interaction includes at least one of a reaction of the user to the surrounding environment or a reaction of the user when the robot performed the determined behavior, together with information about a result of inference based on those reactions.
9. The method of claim 7, wherein receiving sensory information comprises receiving at least one of a voice signal, a video signal, a bio-signal of the user, a current position and location of the user and the robot, and a weather or temperature of the surrounding environment.
10. The method of claim 7, wherein the knowledge about the social and environmental context in the context model is periodically updated via a wired or wireless network.
11. The method of claim 7, wherein, in recognizing and inferring the surrounding environment and situation, when the received query concerns a surrounding environment and situation that do not exist in the comprehensive knowledge, the method further comprises retrieving a similar situation and corresponding action through a wired or wireless network, or causing the robot to display a message requesting the user to input behavior information.
KR1020160028824A 2016-03-10 2016-03-10 A system for user-robot interaction, and information processing method for the same KR101842963B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020160028824A KR101842963B1 (en) 2016-03-10 2016-03-10 A system for user-robot interaction, and information processing method for the same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020160028824A KR101842963B1 (en) 2016-03-10 2016-03-10 A system for user-robot interaction, and information processing method for the same

Publications (2)

Publication Number Publication Date
KR20170105768A (en) 2017-09-20
KR101842963B1 KR101842963B1 (en) 2018-03-29

Family

ID=60033718

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160028824A KR101842963B1 (en) 2016-03-10 2016-03-10 A system for user-robot interaction, and information processing method for the same

Country Status (1)

Country Link
KR (1) KR101842963B1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020050891A1 (en) * 2018-09-06 2020-03-12 Misty Robotics, Inc. Robot memory management techniques
WO2020130734A1 (en) * 2018-12-21 2020-06-25 삼성전자 주식회사 Electronic device for providing reaction on basis of user state and operating method therefor
CN111949773A (en) * 2019-05-17 2020-11-17 华为技术有限公司 Reading equipment, server and data processing method
CN114227717A (en) * 2021-12-31 2022-03-25 深圳市优必选科技股份有限公司 Intelligent inspection method, device, equipment and storage medium based on inspection robot

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101013384B1 (en) * 2008-12-29 2011-02-14 한양대학교 산학협력단 Knowledge information system for service of intelligent robot

Also Published As

Publication number Publication date
KR101842963B1 (en) 2018-03-29

Similar Documents

Publication Publication Date Title
US11607182B2 (en) Voice controlled assistance for monitoring adverse events of a user and/or coordinating emergency actions such as caregiver communication
US10992491B2 (en) Smart home automation systems and methods
KR101842963B1 (en) A system for user-robot interaction, and information processing method for the same
US9501745B2 (en) Method, system and device for inferring a mobile user's current context and proactively providing assistance
US11876925B2 (en) Electronic device and method for controlling the electronic device to provide output information of event based on context
US20140052680A1 (en) Method, System and Device for Inferring a Mobile User's Current Context and Proactively Providing Assistance
US20060004680A1 (en) Contextual responses based on automated learning techniques
CN111512617B (en) Device and method for recommending contact information
US11483172B2 (en) Integrated control method and system for home appliance using artificial intelligence
CN110720100A (en) Information processing apparatus, information processing method, and program
US20230237059A1 (en) Managing engagement methods of a digital assistant while communicating with a user of the digital assistant
Leake et al. Cases, context, and comfort: Opportunities for case-based reasoning in smart homes
Aminikhanghahi et al. Thyme: Improving smartphone prompt timing through activity awareness
US11907822B2 (en) Controlling conversational digital assistant interactivity
US20210004702A1 (en) System and method for generating information for interaction with a user
KR20180046124A (en) System, method and program for analyzing user trait
US20210216815A1 (en) Electronic apparatus and operating method thereof
US20220051073A1 (en) Integrated Assistance Platform
KR20190109653A (en) Apparatus and method for generating and managing knowledge for service robot based on situation
US20220385767A1 (en) Targeted visitor notifications
US20200410317A1 (en) System and method for adjusting presentation features of a social robot
US20210350797A1 (en) System and method for providing voice assistance service
Obo et al. Lifelog visualization for elderly health care in informationally structured space
WO2020168454A1 (en) Behavior recommendation method and apparatus, storage medium, and electronic device
JP2023059602A (en) Program, information processing method, and information processing apparatus

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
AMND Amendment
E601 Decision to refuse application
AMND Amendment
X701 Decision to grant (after re-examination)
GRNT Written decision to grant