WO2013114493A1 - Communication pull-in system, communication pull-in method, and communication pull-in program - Google Patents
Communication pull-in system, communication pull-in method, and communication pull-in program
- Publication number
- WO2013114493A1 (PCT/JP2012/007325, JP2012007325W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- robot
- pull
- subject
- communication
- person
- Prior art date
Links
Images
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/0005—Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S901/00—Robots
- Y10S901/01—Mobile robot
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S901/00—Robots
- Y10S901/46—Sensing device
- Y10S901/47—Optical
Definitions
- the present invention relates to a communication pull-in system used for a robot that communicates with a person.
- Patent Document 1 discloses a technology for improving safety by allowing a person to visually recognize a danger range that varies as the operation of a mobile robot changes.
- An object of the present invention is to provide a communication pull-in system that can smoothly start communication between a robot and a person.
- a communication pull-in system according to the present invention is a communication pull-in system mounted on a robot that communicates with a target person, and includes: a person specifying unit that specifies the position of the target person; a light source control unit that moves light toward the specified position of the target person; a pull-in control unit that instructs the robot to perform a pull-in operation for causing the subject to recognize the direction of the robot; and a person recognition specifying unit that determines whether the subject has recognized the robot. When it is determined that the subject has recognized the robot, the robot is instructed to start communication with the subject.
- a communication pull-in method according to the present invention is a communication pull-in method implemented in a robot that communicates with a target person, and includes: specifying the position of the target person; moving light toward the specified position of the target person; instructing the robot to perform a pull-in operation for causing the subject to recognize the direction of the robot; determining whether the subject has recognized the robot; and, when it is determined that the subject has recognized the robot, instructing the robot to start communication with the subject.
- a communication pull-in program according to the present invention is a communication pull-in program that causes a computer to perform communication pull-in processing for a robot that communicates with a target person, and causes the computer to execute: a person specifying process that specifies the position of the target person; a light source control process that moves light toward the specified position of the target person; a pull-in control process that instructs the robot to perform a pull-in operation for causing the subject to recognize the direction of the robot; a person recognition specifying process that determines whether the target person has recognized the robot; and a process that, when it is determined that the target person has recognized the robot, instructs the robot to start communication with the target person.
- FIG. 1 is a schematic view showing the appearance of a robot using a first embodiment (Embodiment 1) of the communication pull-in system according to the present invention.
- the robot shown in FIG. 1 has an appearance that suggests it is meant for communication, and includes a projection module 100, a head 101, a sensing module 102, and an arm 103.
- the projection module 100 has a light source and can irradiate a floor or wall with light, or project an image.
- the projection module 100 can adjust its projection angle under mechanical control and can project onto a specific place.
- the sensing module 102 specifies a target person (hereinafter, sometimes referred to as a person) to communicate with, identifies the location of the light source, and determines whether the target person has noticed the light emitted from the light source.
- the sensing module 102 can be realized by, for example, a camera or an infrared sensor.
- the head 101 and the arm 103 can be driven by a motor, a servo mechanism, or the like.
- the head 101 can perform a motion of nodding to the subject.
- the arm 103 can perform a beckoning operation.
- FIG. 2 is a block diagram showing the configuration of the first embodiment of the communication pull-in system according to the present invention.
- the communication pull-in system according to the present embodiment includes a person specifying unit 200, a light source control unit 201, a pull-in control unit 203, and a person recognition specifying unit 204.
- the person identifying unit 200 detects the position and movement of a person using image recognition with a camera or the like, and identifies the position of the target person for communication.
- the light source control unit 201 determines, according to the specified position of the person, a path along which the light emitted from the light source of the projection module 100 moves, and controls the light source so that light is emitted along that path (for example, the light source control unit moves the light, that is, the irradiation position of the light, by changing the emission direction).
- the pull-in control unit 203 operates to make the subject recognize the direction of the robot.
- the pull-in control unit 203 operates the robot so that, for example, the head 101 is directed toward the subject and the subject recognizes that the light comes from the robot.
- the light may be moved in the direction of the robot. Also, the movement of the robot and the movement of light may be linked.
- the human recognition identification unit 204 determines whether the target person has recognized the robot. For example, when it is confirmed from the image captured by the camera that the subject has turned his or her face toward the robot or has moved toward the robot, it is determined that the subject has recognized the light. When it is determined that the subject has recognized the light, the communication pull-in system instructs the robot to start communication with the subject. When it is determined that the subject has not recognized the light, the pull-in control unit 203 again performs the operation for causing the subject to recognize the direction of the robot.
- the person identification unit 200, the light source control unit 201, the pull-in control unit 203, and the person recognition identification unit 204 included in the communication pull-in system according to the present embodiment can be realized by a CPU that executes processing based on a program.
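- since these units run as software on a CPU, the overall flow described below (steps S001 to S004 in FIG. 4) can be pictured as a simple control loop. The following minimal Python sketch is only an illustration of that flow; the unit objects, method names, and the retry limit are hypothetical assumptions, not part of the patent.

```python
# Hypothetical sketch of the pull-in flow of FIG. 4 (steps S001-S004); all names are illustrative.

def run_pull_in(person_unit, light_unit, pull_in_unit, recognition_unit, robot,
                max_attempts=10):
    """Try to draw a target person into communication with the robot."""
    target = person_unit.specify_target()                      # S001: locate the target person
    for attempt in range(1, max_attempts + 1):
        light_unit.move_light_toward(target.position)          # S002: move light toward the target
        pull_in_unit.perform_pull_in(robot, target, attempt)   # S003: pull-in action (gaze, wave, ...)
        if recognition_unit.has_recognized(target, robot):     # S004: has the person noticed the robot?
            robot.start_communication(target)                  # e.g. dialogue or information provision
            return True
    return False  # give up and look for the next target person
```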
- FIG. 3 is a block diagram showing the configuration of the pull-in control unit 203.
- the pull-in control unit 203 includes a pull-in state grasping unit 210, a pull-in operation selecting unit 220, a strategy DB (database) 230, a pull-in operation control unit 240, and a robot operation DB (database) 250.
- the robot pull-in operations stored in the strategy DB are motions similar to those a person makes when urging communication; for example, in order to make the target person recognize the robot, the robot stares at the person or waves its hand in accordance with the movement of the light (the movement of the irradiation position of the light).
- the pull-in state grasping unit 210 acquires the state such as how many times the robot has performed the pull-in operation, how the pull-in operation has been performed, and whether the subject has recognized the robot.
- the pull-in operation selection unit 220 uses a strategy DB 230 that stores a pull-in operation strategy in advance, and selects an operation to be performed by the robot based on the pull-in state grasped by the pull-in state grasping unit 210.
- the pull-in operation control unit 240 performs control for causing the robot to perform the operation of the robot selected by the pull-in operation selecting unit 220. Specifically, the pull-in operation control unit 240 acquires a light movement pattern, a machine control script of the robot, and the like from the robot operation DB 250, and causes the robot to execute the contents of the script.
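- as a rough illustration of this step, the robot operation DB 250 can be thought of as a mapping from an operation name to a light movement pattern and a machine-control script. The sketch below is a hypothetical Python rendering of that idea; the DB layout, operation names, and the robot interface are assumptions, not the patent's actual data format.

```python
# Hypothetical robot operation DB: operation name -> light pattern and machine-control script.
ROBOT_OPERATION_DB = {
    "gaze_at_person": {"light_pattern": "slow_sweep", "script": ["turn_head_to_target"]},
    "wave_arm":       {"light_pattern": "fast_sweep", "script": ["raise_arm", "wave_arm"]},
}

def execute_operation(robot, operation_name):
    """Look up the selected operation and have the robot execute its script."""
    entry = ROBOT_OPERATION_DB[operation_name]
    robot.set_light_pattern(entry["light_pattern"])   # apply the light movement pattern
    for command in entry["script"]:
        robot.execute(command)                        # each command drives the head, arm, or projector
```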
- FIG. 4 is a flowchart showing the operation of the first embodiment of the communication pull-in system according to the present invention.
- the person identifying unit 200 grasps the position of the target person (step S001). Specifically, in order to specify the position of the person, the person specifying unit 200 performs image recognition using the sensing module 102, realized by, for example, a stereo camera, and recognizes the distance and angle between the target person and the robot. As the sensing module 102, sensing means other than image recognition, such as an ultrasonic or infrared distance sensor or a laser range sensor, may also be used.
- when a laser range sensor is used, the person identification unit 200 masks obstacles other than humans in advance, captures objects that move by sensing at regular intervals, and grasps their angle, distance, and direction of movement relative to the robot. The person identifying unit 200 then sets the target person with whom communication is to be established, for example, as the person closest to the robot. In addition, when targeting a child, for example, the person identifying unit 200 identifies and sets the child based on sensed attributes such as height.
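- a minimal sketch of this target-selection step, assuming the sensor pipeline already yields a list of moving, unmasked detections with distance, angle, and estimated height (the field names and the child height threshold below are illustrative assumptions):

```python
def select_target(detections, child_only=False, max_child_height=1.3):
    """Pick the target person, e.g. the moving person closest to the robot.

    detections: list of dicts with 'distance' (m), 'angle' (deg), and 'height' (m),
    already filtered so that masked obstacles are excluded.
    """
    candidates = detections
    if child_only:  # e.g. when targeting children, filter by sensed height
        candidates = [d for d in detections if d["height"] <= max_child_height]
    if not candidates:
        return None
    return min(candidates, key=lambda d: d["distance"])  # person nearest to the robot
```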
- the light source control unit 201 moves light in the direction of the set target person (step S002).
- the height of the projection module 100 shown in FIG. 1 is set in advance, either manually or by a distance sensor or the like. Since the person identifying unit 200 detects the distance between the target person and the robot, the light source control unit 201 calculates the projection angle of the projection module 100 based on that distance and on the distance between the projection module 100 and the sensing module 102. The light source control unit 201 projects light from the projection module 100, drives the projection module 100 to the calculated projection angle, and moves the light.
- when the target person is moving, the light source control unit 201 estimates the point of arrival from the speed and distance of the movement, and projects the light toward the estimated place, taking the drive speed of the projection module 100 into consideration.
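- for instance, if the projection module is mounted at height h and the target stands at horizontal distance d, the downward projection angle follows from simple trigonometry, and a moving target can be led by its estimated displacement during the module's drive time. The sketch below assumes this simplified planar geometry (and ignores the offset between the projection module 100 and the sensing module 102); it is not the exact calculation used in the patent.

```python
import math

def projection_angle(module_height, horizontal_distance):
    """Downward tilt angle (degrees) so that the light lands at the target's position."""
    return math.degrees(math.atan2(module_height, horizontal_distance))

def lead_target(position, velocity, drive_time):
    """Estimate where a moving target will be once the projection module finishes moving."""
    x, y = position
    vx, vy = velocity
    return (x + vx * drive_time, y + vy * drive_time)
```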
- the projection module 100 may be realized by a light source with a narrow irradiation range, such as a laser pointer, or by a projector capable of projecting over a wide range.
- the pull-in control unit 203 gives an instruction to perform a pull-in operation to the robot in order to guide the target person to recognize the direction of the robot and communicate (step S003).
- the pull-in control unit 203 drives the robot mechanism at the same time that the light reaches the subject as a result of the operation in step S002.
- the pull-in action performed by the robot is a motion similar to one a person makes when urging communication; for example, in accordance with the movement of the light, the robot makes the target person recognize it by staring at the person or waving its hand.
- an attention-drawing means such as an LED (Light Emitting Diode) or a speaker may also be mounted on the robot, and this means may emit light or produce a sound.
- the pull-in control unit 203 analyzes the pull-in state.
- the pull-in state grasping unit 210 acquires the state, such as how many times the pull-in has been performed, what type of pull-in has been performed, and whether the human recognition specifying unit 204 has observed any reaction.
- the pull-in action selection unit 220 uses the strategy DB 230 that stores the pull-in action strategy in advance, and sets the action to be performed by the robot based on the pull-in situation grasped by the pull-in situation grasping part 210.
- when the robot selects an operation, the pull-in operation selection unit 220 selects a pull-in operation strategy according to the context, the time of day, and the like. For example, in a time period in which many people pass in front of the robot, the pull-in operation selection unit 220 selects a strategy of approaching the next person when the current target person cannot be drawn in.
- FIG. 5 is an explanatory diagram illustrating an example of a pull-in operation strategy stored in the strategy DB 230.
- the strategy shown in FIG. 5 is an example in which the pull-in operation is changed depending on the number of pull-in attempts. Specifically, with respect to the operation of the light, the pull-in operation control unit 240 moves the light slowly for up to three attempts, moves the light vigorously for up to nine attempts, and gives up when the number of attempts reaches ten. Likewise, with respect to the robot operation, the pull-in operation control unit 240 turns the head 101 to face the target person for up to three attempts, moves the arm 103 for up to nine attempts, and looks for the next target person when the number of attempts reaches ten.
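- in other words, the strategy of FIG. 5 behaves like a lookup keyed by the number of pull-in attempts. The following is a hypothetical Python sketch of such a table and its selection logic; the thresholds mirror the example above, but the data structure and action names are assumptions.

```python
# Hypothetical strategy DB mirroring FIG. 5: number of attempts -> light and robot actions.
STRATEGY_DB = [
    {"max_attempts": 3,  "light": "move_slowly",     "robot": "turn_head_to_target"},
    {"max_attempts": 9,  "light": "move_vigorously", "robot": "wave_arm"},
    {"max_attempts": 10, "light": "give_up",         "robot": "find_next_target"},
]

def select_pull_in_action(attempt_count):
    """Return the light and robot actions for the current number of pull-in attempts."""
    for rule in STRATEGY_DB:
        if attempt_count <= rule["max_attempts"]:
            return rule["light"], rule["robot"]
    return "give_up", "find_next_target"  # beyond the table: stop trying
```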
- the pull-in operation control unit 240 may perform both the light operation and the robot operation, or may perform only one of them.
- the pull-in operation control unit 240 performs control for causing the robot to perform the operation selected by the pull-in operation selecting unit 220. Specifically, a light movement pattern, a robot machine-control script, and the like are acquired from the robot operation DB 250 and executed by the robot.
- for example, the pull-in control unit 203 may project an arrow indicating the direction of the robot, or may gradually change the color of the projected image.
- the pull-in control unit 203 may output words from the speaker and change the words.
- the pull-in control unit 203 may present information in the projected video by displaying a CG (Computer Graphics) character of the robot.
- the pull-in control unit 203 may perform display so that an avatar of the robot appears to be projected into the air and brought closer to the subject.
- options such as “change the color gradually” and “change the word” are set in the strategy DB 230 in advance.
- a script or the like for performing these operations is stored in the robot operation DB 250 in advance.
- the pull-in operation control unit 240 performs control for causing the robot to perform the operation of the robot selected by the pull-in operation selecting unit 220 based on the script stored in the robot operation DB 250.
- after the pull-in operation in step S003 is performed, the human recognition identifying unit 204 determines whether the target person has noticed the robot, that is, whether the pull-in for communication has succeeded (step S004). If it is determined that the person has not recognized the robot, the process of step S003 is performed again.
- the operation of the person recognition identifying unit 204 in step S004 will now be specifically described.
- when the sensing module 102 is, for example, a camera, face orientation determination processing and face detection processing are performed to score whether the subject is facing the robot.
- when the sensing module 102 uses a device other than a camera, for example a laser range sensor, the human recognition identifying unit 204 scores whether the subject is facing the robot based on information such as whether the distance from the robot is decreasing.
- when the score is high, the person recognition specifying unit 204 determines that the pull-in operation has succeeded.
- in that case, the communication pull-in system instructs the robot to perform a success operation, such as nodding to or beckoning the target person, and to start communication with the target person, such as dialogue or information provision. If the score is low, it is determined that the person has not recognized the robot, and the process of step S003 is performed again.
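- one way to picture this scoring step is to combine a face-orientation score from the camera with evidence that the distance to the robot is decreasing, and to compare the result with a threshold. The sketch below is only an illustration of that idea; the patent does not specify the scoring at this level of detail, and the weights, threshold, and names are assumptions.

```python
def recognition_score(face_toward_robot_prob, approach_speed, w_face=0.7, w_dist=0.3):
    """Score how likely it is that the subject has noticed the robot.

    face_toward_robot_prob: 0..1 from face detection / face orientation processing
    approach_speed: m/s, positive when the subject is moving toward the robot
    """
    approach = max(0.0, min(1.0, approach_speed))  # clamp the approach contribution to 0..1
    return w_face * face_toward_robot_prob + w_dist * approach

def has_recognized(face_prob, approach_speed, threshold=0.6):
    """High score: the pull-in succeeded; low score: repeat step S003."""
    return recognition_score(face_prob, approach_speed) >= threshold
```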
- the communication pull-in system according to the present embodiment can simulate the robot approaching a person by using light, rather than having the robot physically approach the person with whom it communicates. Therefore, according to the communication pull-in system of the present embodiment, the start of communication can be made smooth even when the robot is far from the subject, when an obstacle prevents the robot from approaching, or when the robot has no means of locomotion.
- unlike a method that calls the target person only by voice, the communication pull-in system of the present embodiment can draw in the target person even when the target person's name is unknown, when there are other sound sources, or when the space is so wide that it is impossible to tell where the voice is coming from.
- FIG. 6 is a schematic view showing the appearance of a robot using the second embodiment (embodiment 2) of the communication pull-in system according to the present invention.
- the robot shown in FIG. 6 includes a communication module 105 in addition to the configuration of the robot shown in FIG. 1 and is connected to a PC (Personal Computer) 106 via a network 107.
- the communication module 105 is a communication interface that enables network connection.
- the network 107 is, for example, a wireless LAN.
- the type of PC 106 is not limited as long as it can be connected to a network.
- a mobile phone or a tablet may be used.
- the sensing module 102 is, for example, a camera, and an image captured by the sensing module 102 is transmitted from the communication module 105 to the PC 106 via the network 107. The operator can remotely operate the robot mechanism while viewing the robot's sensing data displayed on the PC 106.
- in the present embodiment, the operations performed by the person specifying unit 200, the light source control unit 201, the pull-in control unit 203, and the person recognition specifying unit 204 of the first embodiment are performed manually by the operator operating the PC 106.
- an example of the operation of the present embodiment will be described with reference to the flowchart shown in FIG.
- when the operator wants to communicate with a person who has passed in front of the robot, the operator grasps the position of that target person (step S001). Specifically, the operator views the camera video displayed on the screen of the PC 106 and grasps the position of the target person.
- the operator moves the light emitted from the projection module 100 to create an opportunity for communication with the target person (step S002). Specifically, the operator performs a pointing operation, such as clicking on the screen at the place where the subject is, or at the place the subject is expected to move to. The light emitted from the projection module 100 moves toward the pointed target on the screen in accordance with the pointing operation. At this time, the control angle of the projection module 100 is determined based on the orientation of the robot and the position of the target.
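- a rough sketch of how such a pointing operation could be turned into a control angle for the projection module 100, given the robot's orientation and the clicked target position expressed as floor coordinates (the coordinate conventions and function names are illustrative assumptions):

```python
import math

def click_to_pan_tilt(target_xy, robot_heading_deg, module_height):
    """Convert a clicked floor position (meters, in a frame centered on the robot)
    into pan/tilt angles for the projection module."""
    x, y = target_xy
    pan = math.degrees(math.atan2(y, x)) - robot_heading_deg   # rotate into the robot's frame
    distance = math.hypot(x, y)
    tilt = math.degrees(math.atan2(module_height, distance))   # downward tilt to hit the floor
    return pan, tilt
```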
- the operator manually performs the operation that the pull-in control unit 203 performs in the first embodiment, by controlling the light and the robot mechanism using the PC 106 (step S003).
- when the operator looks at the camera video and determines that the subject has turned toward the robot (YES in step S004), the operator starts communication. If it is determined that the subject has not turned toward the robot (NO in step S004), the pull-in operation in step S003 is performed again.
- the operator may perform all of the operations from step S001 to step S004 manually, or some of the operations may be performed automatically using the components shown in FIG. 2, as in the first embodiment.
- the operation in step S003 may be performed by the pull-in control unit 203 based on a preset strategy, while other operations are performed manually by the operator.
- in the communication pull-in system of this embodiment, since the operator can manually perform the pull-in operations and the like, the degree of freedom of robot operation is increased. Therefore, the start of communication can proceed smoothly even in situations where it is difficult to attract the target person through control by the system alone.
- that is, a communication pull-in system according to the present invention includes a person specifying unit (for example, the person specifying unit 200), a light source control unit (for example, the light source control unit 201), a pull-in control unit (for example, the pull-in control unit 203), and a human recognition specifying unit (for example, the human recognition specifying unit 204) that determines whether or not the target person has recognized the robot, and is characterized in that, when it is determined that the target person has recognized the robot, the robot is instructed to start communication with the target person.
- the communication pull-in system may be configured such that the pull-in control unit instructs the output of sound or light as the pull-in operation.
- the communication pull-in system may be configured such that the pull-in control unit instructs the operation of the robot itself as the pull-in operation.
- the present invention can be applied to advertisements, information providing systems, or telepresence robots.
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Automation & Control Theory (AREA)
- Manipulator (AREA)
Abstract
Description
FIG. 1 is a schematic view showing the appearance of a robot using a first embodiment (Embodiment 1) of the communication pull-in system according to the present invention. The robot shown in FIG. 1 has an appearance that suggests it is meant for communication, and includes a projection module 100, a head 101, a sensing module 102, and an arm 103.
As another embodiment of the communication pull-in system according to the present invention, an example using remote operation by an operator will be described. In the following description, explanations of configurations and operations that are the same as in the first embodiment are omitted. FIG. 6 is a schematic view showing the appearance of a robot using a second embodiment (Embodiment 2) of the communication pull-in system according to the present invention. The robot shown in FIG. 6 includes a communication module 105 in addition to the configuration of the robot shown in FIG. 1, and is connected to a PC (Personal Computer) 106 via a network 107.
101 Head
102 Sensing module
103 Arm
105 Communication module
106 PC
107 Network
200 Person specifying unit
201 Light source control unit
203 Pull-in control unit
204 Person recognition specifying unit
210 Pull-in state grasping unit
220 Pull-in operation selection unit
240 Pull-in operation control unit
250 Operation database
230 Strategy DB
250 Robot operation DB
Claims (5)
- A communication pull-in system mounted on a robot that communicates with a target person, the system comprising: a person specifying unit that specifies the position of the target person; a light source control unit that moves light toward the specified position of the target person; a pull-in control unit that instructs the robot to perform a pull-in operation for causing the target person to recognize the direction of the robot; and a person recognition specifying unit that determines whether the target person has recognized the robot, wherein, when it is determined that the target person has recognized the robot, the robot is instructed to start communication with the target person.
- The communication pull-in system according to claim 1, wherein the pull-in control unit instructs output of sound or light as the pull-in operation.
- The communication pull-in system according to claim 1 or claim 2, wherein the pull-in control unit instructs an operation of the robot itself as the pull-in operation.
- A communication pull-in method implemented in a robot that communicates with a target person, the method comprising: specifying the position of the target person; moving light toward the specified position of the target person; instructing the robot to perform a pull-in operation for causing the target person to recognize the direction of the robot; determining whether the target person has recognized the robot; and, when it is determined that the target person has recognized the robot, instructing the robot to start communication with the target person.
- A communication pull-in program for causing a computer to execute communication pull-in processing for a robot that communicates with a target person, the program causing the computer to execute: a person specifying process of specifying the position of the target person; a light source control process of moving light toward the specified position of the target person; a pull-in control process of instructing the robot to perform a pull-in operation for causing the target person to recognize the direction of the robot; a person recognition specifying process of determining whether the target person has recognized the robot; and a process of, when it is determined that the target person has recognized the robot, instructing the robot to start communication with the target person.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP12867118.7A EP2810748A4 (en) | 2012-02-03 | 2012-11-15 | COMMUNICATION COMMITMENT SYSTEM, COMMUNICATION COMMITMENT METHOD, AND COMMUNICATION COMMITMENT PROGRAM |
CN201280068866.5A CN104093527A (zh) | 2012-02-03 | 2012-11-15 | Communication draw-in system, communication draw-in method and communication draw-in program |
JP2013556049A JP5987842B2 (ja) | 2012-02-03 | 2012-11-15 | Communication draw-in system, communication draw-in method, and communication draw-in program |
AU2012368731A AU2012368731A1 (en) | 2012-02-03 | 2012-11-15 | Communication draw-in system, communication draw-in method, and communication draw-in program |
US14/373,684 US9662788B2 (en) | 2012-02-03 | 2012-11-15 | Communication draw-in system, communication draw-in method, and communication draw-in program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012022106 | 2012-02-03 | ||
JP2012-022106 | 2012-09-12 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013114493A1 true WO2013114493A1 (ja) | 2013-08-08 |
Family
ID=48904583
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/007325 WO2013114493A1 (ja) | 2012-02-03 | 2012-11-15 | コミュニケーション引き込みシステム、コミュニケーション引き込み方法およびコミュニケーション引き込みプログラム |
Country Status (6)
Country | Link |
---|---|
US (1) | US9662788B2 (ja) |
EP (1) | EP2810748A4 (ja) |
JP (1) | JP5987842B2 (ja) |
CN (1) | CN104093527A (ja) |
AU (1) | AU2012368731A1 (ja) |
WO (1) | WO2013114493A1 (ja) |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9956687B2 (en) | 2013-03-04 | 2018-05-01 | Microsoft Technology Licensing, Llc | Adapting robot behavior based upon human-robot interaction |
US9056396B1 (en) * | 2013-03-05 | 2015-06-16 | Autofuss | Programming of a robotic arm using a motion capture system |
DE102013215409A1 (de) * | 2013-08-06 | 2015-02-12 | Robert Bosch Gmbh | Projektionseinheit für eine selbsttätig mobile Plattform, Transportroboter und Verfahren zum Betrieb einer selbsttätig mobilen Plattform |
US20180009118A1 (en) * | 2015-02-17 | 2018-01-11 | Nec Corporation | Robot control device, robot, robot control method, and program recording medium |
US10427305B2 (en) * | 2016-07-21 | 2019-10-01 | Autodesk, Inc. | Robotic camera control via motion capture |
JP6565853B2 (ja) * | 2016-09-29 | 2019-08-28 | トヨタ自動車株式会社 | コミュニケーション装置 |
US10467509B2 (en) | 2017-02-14 | 2019-11-05 | Microsoft Technology Licensing, Llc | Computationally-efficient human-identifying smart assistant computer |
US11010601B2 (en) * | 2017-02-14 | 2021-05-18 | Microsoft Technology Licensing, Llc | Intelligent assistant device communicating non-verbal cues |
US11100384B2 (en) | 2017-02-14 | 2021-08-24 | Microsoft Technology Licensing, Llc | Intelligent device user interactions |
US10963493B1 (en) * | 2017-04-06 | 2021-03-30 | AIBrain Corporation | Interactive game with robot system |
US10839017B2 (en) | 2017-04-06 | 2020-11-17 | AIBrain Corporation | Adaptive, interactive, and cognitive reasoner of an autonomous robotic system utilizing an advanced memory graph structure |
US11151992B2 (en) | 2017-04-06 | 2021-10-19 | AIBrain Corporation | Context aware interactive robot |
US10810371B2 (en) | 2017-04-06 | 2020-10-20 | AIBrain Corporation | Adaptive, interactive, and cognitive reasoner of an autonomous robotic system |
US10929759B2 (en) | 2017-04-06 | 2021-02-23 | AIBrain Corporation | Intelligent robot software platform |
JP6572943B2 (ja) * | 2017-06-23 | 2019-09-11 | カシオ計算機株式会社 | ロボット、ロボットの制御方法及びプログラム |
JP6800183B2 (ja) * | 2018-07-13 | 2020-12-16 | 本田技研工業株式会社 | コミュニケーション装置 |
CN109159130A (zh) * | 2018-09-03 | 2019-01-08 | 北京云迹科技有限公司 | 用于机器人的移动位置提示方法及装置、机器人 |
CN110245628B (zh) * | 2019-06-19 | 2023-04-18 | 成都世纪光合作用科技有限公司 | 一种检测人员讨论场景的方法和装置 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002350555A (ja) * | 2001-05-28 | 2002-12-04 | Yamaha Motor Co Ltd | 人検出装置 |
JP2003326479A (ja) * | 2003-05-26 | 2003-11-18 | Nec Corp | 自律行動ロボット |
JP2009045692A (ja) * | 2007-08-20 | 2009-03-05 | Saitama Univ | コミュニケーションロボットとその動作方法 |
JP2009123045A (ja) | 2007-11-16 | 2009-06-04 | Toyota Motor Corp | 移動ロボット及び移動ロボットの危険範囲の表示方法 |
JP2010082714A (ja) * | 2008-09-30 | 2010-04-15 | Mitsubishi Heavy Ind Ltd | コミュニケーションロボット |
JP2011227237A (ja) * | 2010-04-19 | 2011-11-10 | Honda Motor Co Ltd | コミュニケーションロボット |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE60036599T2 (de) | 1999-05-10 | 2008-02-07 | Sony Corp. | Robotervorrichtung, steuerverfahren und aufzeichnungsmedium |
JP2001224867A (ja) | 1999-05-13 | 2001-08-21 | Masanobu Kujirada | 擬似ペット |
JP4651645B2 (ja) * | 2001-09-13 | 2011-03-16 | シャープ株式会社 | 群ロボットシステム |
JP4026758B2 (ja) | 2002-10-04 | 2007-12-26 | 富士通株式会社 | ロボット |
JP4822319B2 (ja) | 2005-10-27 | 2011-11-24 | 株式会社国際電気通信基礎技術研究所 | コミュニケーションロボットおよびそれを用いた注意制御システム |
JP2009113190A (ja) * | 2007-11-09 | 2009-05-28 | Toyota Motor Corp | 自律動作型ロボットおよび自律動作型ロボットの動作制御方法 |
JP4560078B2 (ja) | 2007-12-06 | 2010-10-13 | 本田技研工業株式会社 | コミュニケーションロボット |
JP2011000656A (ja) | 2009-06-17 | 2011-01-06 | Advanced Telecommunication Research Institute International | 案内ロボット |
TWM374628U (en) * | 2009-10-05 | 2010-02-21 | Wen-Chun Chen | Control device capable of executing different actions by determining human body approaching distance |
KR20120054845A (ko) * | 2010-11-22 | 2012-05-31 | 삼성전자주식회사 | 로봇의 음성인식방법 |
US20120274646A1 (en) * | 2011-04-29 | 2012-11-01 | Randy Johnson | Laser particle projection system |
US20130138499A1 (en) * | 2011-11-30 | 2013-05-30 | General Electric Company | Usage measurent techniques and systems for interactive advertising |
WO2013176760A1 (en) * | 2012-05-22 | 2013-11-28 | Intouch Technologies, Inc. | Graphical user interfaces including touchpad driving interfaces for telemedicine devices |
US8996167B2 (en) * | 2012-06-21 | 2015-03-31 | Rethink Robotics, Inc. | User interfaces for robot training |
-
2012
- 2012-11-15 CN CN201280068866.5A patent/CN104093527A/zh active Pending
- 2012-11-15 WO PCT/JP2012/007325 patent/WO2013114493A1/ja active Application Filing
- 2012-11-15 JP JP2013556049A patent/JP5987842B2/ja not_active Expired - Fee Related
- 2012-11-15 AU AU2012368731A patent/AU2012368731A1/en not_active Abandoned
- 2012-11-15 US US14/373,684 patent/US9662788B2/en active Active
- 2012-11-15 EP EP12867118.7A patent/EP2810748A4/en not_active Withdrawn
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002350555A (ja) * | 2001-05-28 | 2002-12-04 | Yamaha Motor Co Ltd | 人検出装置 |
JP2003326479A (ja) * | 2003-05-26 | 2003-11-18 | Nec Corp | 自律行動ロボット |
JP2009045692A (ja) * | 2007-08-20 | 2009-03-05 | Saitama Univ | コミュニケーションロボットとその動作方法 |
JP2009123045A (ja) | 2007-11-16 | 2009-06-04 | Toyota Motor Corp | 移動ロボット及び移動ロボットの危険範囲の表示方法 |
JP2010082714A (ja) * | 2008-09-30 | 2010-04-15 | Mitsubishi Heavy Ind Ltd | コミュニケーションロボット |
JP2011227237A (ja) * | 2010-04-19 | 2011-11-10 | Honda Motor Co Ltd | コミュニケーションロボット |
Non-Patent Citations (1)
Title |
---|
See also references of EP2810748A4 |
Also Published As
Publication number | Publication date |
---|---|
US20150032254A1 (en) | 2015-01-29 |
EP2810748A1 (en) | 2014-12-10 |
CN104093527A (zh) | 2014-10-08 |
EP2810748A4 (en) | 2016-09-07 |
JP5987842B2 (ja) | 2016-09-07 |
US9662788B2 (en) | 2017-05-30 |
JPWO2013114493A1 (ja) | 2015-05-11 |
AU2012368731A1 (en) | 2014-08-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5987842B2 (ja) | コミュニケーション引き込みシステム、コミュニケーション引き込み方法およびコミュニケーション引き込みプログラム | |
EP3178617B1 (en) | Hybrid reality based i-bot navigation and control | |
US11491661B2 (en) | Communication robot and control program of communication robot | |
US9579795B2 (en) | Robot device, method of controlling the same, and program for controlling the same | |
JP5318623B2 (ja) | 遠隔操作装置および遠隔操作プログラム | |
TWI694904B (zh) | 機器人語音操控系統及方法 | |
JP4849244B2 (ja) | 移動ロボットおよび移動速度推定方法 | |
US20060217837A1 (en) | Robot device, movement method of robot device, and program | |
JP2003340764A (ja) | 案内ロボット | |
US20180217671A1 (en) | Remote control apparatus, remote control method, remote control system, and program | |
WO2021220679A1 (ja) | ロボット制御装置、方法、及びプログラム | |
JP2024023193A (ja) | 情報処理装置及び情報処理方法 | |
JP2015066624A (ja) | ロボット制御システム、ロボット制御プログラムおよび説明ロボット | |
KR20190104103A (ko) | 애플리케이션 구동 방법 및 장치 | |
JP2024009862A (ja) | 情報処理装置、情報処理方法、およびプログラム | |
JP2017170568A (ja) | サービス提供ロボットシステム | |
JP7439826B2 (ja) | 情報処理装置、情報処理方法、及びプログラム | |
US20200401139A1 (en) | Flying vehicle and method of controlling flying vehicle | |
KR102301763B1 (ko) | 이동로봇을 제어하기 위한 시스템 및 방법 | |
WO2020166373A1 (ja) | 情報処理装置および情報処理方法 | |
JP2021064299A (ja) | 制御システム、端末装置、制御方法及びコンピュータプログラム | |
WO2020213244A1 (ja) | 情報処理装置、情報処理方法、及びプログラム | |
US20180164895A1 (en) | Remote control apparatus, remote control method, remote control system, and program | |
KR101439249B1 (ko) | 공간 점유 정보를 이용한 로봇 동작 생성 장치 및 방법 | |
KR20240086905A (ko) | 특정인을 추적하는 로봇시스템 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12867118 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2013556049 Country of ref document: JP Kind code of ref document: A |
|
REEP | Request for entry into the european phase |
Ref document number: 2012867118 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012867118 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14373684 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2012368731 Country of ref document: AU Date of ref document: 20121115 Kind code of ref document: A |