WO2020129993A1 - Autonomous robot

Autonomous robot

Info

Publication number
WO2020129993A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
user
motion
watching
baby
Prior art date
Application number
PCT/JP2019/049463
Other languages
French (fr)
Japanese (ja)
Inventor
要 林
秀哉 南地
司 堀ノ内
直紀 沼口
克則 藁谷
Original Assignee
Groove X Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Groove X Inc.
Priority to JP2020561466A (published as JPWO2020129993A1)
Publication of WO2020129993A1
Priority to JP2023202284A (published as JP2024055866A)

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00: Controls for manipulators
    • B25J13/08: Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00: Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02: Alarms for ensuring the safety of persons
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00: Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B25/01: Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems, characterised by the transmission medium
    • G08B25/04: Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems, characterised by the transmission medium using a single signalling line, e.g. in a closed loop

Definitions

  • the present invention relates to a robot that autonomously selects an action according to an internal state or an external environment.
  • The present invention was completed based on the above recognition of the problems, and its first object is to provide technology for giving the user various senses of security in living with a robot.
  • A second object is to provide technology for expressing the robot's affection toward the user.
  • a third object is to provide a technique for allowing a plurality of robots to act in a coordinated manner.
  • An autonomously acting robot includes a motion control unit that selects a motion of the robot, a drive mechanism that executes the motion selected by the motion control unit, a mode setting unit that sets a watching mode for a target person when the target person satisfies a predetermined watching condition, and a communication unit that, in the watching mode, transmits a captured image of the target person to a predetermined communication terminal.
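As a rough illustration of how the claimed units could interact, here is a minimal Python sketch; the class names, the Terminal stand-in, and the example condition are assumptions for illustration, not names used in the publication.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Callable, List


class Mode(Enum):
    NORMAL = auto()
    WATCHING = auto()


@dataclass
class Terminal:
    """Stand-in for the owner's communication terminal (e.g. a smartphone)."""
    received: List[dict] = field(default_factory=list)

    def send_image(self, frame: dict) -> None:
        self.received.append(frame)


@dataclass
class WatchingRobot:
    # The watching condition is injected so different conditions can be tried.
    condition: Callable[[dict], bool]
    mode: Mode = Mode.NORMAL

    def step(self, frame: dict, terminal: Terminal) -> None:
        # Mode setting unit: enter the watching mode when the condition holds.
        if self.mode is Mode.NORMAL and self.condition(frame):
            self.mode = Mode.WATCHING
        # Communication unit: in the watching mode, relay the captured image.
        if self.mode is Mode.WATCHING:
            terminal.send_image(frame)


# Example condition: a baby is detected and no guardian is detected in the frame.
robot = WatchingRobot(condition=lambda f: f["baby"] and not f["guardian"])
phone = Terminal()
robot.step({"baby": True, "guardian": False}, phone)   # enters watching mode
robot.step({"baby": True, "guardian": True}, phone)    # keeps relaying images
print(robot.mode, len(phone.received))                 # Mode.WATCHING 2
```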
  • The robot 100 shares daily life with the user, sometimes thinks of the user, sometimes strives to be useful to the user, and actively seeks the user's affection, thereby demonstrating its presence as a member of the family.
  • the basic configuration of the robot 100 will be described with reference to FIGS. 1 to 4, and then various action scenes of the robot 100 will be described.
  • FIG. 1 is a diagram showing an appearance of the robot 100.
  • FIG. 1A is a front view and FIG. 1B is a side view.
  • the robot 100 is an autonomous action type robot that determines an action based on an external environment and an internal state.
  • the external environment is recognized by various sensors such as a camera and a thermo sensor 115.
  • the internal state is quantified as various parameters expressing the emotion of the robot 100.
  • the robot 100 sets the indoor area of the owner's home as an action range.
  • A person who is involved with the robot 100 is called a “user”. Among users, the owner or administrator of the robot 100 is called the “owner”.
  • the body 104 of the robot 100 has a rounded shape as a whole, and includes an outer skin 314 formed of a soft and elastic material such as urethane, rubber, resin, or fiber.
  • the robot 100 may be dressed.
  • The total weight of the robot 100 is about 5 to 15 kilograms, and its height is about 0.5 to 1.2 meters. Attributes such as a moderate weight, roundness, softness, and pleasant feel make the robot 100 easy to hold and make the user want to hold it.
  • the robot 100 includes a pair of front wheels 102 (left wheel 102a, right wheel 102b) and one rear wheel 103.
  • the front wheels 102 are driving wheels and the rear wheels 103 are driven wheels.
  • the front wheels 102 do not have a steering mechanism, but the rotation speed and rotation direction of the left and right wheels can be individually controlled.
  • the rear wheel 103 is a caster and is rotatable to move the robot 100 back and forth and left and right.
  • the rear wheel 103 may be an omni wheel.
  • the front wheel 102 and the rear wheel 103 can be completely housed in the body 104 by the drive mechanism (rotating mechanism, link mechanism).
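Because the left and right drive wheels are controlled individually with no steering mechanism, the robot steers like a differential-drive platform. Below is a minimal sketch of standard differential-drive kinematics; the tread and speed values are made-up assumptions.

```python
import math


def diff_drive_step(x, y, heading, v_left, v_right, tread, dt):
    """One integration step of standard differential-drive kinematics.

    v_left / v_right : signed ground speeds of the left and right wheels [m/s]
    tread            : distance between the two drive wheels [m]
    """
    v = (v_right + v_left) / 2.0          # forward speed of the body
    omega = (v_right - v_left) / tread    # yaw rate from the speed difference
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    heading += omega * dt
    return x, y, heading


# Spinning the wheels in opposite directions turns the robot in place.
x, y, th = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, th = diff_drive_step(x, y, th, v_left=-0.1, v_right=0.1,
                               tread=0.3, dt=0.05)
print(round(x, 3), round(y, 3), round(th, 3))
```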
  • a pair of left and right covers 312 is provided on the lower half of the body 104.
  • the cover 312 is made of a flexible and elastic resin material (rubber, silicone rubber, or the like), constitutes a soft body, and can accommodate the front wheel 102.
  • the cover 312 is formed with a slit 313 (opening) that opens from the side surface to the front surface, and the front wheel 102 can be advanced through the slit 313 and exposed to the outside.
  • When the wheels are retracted, the robot 100 cannot move; the body 104 descends and sits on the floor surface F. In this seated state, the flat seating surface 108 (ground contact bottom surface) formed on the bottom of the body 104 contacts the floor surface F.
  • the robot 100 has two arms 106. Although there is a hand at the tip of the arm 106, it does not have a function of grasping an object.
  • the arm 106 can perform simple operations such as raising, bending, waving, and vibrating by driving an actuator described later.
  • the two arms 106 can be individually controlled.
  • a face area 116 is exposed in front of the head of the robot 100.
  • the face area 116 is provided with two eyes 110.
  • the eye 110 is a device capable of displaying an image with a liquid crystal element or an organic EL element, and expressing a line of sight or a facial expression by moving a pupil or an eyelid displayed as an image.
  • a nose 109 is provided in the center of the face area 116.
  • The nose 109 is provided with an analog stick, which can detect all directions of up, down, left, and right as well as being pushed in.
  • the robot 100 is provided with a plurality of touch sensors, and a user's touch can be detected on almost the entire area of the robot 100, such as the head, torso, buttocks, and arms.
  • The robot 100 is equipped with various sensors, such as a microphone array that can identify the direction of a sound source and an ultrasonic sensor. It also has a built-in speaker and can emit simple sounds.
  • a horn 112 is attached to the head of the robot 100.
  • An omnidirectional camera 113 is attached to the horn 112 so that the entire region above the robot 100 can be imaged at once.
  • the horn 112 also has a built-in thermo sensor 115 (thermo camera).
  • the horn 112 is provided with a plurality of modules (not shown) for performing communication using infrared rays, and these modules are annularly installed toward the surroundings. Therefore, the robot 100 can perform infrared communication while recognizing the direction.
  • the horn 112 is provided with a switch for emergency stop, and the user can perform an emergency stop of the robot 100 by pulling out the horn 112.
  • FIG. 2 is a sectional view schematically showing the structure of the robot 100.
  • the body 104 includes a main body frame 310, a pair of arms 106, a pair of covers 312, and an outer cover 314.
  • the body frame 310 includes a head frame 316 and a body frame 318.
  • the head frame 316 has a hollow hemispherical shape and forms the head skeleton of the robot 100.
  • the body frame 318 has a rectangular tube shape and forms a body skeleton of the robot 100.
  • the lower end of the body frame 318 is fixed to the lower plate 334.
  • the head frame 316 is connected to the body frame 318 via the connection mechanism 330.
  • the body frame 318 constitutes the axis of the body 104.
  • the body frame 318 is configured by fixing a pair of left and right side plates 336 to the lower plate 334, and supports the pair of arms 106 and the internal mechanism.
  • the battery 118, the control circuit 342, various actuators, and the like are housed inside the body frame 318.
  • the bottom surface of the lower plate 334 forms the seating surface 108.
  • the body frame 318 has an upper plate 332 on its upper part.
  • a cylindrical support portion 319 having a bottom is fixed to the upper plate 332.
  • the upper plate 332, the lower plate 334, the pair of side plates 336, and the support portion 319 form a body frame 318.
  • the outer diameter of the support portion 319 is smaller than the distance between the left and right side plates 336.
  • the pair of arms 106 is integrally assembled with the annular member 340 to form an arm unit 350.
  • The annular member 340 has a ring shape, and the pair of arms 106 are attached so that they extend radially away from its center line.
  • the annular member 340 is coaxially inserted into the support portion 319 and placed on the upper end surfaces of the pair of side plates 336.
  • the arm unit 350 is supported by the body frame 318 from below.
  • the head frame 316 has a yaw axis 321, a pitch axis 322, and a roll axis 323.
  • Rotation of the head frame 316 around the yaw axis 321 produces a head-shaking motion, rotation around the pitch axis 322 produces nodding, looking-up, and looking-down motions, and rotation (rolling) around the roll axis 323 produces a motion of tilting the neck to the left and right.
  • the position and angle of each axis in the three-dimensional space may change according to the driving mode of the connection mechanism 330.
  • the connection mechanism 330 includes a link mechanism and is driven by a plurality of motors installed on the body frame 318.
  • the body frame 318 houses the wheel drive mechanism 370.
  • the wheel drive mechanism 370 includes a front wheel drive mechanism and a rear wheel drive mechanism that move the front wheel 102 and the rear wheel 103 into and out of the body 104, respectively.
  • the front wheels 102 and the rear wheels 103 function as a “moving mechanism” that moves the robot 100.
  • the front wheel 102 has a direct drive motor in the center thereof. Therefore, the left wheel 102a and the right wheel 102b can be driven individually.
  • the front wheel 102 is rotatably supported by the wheel cover 105, and the wheel cover 105 is rotatably supported by the body frame 318.
  • the pair of covers 312 is provided so as to cover the body frame 318 from the left and right, and has a smooth curved surface shape so that the outline of the body 104 is rounded.
  • a closed space is formed between the body frame 318 and the cover 312, and the closed space is a storage space S for the front wheels 102.
  • the rear wheel 103 is housed in a housing space provided in the lower rear part of the body frame 318.
  • the outer skin 314 covers the body frame 310 and the pair of arms 106 from the outside.
  • The outer skin 314 is thick enough for a person to feel its elasticity and is formed of a stretchable material such as urethane sponge. As a result, when the user holds the robot 100, he or she feels an appropriate softness and can make natural physical contact, as a person does with a pet.
  • the outer cover 314 is attached to the main body frame 310 in such a manner that the cover 312 is exposed.
  • An opening 390 is provided at the upper end of the outer skin 314, and the horn 112 is inserted through the opening 390.
  • a touch sensor is arranged between the body frame 310 and the outer cover 314.
  • a touch sensor is embedded in the cover 312.
  • Each of these touch sensors is a capacitance sensor and detects a touch in almost the entire area of the robot 100.
  • the touch sensor may be embedded in the outer cover 314 or may be provided inside the main body frame 310.
  • the arm 106 has a first joint 352 and a second joint 354, and an arm 356 between both joints and a hand 358 at the tip of the second joint 354.
  • the first joint 352 corresponds to the shoulder joint
  • the second joint 354 corresponds to the wrist joint.
  • a motor is provided in each joint to drive the arm 356 and the hand 358, respectively.
  • the drive mechanism for driving the arm 106 includes these motors and their drive circuit 344.
  • FIG. 3 is a hardware configuration diagram of the robot 100.
  • the robot 100 includes an internal sensor 128, a communication device 126, a storage device 124, a processor 122, a drive mechanism 120, and a battery 118.
  • the drive mechanism 120 includes the connection mechanism 330 and the wheel drive mechanism 370 described above.
  • the processor 122 and the storage device 124 are included in the control circuit 342.
  • Each unit is connected to each other by a power supply line 130 and a signal line 132.
  • the battery 118 supplies power to each unit via the power supply line 130.
  • Each unit sends and receives a control signal via a signal line 132.
  • the battery 118 is a lithium-ion secondary battery and is a power source of the robot 100.
  • the internal sensor 128 is an assembly of various sensors built in the robot 100. Specifically, it is a camera, a microphone array, a distance measuring sensor (infrared sensor), a thermo sensor 115, a touch sensor, an acceleration sensor, an atmospheric pressure sensor, an odor sensor, or the like.
  • The touch sensor covers most of the area of the body 104 and detects a user's touch based on a change in capacitance.
  • the odor sensor is a known sensor that applies the principle that electric resistance changes due to adsorption of molecules that are the origin of odor.
  • the communication device 126 is a communication module that performs wireless communication with various external devices.
  • the storage device 124 includes a non-volatile memory and a volatile memory, and stores a computer program and various setting information.
  • the processor 122 is a means for executing a computer program.
  • the drive mechanism 120 includes a plurality of actuators. In addition to this, a display and speakers are also installed.
  • the drive mechanism 120 mainly controls the wheels and the head.
  • the drive mechanism 120 can change the moving direction and moving speed of the robot 100, and can also move the wheels up and down. When the wheels are lifted, the wheels are completely stored in the body 104, and the robot 100 comes into contact with the floor surface F at the seating surface 108 and becomes seated.
  • the drive mechanism 120 also controls the arm 106.
  • FIG. 4 is a functional block diagram of the robot system 300.
  • the robot system 300 includes a robot 100, a server 200, and a plurality of external sensors 114.
  • Each constituent element of the robot 100 and the server 200 is realized by hardware, including a computing unit such as a CPU (Central Processing Unit) and various coprocessors, a storage device such as memory and storage, and wired or wireless communication lines connecting them, and by software that is stored in the storage device and supplies processing instructions to the computing unit.
  • the computer program may be configured by a device driver, an operating system, various application programs located in their upper layers, and a library that provides common functions to these programs. Each block described below is not a hardware-based configuration but a function-based block.
  • Some of the functions of the robot 100 may be realized by the server 200, and some or all of the functions of the server 200 may be realized by the robot 100.
  • a plurality of external sensors 114 are installed in advance in the house.
  • the server 200 manages the external sensor 114 and provides the detection value acquired by the external sensor 114 to the robot 100 as needed.
  • the robot 100 determines a basic action based on the information obtained from the internal sensor 128 and the plurality of external sensors 114.
  • the external sensor 114 is for reinforcing the sensory organs of the robot 100, and the server 200 is for reinforcing the processing capacity of the robot 100.
  • the communication device 126 of the robot 100 may periodically communicate with the server 200, and the server 200 may be responsible for the process of identifying the position of the robot 100 by the external sensor 114 (see also Patent Document 2).
  • the server 200 includes a communication unit 204, a data processing unit 202 and a data storage unit 206.
  • the communication unit 204 is in charge of communication processing with the external sensor 114 and the robot 100.
  • the data storage unit 206 stores various data.
  • the data processing unit 202 executes various processes based on the data acquired by the communication unit 204 and the data stored in the data storage unit 206.
  • the data processing unit 202 also functions as an interface for the communication unit 204 and the data storage unit 206.
  • the data storage unit 206 includes a motion storage unit 232 and a personal data storage unit 218.
  • The robot 100 has a plurality of motion patterns (motions). Various motions are defined, such as swinging the arm 106, approaching the owner while meandering, and staring at the owner with its head tilted.
  • the motion storage unit 232 stores a “motion file” that defines the motion control content. Each motion is identified by a motion ID. The motion file is also downloaded to the motion storage unit 160 of the robot 100. Which motion is executed may be determined by the server 200 or the robot 100.
  • Most of the motions of the robot 100 are configured as compound motions including a plurality of unit motions.
  • For example, a motion may be expressed as a combination of a unit motion of turning toward the owner, a unit motion of approaching while raising a hand, a unit motion of approaching while shaking the body, and a unit motion of sitting while raising both hands.
  • The combination of these four unit motions realizes a motion of “approaching the owner, raising a hand on the way, and finally sitting down while shaking the body”.
  • In a motion file, the rotation angle, angular velocity, and the like of each actuator provided in the robot 100 are defined along a time axis. Various motions are expressed by controlling each actuator over time according to the motion file (actuator control information).
  • the transition time when changing from the previous unit motion to the next unit motion is called “interval".
  • the interval may be defined according to the time required to change the unit motion and the content of the motion.
  • the length of the interval is adjustable.
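One way to picture a motion file is as a list of unit motions, each holding time-stamped actuator targets, with an adjustable interval inserted between consecutive unit motions. The following sketch shows a plausible data layout; the field names and numbers are assumptions, not the format actually used by the robot.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Keypoint:
    t: float                      # seconds from the start of the unit motion
    targets: Dict[str, float]     # actuator name -> target angle [deg]


@dataclass
class UnitMotion:
    name: str
    keypoints: List[Keypoint]


@dataclass
class Motion:
    motion_id: str
    units: List[UnitMotion]
    interval: float = 0.5         # transition time between unit motions [s]

    def duration(self) -> float:
        body = sum(u.keypoints[-1].t for u in self.units)
        return body + self.interval * (len(self.units) - 1)


# "Approach the owner, raise a hand on the way, then sit down" as unit motions.
approach = Motion(
    motion_id="M001",
    units=[
        UnitMotion("turn_to_owner", [Keypoint(0.0, {"neck_yaw": 0}),
                                     Keypoint(0.8, {"neck_yaw": 30})]),
        UnitMotion("raise_hand",    [Keypoint(0.0, {"arm_left": 0}),
                                     Keypoint(0.6, {"arm_left": 60})]),
        UnitMotion("sit_down",      [Keypoint(0.0, {"wheel_lift": 0}),
                                     Keypoint(1.0, {"wheel_lift": 100})]),
    ],
    interval=0.4,                 # the interval is adjustable
)
print(approach.duration())        # 3.2
```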
  • The settings related to the behavior control of the robot 100, such as which motion to select and when, and the output adjustment of each actuator in realizing a motion, are collectively referred to as “action characteristics”.
  • the behavior characteristic of the robot 100 is defined by a motion selection algorithm, a motion selection probability, a motion file, and the like.
  • the motion storage unit 232 stores, in addition to motion files, a motion selection table that defines motions to be executed when various events occur.
  • In the motion selection table, one or more motions and their selection probabilities are associated with each event.
  • the personal data storage unit 218 stores user information. Specifically, master information indicating intimacy with the user and physical/behavioral characteristics of the user is stored. Other attribute information such as age and gender may be stored.
  • the robot 100 has an internal parameter called intimacy for each user.
  • When the robot 100 recognizes an action favorable to itself, such as being hugged or spoken to, its intimacy with that user increases.
  • The degree of intimacy is low toward a user who has no involvement with the robot 100, a user who behaves violently, or a user whom the robot rarely encounters.
  • the data processing unit 202 includes a position management unit 208, a recognition unit 212, an operation control unit 222, an intimacy management unit 220, and a state management unit 244.
  • the position management unit 208 specifies the position coordinates of the robot 100.
  • the state management unit 244 manages various internal parameters such as the charging rate, the internal temperature, and various physical states such as the processing load of the processor 122.
  • the state management unit 244 manages various emotion parameters indicating the emotion (loneliness, curiosity, desire for approval, etc.) of the robot 100. These emotional parameters are always fluctuating.
  • the movement target point of the robot 100 changes according to the emotion parameter. For example, when loneliness is increasing, the robot 100 sets the place where the user is as the movement target point.
  • Emotion parameters change over time.
  • various emotion parameters also change due to a response action described later.
  • For example, the emotion parameter indicating loneliness decreases when the owner gives a hug, and gradually increases when the owner has not been visually recognized for a long time.
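A toy version of this dynamic might clamp a loneliness parameter to [0, 1], lower it when the owner is seen or gives a hug, and raise it otherwise; the rates below are arbitrary assumptions.

```python
def update_loneliness(loneliness: float, dt: float,
                      owner_visible: bool, hugged: bool) -> float:
    """Toy emotion-parameter update; growth and decay rates are arbitrary."""
    if hugged:
        loneliness -= 0.5 * dt          # a hug relieves loneliness quickly
    elif owner_visible:
        loneliness -= 0.05 * dt         # seeing the owner relieves it slowly
    else:
        loneliness += 0.02 * dt         # loneliness grows while alone
    return min(1.0, max(0.0, loneliness))


lonely = 0.3
for _ in range(600):                    # ten minutes alone, one step per second
    lonely = update_loneliness(lonely, dt=1.0, owner_visible=False, hugged=False)
print(round(lonely, 2))                 # 1.0 (capped)
```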
  • the recognition unit 212 recognizes the external environment.
  • the recognition of the external environment includes various recognitions such as recognition of weather and season based on temperature and humidity, and recognition of shade (safety zone) based on light intensity and temperature.
  • the recognition unit 156 of the robot 100 acquires various environmental information by the internal sensor 128, performs primary processing on the environmental information, and transfers the environmental information to the recognition unit 212 of the server 200.
  • The recognition unit 156 of the robot 100 extracts from the image a region corresponding to a moving object, in particular a person or an animal, and from the extracted region extracts a “feature vector” as a set of feature amounts indicating the physical features and behavioral features of the moving object.
  • the feature vector component (feature amount) is a numerical value that quantifies various physical and behavioral features. For example, the width of the human eye is digitized in the range of 0 to 1 to form one feature vector component.
  • the method of extracting the feature vector from the captured image of a person is an application of a known face recognition technique.
  • the robot 100 transmits the feature vector to the server 200.
  • The recognition unit 212 of the server 200 determines which person the imaged user corresponds to (user identification processing) by comparing the feature vector extracted from the image captured by the robot 100's built-in camera with the feature vectors of users (clusters) registered in advance in the personal data storage unit 218. Further, the recognition unit 212 estimates the user's emotion by image recognition of the user's facial expression. The recognition unit 212 also performs user identification processing on moving objects other than people, such as pet cats and dogs.
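In effect, user identification is a nearest-neighbour comparison between the extracted feature vector and the registered ones. A minimal sketch follows; the cosine metric and the similarity threshold are assumptions.

```python
import math
from typing import Dict, List, Optional


def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def identify_user(feature: List[float],
                  registered: Dict[str, List[float]],
                  threshold: float = 0.9) -> Optional[str]:
    """Return the registered user whose feature vector is most similar,
    or None if nobody is similar enough (an unknown person)."""
    best_name, best_sim = None, threshold
    for name, ref in registered.items():
        sim = cosine_similarity(feature, ref)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name


known = {"mother": [0.9, 0.2, 0.61], "father": [0.1, 0.8, 0.3]}
print(identify_user([0.88, 0.25, 0.6], known))   # mother
print(identify_user([0.5, 0.5, 0.5], known))     # None -> treat as unknown
```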
  • the recognition unit 212 recognizes various kinds of response actions performed on the robot 100 and classifies them into pleasant and unpleasant actions.
  • the recognition unit 212 also recognizes the owner's response to the action of the robot 100, and classifies the action into an affirmative/negative response.
  • Whether a response action is pleasant or unpleasant is determined by whether the user's action would be pleasant or unpleasant for a living creature. For example, being hugged is a pleasant act for the robot 100, and being kicked is an unpleasant act for the robot 100.
  • the affirmative/negative reaction is determined by whether the user's response action indicates the user's pleasant or unpleasant feeling. Hugging is an affirmative reaction indicating the user's pleasant feeling, and being kicked is a negative reaction indicating the user's unpleasant feeling.
  • the operation control unit 222 of the server 200 cooperates with the operation control unit 150 of the robot 100 to determine the motion of the robot 100.
  • the operation control unit 222 of the server 200 creates a movement target point of the robot 100 and a movement route therefor.
  • the operation control unit 222 may create a plurality of movement routes and select any one of the movement routes.
  • the motion control unit 222 selects a motion of the robot 100 from a plurality of motions in the motion storage unit 232.
  • A selection probability is associated with each motion for each situation. For example, a selection method may be defined in which motion A is executed with a probability of 20% when the owner performs a pleasant action, and motion B is executed with a probability of 5% when the temperature reaches 30 degrees or more.
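A sketch of such probability-weighted selection is shown below; the table contents mirror the example above, and with the remaining probability no motion is triggered. The motion IDs and numbers are illustrative.

```python
import random
from typing import Dict, List, Optional, Tuple

# event -> list of (motion_id, selection probability); values are illustrative.
MOTION_SELECTION_TABLE: Dict[str, List[Tuple[str, float]]] = {
    "owner_pleasant_action": [("motion_A", 0.20), ("motion_C", 0.10)],
    "temperature_over_30":   [("motion_B", 0.05)],
}


def select_motion(event: str, rng: random.Random) -> Optional[str]:
    """Pick a motion for the event according to its selection probability."""
    r = rng.random()
    cumulative = 0.0
    for motion_id, prob in MOTION_SELECTION_TABLE.get(event, []):
        cumulative += prob
        if r < cumulative:
            return motion_id
    return None   # with the remaining probability, no motion is triggered


rng = random.Random(0)
picks = [select_motion("owner_pleasant_action", rng) for _ in range(1000)]
print(picks.count("motion_A") / 1000)   # roughly 0.2
```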
  • The intimacy management unit 220 manages the intimacy for each user. As described above, intimacy is registered in the personal data storage unit 218 as part of the personal data. When a pleasant action is detected, the intimacy management unit 220 increases the intimacy toward that owner; when an unpleasant action is detected, the intimacy decreases. Moreover, the intimacy toward an owner who has not been visually recognized for a long period gradually decreases.
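The intimacy bookkeeping described here could be sketched as follows; the step sizes, bounds, and decay rate are arbitrary assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class IntimacyManager:
    """Toy per-user intimacy bookkeeping (0..100); the constants are assumptions."""
    intimacy: Dict[str, float] = field(default_factory=dict)

    def on_pleasant_action(self, user: str) -> None:
        self.intimacy[user] = min(100.0, self.intimacy.get(user, 50.0) + 5.0)

    def on_unpleasant_action(self, user: str) -> None:
        self.intimacy[user] = max(0.0, self.intimacy.get(user, 50.0) - 10.0)

    def on_day_without_seeing(self, user: str) -> None:
        # Intimacy slowly decays for owners who have not been seen for a while.
        self.intimacy[user] = max(0.0, self.intimacy.get(user, 50.0) - 1.0)


mgr = IntimacyManager()
mgr.on_pleasant_action("mother")     # hugged -> 55
mgr.on_unpleasant_action("visitor")  # kicked -> 40
for _ in range(7):
    mgr.on_day_without_seeing("father")
print(mgr.intimacy)                  # {'mother': 55.0, 'visitor': 40.0, 'father': 43.0}
```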
  • the robot 100 includes a communication unit 142, a data processing unit 136, a data storage unit 148, an internal sensor 128, and a drive mechanism 120.
  • the communication unit 142 corresponds to the communication device 126 (see FIG. 3) and is in charge of communication processing with the external sensor 114, the server 200, and the other robot 100.
  • the data storage unit 148 stores various data.
  • the data storage unit 148 corresponds to the storage device 124 (see FIG. 3).
  • the data processing unit 136 executes various processes based on the data acquired by the communication unit 142 and the data stored in the data storage unit 148.
  • the data processing unit 136 corresponds to the processor 122 and a computer program executed by the processor 122.
  • the data processing unit 136 also functions as an interface for the communication unit 142, the internal sensor 128, the drive mechanism 120, and the data storage unit 148.
  • the data storage unit 148 includes a motion storage unit 160 that defines various motions of the robot 100.
  • Various motion files are downloaded from the motion storage unit 232 of the server 200 to the motion storage unit 160 of the robot 100.
  • the motion is identified by the motion ID.
  • The operation timing, operation duration, operation direction, and the like of the various actuators are defined in time series in the motion file.
  • Various data may be downloaded from the personal data storage unit 218 to the data storage unit 148.
  • the data processing unit 136 includes a recognition unit 156 and an operation control unit 150.
  • the operation control unit 150 of the robot 100 cooperates with the operation control unit 222 of the server 200 to determine the motion of the robot 100.
  • The server 200 may determine some motions and the robot 100 may determine the others. Alternatively, the robot 100 may normally determine motions, with the server 200 determining them when the processing load of the robot 100 is high.
  • the base motion may be determined in the server 200 and the additional motion may be determined in the robot 100. How the server 200 and the robot 100 share the motion determination process may be designed according to the specifications of the robot system 300.
  • the operation control unit 150 of the robot 100 instructs the drive mechanism 120 to execute the selected motion.
  • the drive mechanism 120 controls each actuator according to the motion file.
  • The motion control unit 150 can also execute a motion of raising both arms 106 as a gesture asking for a hug when a user with high intimacy is nearby, and, when it grows tired of the hug, can express a motion of disliking the hug by alternately repeating reverse rotation and stopping of the left and right front wheels 102 while they remain retracted.
  • the drive mechanism 120 drives the front wheel 102, the arm 106, and the neck (head frame 316) in accordance with an instruction from the operation control unit 150 to cause the robot 100 to express various motions.
  • the recognition unit 156 of the robot 100 interprets external information obtained from the internal sensor 128.
  • the recognition unit 156 can perform visual recognition (visual part), odor recognition (olfactory part), sound recognition (auditory part), and tactile recognition (tactile part).
  • the recognition unit 156 extracts the feature vector from the captured image of the moving object.
  • the feature vector is a set of parameters (feature amounts) indicating the physical features and behavioral features of the moving object.
  • the recognition unit 156 identifies the user from the feature vector based on a known technique described in Patent Document 2 or the like.
  • The recognition unit 156 of the robot 100 selects and extracts the information necessary for recognition, while interpretation processing such as judgment is executed by the recognition unit 212 of the server 200.
  • The recognition processing may be performed only by the recognition unit 212 of the server 200, only by the recognition unit 156 of the robot 100, or by both sharing roles as described above.
  • When a strong impact is applied to the robot 100, the recognition unit 156 recognizes it with the touch sensor and the acceleration sensor, and the recognition unit 212 of the server 200 recognizes that a nearby user has performed a “violent act”.
  • The recognition unit 212 of the server 200 may also recognize that a “calling action” has been performed toward the robot. Further, when a temperature close to body temperature is detected, it is recognized that the user has performed a “contact action”, and when an upward acceleration is detected while contact is recognized, it is recognized that a “hug” has been performed.
  • Physical contact may be sensed when the user lifts the body 104, or the hug may be recognized when the load applied to the front wheel 102 is reduced.
  • the robot 100 acquires the action of the user as physical information by the internal sensor 128, and the recognition unit 212 of the server 200 determines the comfort/discomfort.
  • the recognition unit 212 of the server 200 also executes a user identification process based on the feature vector.
  • the recognition unit 212 of the server 200 recognizes various responses of the user to the robot 100. Some typical response actions among various response actions are associated with pleasantness or discomfort, affirmation or denial. In general, most of the pleasant behaviors are affirmative reactions, and most of the offensive behaviors are negative responses. Pleasure/discomfort is related to intimacy, and positive/negative reactions influence behavior selection of the robot 100.
  • the intimacy degree management unit 220 of the server 200 changes the intimacy degree with respect to the user according to the response action recognized by the recognition unit 156.
  • the degree of intimacy with respect to the user who has performed a pleasant act increases, and the degree of intimacy with respect to the user who has performed an unpleasant act decreases.
  • Each function of the server 200 is realized by loading a program that realizes the function into memory and instantiating it.
  • Various processing by the robot 100 is supplemented by the processing capacity of the server 200.
  • The server 200 can be used as a resource of the robot 100, and how the resources of the server 200 are used is determined dynamically according to requests from the robot 100. For example, when the robot 100 needs to continuously generate complex motions according to detection values from a large number of touch sensors, the processing capacity of the processor 122 in the robot 100 may be assigned preferentially to motion selection and generation, while the recognition unit 212 of the server 200 performs the image recognition of the surrounding situation. In this way, the various processes of the robot system 300 can be distributed between the robot 100 and the server 200.
  • a single server 200 can also control multiple robots 100.
  • In this case, each function of the server 200 may be instantiated independently for each robot 100.
  • the server 200 may prepare the recognition unit 212 for the robot 100B separately from the recognition unit 212 (instance object) for the robot 100A.
  • the robot 100 of the present embodiment acquires a large number of captured images (still images) by periodically capturing an image of the surroundings with the omnidirectional camera 113.
  • the robot 100 forms a memory (hereinafter referred to as “image memory”) based on the captured image.
  • Image memory is a collection of multiple keyframes.
  • the key frame is distribution information of feature points (feature amount) in the captured image.
  • The robot 100 of the present embodiment forms keyframes using a graph-based SLAM (Simultaneous Localization and Mapping) technology based on image features, more specifically a SLAM technique based on ORB (Oriented FAST and Rotated BRIEF) features (see Patent Document 3).
  • the robot 100 periodically forms a key frame while moving to form an aggregate of key frames, in other words, an image memory as an image feature distribution.
  • The robot 100 estimates its current location by comparing the keyframe acquired at the current point with the many keyframes it already holds. That is, the robot 100 performs “spatial recognition” by comparing the captured image it is currently viewing with captured images (memories) it has viewed before, matching the present situation against past memories.
  • the image memory formed as a set of feature points is a so-called map.
  • The robot 100 updates the map while moving and estimating its current position.
  • the basic configuration of the robot 100 is premised on recognizing the position by the external sensor 114 instead of the key frame.
  • the robot 100 of the present embodiment will be described as recognizing a place based only on a key frame.
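The keyframe comparison itself can be pictured with off-the-shelf ORB features, for example via OpenCV: extract ORB descriptors for the current view and count good descriptor matches against each stored keyframe. This is a simplified localization sketch, not the graph-based SLAM of the publication, and the image paths are placeholders.

```python
import cv2  # pip install opencv-python

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)


def descriptors(path: str):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return None
    _, des = orb.detectAndCompute(img, None)
    return des


def best_keyframe(current_path: str, keyframe_paths):
    """Return the stored keyframe most similar to the current view."""
    cur = descriptors(current_path)
    best, best_score = None, -1
    for path in keyframe_paths:
        ref = descriptors(path)
        if cur is None or ref is None:
            continue
        matches = matcher.match(cur, ref)
        # Count only reasonably close descriptor matches.
        score = sum(1 for m in matches if m.distance < 40)
        if score > best_score:
            best, best_score = path, score
    return best, best_score


# Placeholder image paths; on the robot these would be periodic camera frames.
print(best_keyframe("current.png", ["kitchen.png", "living_room.png"]))
```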
  • the robot 100 in this embodiment includes a “mode setting unit” for setting various modes.
  • <Baby watching> FIGS. 5 to 7 are schematic diagrams for explaining action scenes in which a plurality of robots 100 watch over a baby.
  • A baby (hereinafter referred to as the “subject”), who is the target of watching, is sleeping in a bouncer (Fig. 5A).
  • the two robots 100 recognize the presence and position of a baby (infant) through image recognition.
  • the two robots 100 may share the baby's position through mutual communication.
  • the two robots 100 determine the gazing point of each robot from the position of the baby. Through the mutual communication, the two robots 100 correct the gazing points so that the gazing points of the two robots are the same or within a predetermined range.
  • The two robots 100 control their movement and body parts so that their heads face the respective determined gazing points, realizing an action in which the two robots 100 peer at the baby (FIG. 5B).
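The gazing-point agreement can be as simple as exchanging each robot's estimate of the baby's position and, when the two estimates differ by more than a tolerance, replacing both with their midpoint; the tolerance value below is an assumption.

```python
import math
from typing import Tuple

Point = Tuple[float, float]


def agree_gaze_points(p_a: Point, p_b: Point,
                      tolerance: float = 0.2) -> Tuple[Point, Point]:
    """If the two robots' gazing points differ by more than the tolerance,
    replace both with their midpoint so they look at (almost) the same spot."""
    dist = math.hypot(p_a[0] - p_b[0], p_a[1] - p_b[1])
    if dist <= tolerance:
        return p_a, p_b
    mid = ((p_a[0] + p_b[0]) / 2.0, (p_a[1] + p_b[1]) / 2.0)
    return mid, mid


def heading_to(robot_pos: Point, gaze: Point) -> float:
    """Yaw angle (radians) that points the robot's head at the gazing point."""
    return math.atan2(gaze[1] - robot_pos[1], gaze[0] - robot_pos[0])


gaze_a, gaze_b = agree_gaze_points((1.0, 2.1), (1.4, 1.8))
print(gaze_a, round(heading_to((0.0, 0.0), gaze_a), 2))
```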
  • One of the two robots, robot 100A, keeps watching over the baby by directing its head toward the baby according to the baby's position.
  • the robot 100A keeps a constant distance from the baby so as not to get near the baby.
  • The other robot, 100B, divides roles through communication with the robot 100A and, after a while, stops watching and starts playing (FIG. 5C).
  • the mother is cooking in the kitchen, leaving the robots 100 to watch over the baby (Fig. 5D).
  • Mom has a smartphone (communication terminal) in the kitchen (Fig. 6A).
  • the robot 100A (during watching) captures an image of the baby with the omnidirectional camera 113 and continues to send the captured image to the smartphone as a live image.
  • On the smartphone, the region of the omnidirectional image in which the baby appears is displayed. At this time, an image corrected so that the distortion of the omnidirectional image is projected onto a plane may be displayed on the smartphone.
  • the robot 100B executes an interference motion.
  • The robot 100B executes the interference motion when at least one of the following conditions is satisfied: a sound such as crying is picked up, or image analysis determines that the baby is in a specific state such as crying.
  • The interference motion is a motion for attracting the baby's attention, such as outputting a predetermined sound that catches the baby's attention, outputting a sound like a lullaby, swinging the arm 106, shaking the head or body, or rocking the bouncer. The interference motion distracts the baby.
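A sketch of the trigger logic for the interference motion, assuming placeholder flags for the audio and image conditions and an illustrative list of motions:

```python
import random

INTERFERENCE_MOTIONS = ["play_lullaby", "wave_arm", "shake_head", "rock_bouncer"]


def should_interfere(cry_heard: bool, crying_in_image: bool) -> bool:
    # Either the audio condition or the image condition alone is enough.
    return cry_heard or crying_in_image


def run_interference() -> str:
    # Pick one attention-catching motion; a real robot would dispatch it
    # to the motion control unit as an ordinary motion ID.
    motion = random.choice(INTERFERENCE_MOTIONS)
    print("executing", motion)
    return motion


if should_interfere(cry_heard=True, crying_in_image=False):
    run_interference()
```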
  • The robot 100A may also perform the interference motion.
  • A live image relayed from the robot 100A shows the crying baby on the mother's smartphone (Fig. 6D).
  • The video from before and after the baby starts crying may be made persistent by storing it on an HDD or the like.
  • The mother, surprised, grabs the smartphone (Fig. 7A).
  • The mother hurries to the nursery and rushes to the baby (Fig. 7B).
  • the mother holds the baby (Fig. 7C).
  • the robots 100 are beside the mother and the baby and stare at them.
  • the two robots recognize the respective positions of the mother and baby by image analysis, and set the gazing point from the respective positions.
  • the two robots 100 share the gazing point via communication and correct the mutual gazing points so that they are within the same range or within a predetermined range as necessary.
  • the two robots 100 perform an action of directing their heads to the respective gazing points.
  • The baby falls asleep (FIG. 7D). When the robots 100 confirm that the baby has fallen asleep, they move away from the bouncer (FIG. 7E).
  • The robot 100 may search for the baby in order to start the above watching action when the mother manually sets the “watching mode” via an input switch or a smartphone.
  • the mode setting unit may automatically set the watching mode.
  • the robot 100 may automatically shift to the watching mode when an infant is detected by image recognition and there is no guardian near the infant.
  • the case where there is no guardian near the baby may be, for example, a case where an image of a person of a predetermined age or more is not detected, or a case where an image of a person associated with the baby is not detected.
  • all the robots 100 may be set to the watching mode, or only some of the robots 100 may be set to the watching mode.
  • The robot 100 may shift to the watching mode under the watching condition that a “baby” is detected in the captured image and no other user, or no specific user such as the mother or father, is detected.
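Pulling the conditions above together, the decision to enter the watching mode might look like the following sketch; the guardian age threshold and the detection labels are placeholder assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class DetectedPerson:
    label: str                       # e.g. "baby", "mother", "unknown_adult"
    estimated_age: Optional[float] = None


def watching_condition(people: List[DetectedPerson],
                       manual_request: bool = False,
                       voice_command: bool = False,
                       guardian_age: float = 16.0) -> bool:
    """True when the robot should enter the watching mode."""
    if manual_request or voice_command:          # set via switch, app, or voice
        return True
    baby_seen = any(p.label == "baby" for p in people)
    guardian_seen = any(p.estimated_age is not None and
                        p.estimated_age >= guardian_age for p in people)
    return baby_seen and not guardian_seen       # baby alone -> watch automatically


scene = [DetectedPerson("baby", 0.5)]
print(watching_condition(scene))                                   # True
print(watching_condition(scene + [DetectedPerson("mother", 34)]))  # False
```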
  • The robot 100 that is watching may limit its action range to a range in which the baby being watched can be visually recognized. Alternatively, the robot 100 may refrain from leaving the room the baby is in while watching. At the very least, it is desirable that the robot 100 keep its eyes on the baby (for example, always keep the baby within the camera's angle of view) while watching.
  • the robot 100 pays attention to the state of the target person by using sensors that measure the external environment, such as the omnidirectional camera 113, a microphone, and a temperature sensor.
  • The robot 100 may be controlled to recognize the position of the baby using sensors that measure the external environment, such as the omnidirectional camera 113, a microphone, and a temperature sensor, so that it can turn itself toward the baby. Further, the robot 100 may reduce the number of points at which the baby could come into contact with it, for example by retracting the wheels when the baby comes within a predetermined distance.
  • The robot 100 executes a motion for actively engaging with the baby when a predetermined warning condition is satisfied.
  • the “warning information” may be transmitted to another communication terminal such as a mother's smartphone.
  • The warning condition can be set arbitrarily; for example, it may be a situation in which the baby (the person being watched) is at risk, such as crying, approaching stairs, trying to go outside, playing with a small object, or falling.
  • the motion for actively engaging may be the above-described interference motion.
  • the robot 100A may notify the robot 100B of the shift to the watching mode.
  • the robot 100B may also shift to the watching mode, or may approach the positions of the robot 100A and the baby.
  • The robot 100 suppresses the amount of operation of the drive mechanism 120 (in particular, the mechanisms related to movement and posture adjustment) more than in the normal mode so that the baby is not woken by the robot 100's operating sound; it operates quietly so as to avoid making operating noise.
  • When the baby is awake, the robot 100B may perform, at the baby's side, motions that may interest the baby, such as running around the bouncer or dancing. For example, the baby can be determined to be awake when the baby's voice is detected by voice analysis or when an image of the baby with its eyes open is detected by image analysis. No such motion is performed while the baby is sleeping.
  • In this way, the robot 100B executes an appropriate motion according to the state of the person being watched. A baby is also thought to feel at ease when several robots 100 gather around it.
  • The robot 100 may shift to the watching mode when it receives a specific voice command, such as “Watch the baby”, from a specific user such as the mother via the microphone.
  • the robot 100 may perform a confirmation action such as circling around the baby, facing the baby, or pointing the hand 358 to the baby based on the position of the baby detected by image analysis or the like.
  • By this confirmation action, the mother who gave the instruction can determine whether or not the robot 100 has mistaken the person to be watched. It is considered that the mother will utter a positive reply such as “I'll leave it to you” if the target is correct, and a negative reply such as “No” if it is wrong. Therefore, the robot 100 may determine whether or not the target of watching is correct based on the mother's reply picked up by the microphone after the confirmation action.
  • While watching, the robot 100 keeps its line of sight on the baby, for example by peering at the baby. The baby can feel reassured, believing that it is being watched over by the robot 100. Seeing the robot 100 earnestly watching over the baby can also foster the mother's affection and trust toward the robot 100. The watching action additionally serves as an appeal to the user (observer) that “the robot 100 is working hard”.
  • The robot 100B may play freely while the robot 100A is watching.
  • the action range of the robot 100B is limited to the range in which the baby or the robot 100A can be visually recognized.
  • The robot 100A may send a live close-up image of the baby to the smartphone, and the robot 100B may send a live wide-angle image of the robot 100A watching the baby to the smartphone.
  • The robot 100B may further limit its action range so as not to interfere with the robot 100A's imaging of the baby.
  • The robot 100B may identify the positions of the baby and the robot 100A and act so as not to stand on the straight line connecting the baby and the robot 100A.
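Staying off the line of sight reduces to a point-to-segment distance check: if robot 100B is too close to the segment between robot 100A and the baby, it steps aside perpendicular to that segment. The positions and the clearance value below are illustrative.

```python
import math
from typing import Tuple

Point = Tuple[float, float]


def distance_to_segment(p: Point, a: Point, b: Point) -> float:
    """Distance from point p to the line segment a-b."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)


def sidestep_if_blocking(robot_b: Point, robot_a: Point, baby: Point,
                         clearance: float = 0.5) -> Point:
    """Move robot 100B sideways if it sits on the robot 100A - baby line."""
    if distance_to_segment(robot_b, robot_a, baby) >= clearance:
        return robot_b
    dx, dy = baby[0] - robot_a[0], baby[1] - robot_a[1]
    norm = math.hypot(dx, dy) or 1.0
    # Step perpendicular to the line of sight by the required clearance.
    return (robot_b[0] - dy / norm * clearance, robot_b[1] + dx / norm * clearance)


print(sidestep_if_blocking(robot_b=(1.0, 0.1), robot_a=(0.0, 0.0), baby=(2.0, 0.0)))
```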
  • the robot 100A may notify the robot 100B of a command to request movement.
  • the robot 100B moves according to the command.
  • When it is detected by image analysis or voice analysis that the baby is in a predetermined state, for example when the baby begins to cry, the robot 100A notifies the smartphone that the baby's state has changed, in addition to sending the live image.
  • the robot 100B may perform an interference motion.
  • the robot 100A notifies the smartphone of a message that briefly indicates a state associated with a warning condition such as "the baby is crying" or "the baby has started to crawl".
  • the robot 100B may perform a dance as the interference motion, or may play music such as a lullaby with an internal audio player. In any case, the robot 100B soothes the baby by performing a distracting action on the baby.
  • When the mother returns (see FIG. 7B), the robot 100B stops the interference motion.
  • the robot 100B stops the interference motion when "Mother” can be confirmed in the captured image.
  • The robot 100B may also stop the interference motion when it recognizes, through the microphone, that the mother has uttered a keyword indicating completion of the watching action (hereinafter referred to as a “completion command”), such as “thank you” or “it's okay”. At this time, it may be determined whether the completion command was uttered by the person who instructed the robot 100A to perform the watching action, and the watching action may be completed only if it was.
  • the robot 100 recognizes the baby from the captured image.
  • The “baby” detection method may be realized by applying known face recognition technology. Further, when the baby is moving, the robot 100 may adjust its direction of movement so that the baby can always be visually recognized, regardless of whether or not it is in the watching mode.
  • The robot 100A and the robot 100B may take turns watching. For example, after the robot 100A has watched for 10 minutes, the robot 100B may shift to the watching mode and the robot 100A may act freely. When a plurality of robots 100 watch alternately, the baby is less likely to get bored. In addition, it becomes easier for a third party to feel as if the robot 100A and the robot 100B are watching over the baby together.
  • When the robot 100 watches over the baby, it becomes easier for a mother who is busy raising a child to concentrate on housework. If she does household chores such as laundry away from the baby, she may not notice immediately when the baby starts crying, and the baby may be crying in earnest before she realizes it. Such a situation places a great burden on the mother. With the robot 100 watching over the baby, signs that the baby is about to cry, such as fussing, can be detected quickly.
  • <House-sitting> FIGS. 8 to 11 are schematic diagrams for explaining action scenes in which the robot 100 looks after the house while the owner is away.
  • a home in which two robots 100A and 100B and one female owner live.
  • the female owner goes to work (Fig. 8A).
  • The female owner calls out to the nearby robot 100A, “Look after the house while I'm away.”
  • The robot 100A recognizes that the female owner is going out by hearing these words (picking up the voice through the microphone), and shifts to the “house-sitting mode” (FIG. 8B).
  • the female owner goes out from the front door (Fig. 8C).
  • The robot 100A moves to the entrance and executes a predetermined motion to see the female owner off.
  • Both the robot 100A and the robot 100B may see her off, or only a robot 100 whose intimacy with the female owner is at or above a predetermined value may see her off.
  • The front door opens while the robots are house-sitting (Fig. 9A).
  • When the robots 100 recognize by voice analysis or image analysis that the front door has opened, the two robots 100 move to the front door in anticipation of the female owner's return (FIG. 9B).
  • However, the person identified by image analysis or the like is not the female owner.
  • The robots 100 approach close enough to photograph the suspicious person (FIG. 9D).
  • The robots 100 shift to the alert mode. After shifting to the alert mode, the robots 100 may move near the suspicious person and continue photographing the suspicious person.
  • The female owner is working at her office.
  • the robot 100 transmits the photographed image of the suspicious individual to the smartphone of the female owner (FIG. 10A).
  • The female owner, who is at work, learns from her smartphone that the robot 100 has found a suspicious person (FIG. 10B).
  • The suspicious person is actually the female owner's mother; the robots 100 do not know the female owner's mother.
  • the woman notifies the robots 100 from the smartphone that the person is not a suspicious person.
  • Alternatively, the female owner may inform the robots 100 that the person is not suspicious by saying, for example, “That's my mom, so don't worry.”
  • the robot 100 releases the alert mode.
  • The mother cooks a home-made meal (Fig. 10C).
  • the female owner (daughter) returns home and chats with her mother while eating home cooking (Fig. 11A).
  • the robots 100 are playing next to each other (Figs. 11B and 11C).
  • In this way, when a suspicious person is detected, the robot 100 photographs the suspicious person and transmits the captured image of the suspicious person to the smartphone of a specific user.
  • The female owner can entrust the security of the home in her absence to the robot 100 with peace of mind.
  • Through image analysis and voice recognition, the robot 100 detects various events that occur while the owner is away, such as an earthquake, something breaking, a gas leak, or a visitor such as a courier (a call from the intercom), and notifies the owner's smartphone with a message describing the event. Further, the robot 100 may record such events as a life log, and the owner may confirm the events that occurred in her absence by checking the life log via the smartphone after returning home.
  • The mode setting unit of the robot 100 may set the house-sitting mode based on an operation input from the user.
  • The robot 100 may also automatically switch to the house-sitting mode when it detects the specific event of the user going out through the front door.
  • The robot 100 may also automatically switch to the house-sitting mode when the user has not been visually recognized in the room for a certain time or longer.
  • The robot 100 may set its action range to positions from which a suspicious person can be photographed, so that the suspicious person's behavior is not overlooked. Moreover, to avoid violence from the suspicious person, the robot may act at a distance from the suspicious person.
  • When notified that the person is not suspicious, the robot 100 cancels the alert mode and returns to the house-sitting mode. After that, the robot 100 may interact normally with the person who was previously treated as suspicious. Further, the appearance of the previously unidentified person may be stored, and parameters such as intimacy may be managed for that person. The robot may also follow that person around. In the example shown in FIGS. 8 to 11, the mother is initially treated with caution by the robots 100, but after the female owner (daughter) gives the robots 100 an approval notice, the robots 100 begin to cling to the mother. The mother can feel welcomed when the robots 100 become attached to her.
  • a user at a remote location may send an indoor confirmation instruction to the robot 100 via a smartphone.
  • The robot 100 patrols the room according to the map generated based on SLAM.
  • the robot 100 may transmit the captured image to the smartphone of the user, or may notify the presence/absence of an abnormal event.
  • the user can confirm the state of the home at any time by transmitting the indoor confirmation instruction.
  • <Remote operation> FIGS. 12 to 14 are schematic diagrams for explaining action scenes in which a user who is out operates the robot 100 remotely.
  • The two robots 100A and 100B are left to look after the house.
  • The family leaves the robots 100 and the cat behind and goes out to town (Fig. 12B).
  • The boy looks glum (Fig. 12C).
  • the boy takes out the smartphone (mobile terminal).
  • the boy begins to manipulate the smartphone (Fig. 12D).
  • The two robots 100 are playing in the empty house (Fig. 13A).
  • When the robot 100A starts moving, the robot 100B follows it (FIG. 13B).
  • the robot 100A and the robot 100B are chasing and playing.
  • The mother is concerned about how the boy (her son) looks (Fig. 13C).
  • When the boy went out, the cat did not seem well, and this worries him (Fig. 13D).
  • the boy sends a command from the smartphone to the robot 100 to "check the appearance of the cat".
  • The robot 100A and the robot 100B move toward the point or object indicated by the command and capture images as appropriate. The images captured by the robot 100A and the robot 100B are sent to the smartphone.
  • The cat is playing happily on the cat tower (Fig. 14A).
  • the family is reassured by the lively appearance of the cat (Fig. 14B).
  • the robot 100 shoots a cat playing on the cat tower.
  • the robot 100 photographs the cat up close, and the cat gazes at the robot 100 (FIGS. 14C and 14D).
  • the user can send various instructions from the smartphone to the robot 100.
  • the user can instruct the robot 100 to check the room.
  • the robot 100 detects an object corresponding to “cat” from the captured image and transmits the captured image centering on the cat to the smartphone.
  • Such an instruction may be a voice command or may be input from a graphical user interface included in the smartphone.
  • the user may operate the robot 100 like a radio control car (hereinafter, such an operation method is referred to as “remote operation”).
  • the image captured by the robot 100 is displayed on the smartphone, and the user may enlarge and display a particularly desired portion of the captured image relayed live on the smartphone.
  • The robot 100 may transmit the omnidirectional image itself to the smartphone, and the user may confirm what the robot 100 “saw” from the omnidirectional image.
  • The robot 100 can recognize not only species, such as humans and cats, but also individuals, that is, “who” and “which one”. A black cat and a white cat, or a big cat and a small cat, are treated as different cats.
  • The robot 100 also learns the cat's name from the user's calls to the cat. For example, a model may be generated by machine learning from cat names recognized by voice analysis and the cat images extracted from the images captured when each name was recognized, so that the model outputs a cat name when given a cat image as input. Using such a model, even when there are multiple cats, the robot 100 can select the designated cat as the subject to be photographed if the user gives an order that designates the cat by name. The user may also register the cat's name and picture in advance via a smartphone or the like.
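In spirit this is a small supervised-learning loop: whenever a cat name is heard while a cat is in view, the cat's image feature is stored under that name, and at query time the nearest stored example decides which cat a new image shows. The sketch below uses a nearest-prototype stand-in for the machine-learned model; the feature vectors are made up.

```python
import math
from collections import defaultdict
from typing import Dict, List, Optional

Feature = List[float]


class CatNameModel:
    """Nearest-prototype toy model: learns (cat image feature, spoken name) pairs."""

    def __init__(self) -> None:
        self.examples: Dict[str, List[Feature]] = defaultdict(list)

    def observe(self, name_heard: str, cat_feature: Feature) -> None:
        # Called whenever voice analysis hears a cat name while a cat is in view.
        self.examples[name_heard].append(cat_feature)

    def predict(self, cat_feature: Feature) -> Optional[str]:
        best_name, best_dist = None, float("inf")
        for name, feats in self.examples.items():
            for f in feats:
                d = math.dist(f, cat_feature)
                if d < best_dist:
                    best_name, best_dist = name, d
        return best_name


model = CatNameModel()
model.observe("Tama", [0.9, 0.1])    # dark cat seen when "Tama" was called
model.observe("Shiro", [0.1, 0.8])   # light cat seen when "Shiro" was called
print(model.predict([0.85, 0.15]))   # Tama -> photograph this cat when ordered
```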
  • the robot 100B may move in accordance with the movement of the robot 100A.
  • the captured image from the robot 100A and the captured image from the robot 100B are transmitted to the user's smartphone.
  • the robot 100A images a cat
  • the robot 100B near the robot 100A also images the cat by the omnidirectional camera 113.
  • the user can acquire the captured image of the cat not only from the robot 100A but also from the robot 100B simply by remotely controlling the robot 100A.
  • the user can remotely control the robot 100B indirectly by remotely controlling only the robot 100A. This is because the robot 100B has a "following function".
  • the user can set the robot 100 to the remote control mode from the smartphone.
  • the user can also terminate the remote control mode of the robot 100 from the smartphone.
  • the robot 100 in the remote control mode may change the display of the eyes 110.
  • the robot 100 may change the eyes 110 to red, or may display an icon on the eyes 110, to visually represent that it is "being controlled (remotely operated)".
  • when the remote control mode ends, the robot 100 returns the eyes 110 to the normal black-eye display and returns to the location where it was when the remote control mode started.
  • when the remote control mode ends, the robot 100 may also sit down, or may shake its head vigorously, to express "a state in which it has regained its own will after being controlled".
  • while in the remote control mode, the robot 100 does not change its emotion parameters or intimacy.
  • when switching to the remote control mode, the robot 100 authenticates the person who requested the remote operation. Only if the authentication succeeds does the robot 100 switch to the remote control mode.
  • a general authentication method using an account name and a password may be adopted, or an electronic certificate may be registered in advance in the device used for remote operation so that only access from a device holding that certificate is permitted.
  • the person operating the mobile terminal may also be authenticated by confirming, using a camera or a microphone provided in the mobile terminal such as a smartphone, that the person is an owner of the robot 100.
  • when the remote control mode is requested, users in the vicinity of the robot 100 may be asked to approve the switch to the remote control mode. By sufficiently confirming in these ways that the person requesting the remote control mode is an owner of the robot 100, unintended remote operation by a third party can be prevented.
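A minimal sketch of this authentication gate, under stated assumptions: the function and variable names are hypothetical, the electronic-certificate check is reduced to comparing a pre-registered device fingerprint, and approval by a nearby user is a simple callback.

```python
import hashlib
import hmac

# Illustrative pre-registered credentials (would be set up in advance).
ACCOUNTS = {"owner": hashlib.sha256(b"secret").hexdigest()}        # account -> password hash
REGISTERED_DEVICES = {"daughter_phone": "cert-fingerprint-0001"}   # device id -> certificate fingerprint

def password_ok(account, password):
    digest = hashlib.sha256(password.encode()).hexdigest()
    return hmac.compare_digest(ACCOUNTS.get(account, ""), digest)

def device_ok(device_id, fingerprint):
    return hmac.compare_digest(REGISTERED_DEVICES.get(device_id, ""), fingerprint)

def request_remote_mode(account, password, device_id, fingerprint, approve_nearby):
    """Switch to the remote control mode only when the requester is authenticated."""
    if not (password_ok(account, password) or device_ok(device_id, fingerprint)):
        return "rejected: authentication failed"
    if not approve_nearby():  # e.g. ask a user near the robot to approve the switch
        return "rejected: not approved by a nearby user"
    return "remote control mode started"

print(request_remote_mode("owner", "secret", "", "", approve_nearby=lambda: True))
```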
  • FIG. 15 to 18 are schematic diagrams for explaining action scenes when a plurality of robots 100 watch over an elderly person.
  • An elderly father lives alone. The only daughter lives apart from her father.
  • There are two robots 100 in the father's house (Fig. 15A).
  • in the living room of her own home, the daughter is looking at her smartphone (Fig. 15B).
  • the robots 100 record daily life with the father as a life log.
  • the life log is a diary that shows what is happening around the father in a way that respects his privacy.
  • the daughter checks the life log on the smartphone (Fig. 15C).
  • the life log contains simple information such as when the father got up and had breakfast.
  • the father hugs the robot 100 and shows it affection (Fig. 15D).
  • the robot 100 may send an image it captured of the father playing with it to the daughter's smartphone.
  • the robot 100B may serve as a cameraman to capture an image of the robot 100A and the father and transmit the captured image to the daughter's smartphone.
  • the father's cheerful voice can be heard over the phone right away (Fig. 17A). The father is at an inn; it seems he received the call just after taking a bath (Fig. 17B). The father tells his daughter that he is at a hot spring with a friend (Fig. 17C). The daughter did not know that her father had gone to the hot spring, so she is relieved to learn the circumstances (Fig. 17D).
  • the robot 100 records various events that occur in daily life with the father (the elderly person being watched over) as a life log. The life log records the father's daily routine, such as when he got up and whether he did his usual exercises today. The daughter can confirm whether the father is living as usual through the life log provided by the robot 100.
  • the robot 100 may send an incident notification to the daughter's smartphone.
  • the incident notification may be transmitted, for example, when the robot 100 has not been touched by the father for a while, or when the father is lying down at midday. Whether the father is lying down can be determined by image analysis or by analysis of the temperature sensor.
  • the robot 100 may act so as to increase its opportunities to visually recognize the father by actively moving around the room.
  • a life log that abstracts information in this way allows the daughter to check on the father's daily life while protecting the father's privacy.
  • the robot 100 does not always need to visually recognize the elderly person when the elderly person is to be watched over.
  • the elderly person lives an independent life, and the robot 100 basically only needs to take an autonomous action. It is considered preferable for the elderly person and the robot 100 to maintain an appropriate sense of distance.
  • the robot 100 may always record a life log when the user desires, not limited to watching.
  • the robot 100 may notify the daughter of the abnormality only when an abnormality occurs in the life of the elderly person.
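A minimal sketch of the abstracted life log and incident notification described above. The event names, the no-touch time limit, the midday time window, and the notify callback are illustrative assumptions, not values taken from the patent.

```python
from datetime import datetime, timedelta

class LifeLogWatcher:
    """Records abstracted daily events and raises incident notifications."""
    def __init__(self, notify, no_touch_limit=timedelta(hours=8)):
        self.notify = notify                  # e.g. sends a message to the daughter's smartphone
        self.no_touch_limit = no_touch_limit
        self.log = []                         # abstracted entries only, to protect privacy
        self.last_touch = None

    def record(self, now, event):
        self.log.append((now.isoformat(timespec="minutes"), event))
        if event == "touched_robot":
            self.last_touch = now

    def check(self, now, lying_down):
        if self.last_touch and now - self.last_touch > self.no_touch_limit:
            self.notify("No contact with the robot for a while")
        if lying_down and 11 <= now.hour <= 14:   # lying down around midday
            self.notify("The person appears to be lying down at midday")

watcher = LifeLogWatcher(notify=print)
watcher.record(datetime(2020, 3, 2, 6, 30), "woke_up")
watcher.record(datetime(2020, 3, 2, 7, 0), "had_breakfast")
watcher.record(datetime(2020, 3, 2, 7, 5), "touched_robot")
watcher.check(datetime(2020, 3, 2, 13, 30), lying_down=False)  # nothing unusual, no notification
watcher.check(datetime(2020, 3, 2, 13, 31), lying_down=True)   # triggers the midday lying-down notice
```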
  • the state management unit 244 raises the approval desire value (the desire to be acknowledged), which is one of the emotion parameters of the robot 100A.
  • the operation control unit 150 of the robot 100A makes the robot ask the user for a hug as the approval desire value increases.
  • the robot 100A may stare at the user, approach the user, or wander around the user to ask for a hug. When the user walks, the robot 100A may move following the user.
  • the increase in the approval desire value is externally expressed as a behavioral characteristic of the robot 100, as if it were jealous.
  • the robot 100A may actively express strong jealousy by clinging to the user.
  • the robot 100A may passively express jealousy by moving to a position away from the user and directing its gaze at the user from a distance.
  • how such jealousy is expressed is determined according to the individuality (initial individuality or nurtured individuality) of each robot 100. Jealousy may be expressed by sulking, and a behavioral expression may be produced in which the robot approaches or avoids the user for a certain period after the event that caused the jealousy occurs.
  • the robot 100 may express the behavior of being "tired" by temporarily refusing a hug.
  • the robot 100 may behave more jealously when its intimacy with the user is higher. For example, suppose the robot 100A has a high degree of intimacy with the user P1 and a relatively low degree of intimacy with the user P2. In this case, when the user P1 hugs the robot 100B, the approval desire value of the robot 100A may rise higher than when the user P2 hugs the robot 100B. With such a control method, it becomes possible to express behavior as if the robot has a desire to monopolize the affection of a user it particularly likes.
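The jealousy behavior just described can be sketched as a simple update rule on the approval desire value, weighted by intimacy. The class name, the numeric thresholds, and the motion names are illustrative assumptions.

```python
class EmotionState:
    """Tracks the approval-desire emotion parameter of one robot (values 0-100)."""
    def __init__(self, jealousy_threshold=70):
        self.approval_desire = 30
        self.intimacy = {}                 # user name -> intimacy (0-100)
        self.jealousy_threshold = jealousy_threshold

    def on_other_robot_hugged(self, user):
        # The rise is weighted by how much this robot likes the hugging user,
        # so affection shown by a favourite user provokes stronger jealousy.
        weight = self.intimacy.get(user, 10) / 100
        self.approval_desire = min(100, self.approval_desire + 40 * weight)

    def select_motion(self):
        if self.approval_desire >= self.jealousy_threshold:
            return "cling_to_user"         # active expression of jealousy
        if self.approval_desire >= 50:
            return "gaze_from_a_distance"  # passive expression
        return "idle"

robot_a = EmotionState()
robot_a.intimacy = {"P1": 90, "P2": 20}
robot_a.on_other_robot_hugged("P2")
print(robot_a.approval_desire, robot_a.select_motion())  # 38.0 idle
robot_a.on_other_robot_hugged("P1")
print(robot_a.approval_desire, robot_a.select_motion())  # 74.0 cling_to_user
```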
  • the robot 100 may notify other robots 100 of its own state (emotion parameters, intimacy, events, and the like) (hereinafter, such a notification is referred to as a "state notification").
  • the robots 100 may be able to recognize each other's states based on the state notifications. For example, the robot 100A notifies the robot 100B of states such as being held by the user, being stroked by the user, or having its clothes changed by the user, so that the robot 100B can grasp the state of the robot 100A.
  • while the approval desire value (the desire to be acknowledged) of the robot 100A decreases due to an event such as being held, when the approval desire value of the robot 100B is higher than a threshold value, the robot 100B shows behavioral characteristics unique to the expression of jealousy.
  • the state management unit 244 of the server 200 collectively manages the emotion parameters of each robot 100.
  • the state management unit 244 may internally notify the robot 100B of the emotion parameter values of the robot 100A, or may change the emotion parameters of the robot 100B based on the emotion parameters of the robot 100A.
  • the emotion parameters of the robot 100B may be changed on the condition that the robot 100A is in a position visible from the robot 100B. This expresses a state in which the robot 100B near the robot 100A visually senses the emotional change of the robot 100A and its own emotion parameters change in response.
  • the robot 100A may notify the robot 100B by short-range wireless communication such as infrared rays.
  • the robot 100B can receive the state notification of the robot 100A only when the robot 100B is near the robot 100A and there is no obstacle blocking the line of sight. This makes it possible to express a situation in which "the state can be sensed only when the robots are close enough to see each other".
  • the robot 100B may also detect, from the captured image, an event such as the robot 100A being hugged or stroked. The robot 100B may change its emotion parameters when it image-recognizes a pleasing action performed on the robot 100A.
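A minimal sketch of the state notification between robots, assuming a short communication range and a line-of-sight condition; the class, range value, and event names are illustrative.

```python
import math

class Robot:
    """Minimal model of a robot that shares state notifications with peers."""
    def __init__(self, name, position, comm_range=3.0):
        self.name = name
        self.position = position        # (x, y) in metres
        self.comm_range = comm_range    # short-range (e.g. infrared) reach
        self.approval_desire = 30

    def can_receive_from(self, other, line_of_sight):
        near = math.dist(self.position, other.position) <= self.comm_range
        return near and line_of_sight   # only when close enough to "see" the peer

    def on_state_notification(self, other, event, line_of_sight=True):
        if not self.can_receive_from(other, line_of_sight):
            return
        if event == "hugged":
            # Sensing that a peer is being hugged raises this robot's approval desire.
            self.approval_desire = min(100, self.approval_desire + 20)

robot_a = Robot("100A", (0.0, 0.0))
robot_b = Robot("100B", (1.5, 0.5))
robot_b.on_state_notification(robot_a, "hugged")                        # received: 30 -> 50
robot_b.on_state_notification(robot_a, "hugged", line_of_sight=False)   # blocked by an obstacle, ignored
print(robot_b.approval_desire)  # 50
```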
  • the robot 100 may be jealous not only of another robot 100 but also of a pet or a child.
  • the approval desire value of the robot 100 may be increased even when the user hugs a cat.
  • in order not to make the robot 100 jealous, the user may need to care for the pet where the robot 100 is not watching, or to give affection to the pet and the robot 100 evenly.
  • in this way, the user's attachment to the robot 100 can be deepened.
  • the robot 100 of the present embodiment can express the feelings of the robot 100 by its action without having a conversation.
  • the robot 100A may be able to receive the feelings (emotion parameters) of the robot 100B.
  • the server 200 may reflect a change in the emotion parameters of the robot 100B in the action of the robot 100A. For example, the robot 100A may move closer to the robot 100B when the approval desire value of the robot 100B suddenly drops (when it is considered that something has happened to the robot 100B).
  • the robot 100A may also increase its own approval desire value (its desire to be acknowledged by the user).
  • the robot 100A and the robot 100B may continuously look at the same object. For example, when the robot 100A looks at a relaxing user, the robot 100B may also look at the same user.
  • the robot 100B may detect that the robot 100A is gazing at a relaxing user, either through communication with the robot 100A or through image analysis (for example, by detecting that the head of the robot 100A is directed toward the user).
  • the robot 100B may stare at the user after moving near the robot 100A. Since the user feels multiple gazes, the user can sense that the robots 100 have a strong interest in him or her. When the user has not paid attention to the robots 100 for a long time, the robot 100A and the robot 100B may silently ask to be attended to by staring at the user at the same time.
  • suppose the robot 100A has a degree of intimacy with the user P1 that is equal to or higher than a predetermined value,
  • and the robot 100B also has a degree of intimacy with the user P1 that is equal to or higher than the predetermined value.
  • the more intimate the robot 100 is with a user, the more opportunities it has to gaze at that user. Therefore, in the above situation, opportunities naturally arise for the robot 100A and the robot 100B to gaze at the user P1 at the same time.
  • the robot 100A may notify the robot 100B that it is gazing at the user P1.
  • when the robot 100B receives the state notification "I (the robot 100A) am also looking at the user P1" from the robot 100A while it is itself gazing at the user P1, the robot 100B may execute a specific motion, such as a surprised motion or turning its gaze toward the robot 100A, to produce an "accidental coincidence". Further, when the two robots are gazing at the same user, the robot 100A and the robot 100B may approach each other and gaze at the user P1 side by side. When both robots 100 move to a position where the user's face appears as large as possible and stare at the user side by side, strong pressure can be applied to the user.
  • the robot 100A and the robot 100B may share an insect as a target object, and may express an unusual interest in the insect by both gazing at it.
  • by gazing at each other, the robot 100A and the robot 100B can realize a behavioral expression as if the robots 100 were showing interest in each other.
  • the robot 100A may notify the robot 100B of a state of "heightened curiosity". At this time, the robot 100B may move closer to the robot 100A, reach out a hand to touch the robot 100A, and perform a motion as if it wants to know the source of the robot 100A's curiosity.
  • the robot 100A may notify the robot 100B of the target object of interest in the omnidirectional image and of its direction. When the robot 100B receives this notification, it may look at the same object by directing its head or line of sight toward the same object as the robot 100A.
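A minimal sketch of how a robot might react to a peer's gaze notification, covering both the "accidental coincidence" and the shared-attention case. The notification format and motion names are illustrative assumptions.

```python
def gaze_response(own_target, notification):
    """Decide how a robot reacts to a peer's gaze notification.

    notification: a dict such as {"from": "100A", "target": "P1"}.
    """
    if notification["target"] == own_target:
        # Both robots happen to be looking at the same user: stage an
        # "accidental coincidence" by acting surprised, glancing at the peer,
        # then lining up beside it to gaze at the user together.
        return ["surprised_motion",
                "glance_at_" + notification["from"],
                "gaze_at_" + own_target + "_side_by_side"]
    # Otherwise, share attention: turn the head toward the peer's target.
    return ["turn_head_toward_" + notification["target"]]

print(gaze_response("P1", {"from": "100A", "target": "P1"}))
print(gaze_response("cat", {"from": "100A", "target": "insect_1"}))
```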
  • the robot 100 executes a strong appeal action to the user when a predetermined appeal condition is satisfied, for example, when the approval desire value exceeds a threshold value.
  • the appeal action is an action that actively seeks the user's involvement with the robot 100, such as a touch, calling out, or a hug. For example, suppose the user is doing exercise such as yoga indoors. When the appeal condition of the robot 100 is satisfied while the user is absorbed in the yoga, the robot 100 may perform appeal actions such as continuing to gaze at the user or wandering around the user, asking the user to interrupt the yoga.
  • the robot 100B may follow and move from behind the robot 100A while keeping a constant distance from the robot 100A.
  • the robot 100A may similarly follow and move with respect to the user or the pet.
  • the robot 100A may follow the dog or the user.
  • the robot 100B may follow the robot 100A when the robot 100A is following a dog or the like.
  • the distance between the robot and its tracking target (for example, a dog following the user) may be equal to the distance between that tracking target and the object the tracking target is itself following (for example, the user), or it may be shorter or longer than that distance by a predetermined length.
  • when the robot 100 detects that a moving object Q1 and a moving object Q2 have been moving in the same direction for a predetermined time or longer, it determines that "following" has occurred. With such a control method, it becomes possible to express behavior that makes the user feel as if the robot 100 instinctively wants to join in when following occurs. The sight of a plurality of robots 100 following one another is considered effective in conveying the cuteness of the robots 100 to the user.
  • the robot 100 may perform the follow-up action on the condition that an emotion parameter of the robot 100, for example the value of the emotion parameter indicating curiosity, is equal to or less than a threshold value. With such a control method, it becomes possible to express behavior in which the robot follows another robot when its curiosity has weakened and it is bored, and does not follow when its curiosity is heightened. While the follow-up action is being executed, it may be ended on the condition that curiosity rises to or above a threshold value due to some event.
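A minimal sketch of the follow-up behavior: detecting that two moving objects have kept the same heading for a while, and joining in only while curiosity is low. The step count, angle tolerance, and threshold are illustrative assumptions.

```python
import math

def heading(p_from, p_to):
    return math.atan2(p_to[1] - p_from[1], p_to[0] - p_from[0])

def following_detected(track_q1, track_q2, min_steps=5, max_angle=0.3):
    """Return True if Q1 and Q2 moved in roughly the same direction
    for at least `min_steps` consecutive observations."""
    steps = 0
    for i in range(1, min(len(track_q1), len(track_q2))):
        h1 = heading(track_q1[i - 1], track_q1[i])
        h2 = heading(track_q2[i - 1], track_q2[i])
        if abs(h1 - h2) <= max_angle:
            steps += 1
            if steps >= min_steps:
                return True
        else:
            steps = 0
    return False

def should_follow(curiosity, detected, boredom_threshold=30):
    # Join the chase only while curiosity is low (the robot is "bored");
    # a later rise in curiosity ends the follow-up action.
    return detected and curiosity <= boredom_threshold

user_track = [(i, 0.0) for i in range(8)]          # user walking along the x axis
dog_track = [(i - 0.5, 0.1) for i in range(8)]     # dog trailing the user
print(following_detected(user_track, dog_track))    # True
print(should_follow(curiosity=20, detected=True))   # True  -> start following
print(should_follow(curiosity=80, detected=True))   # False -> stay put
```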
  • the source of the actions of the autonomous robot is a set of predetermined parameters indicating its internal state.
  • the parameter indicating curiosity contributes greatly as a source of behavior, but the curiosity parameter may approach 0 if the external environment changes little. In such a case, instead of waiting for its own parameters to change, the robot can actively change its own parameters by piggybacking on the action of another robot.
  • the plurality of robots 100 may take the same action such as the following action, or change the action characteristics while being influenced by the actions of each other.
  • the states of the robots 100 can be understood in the server 200 or between the robots 100.
  • the robot 100B that has grasped the state of the robot 100A may act in synchronization with the state of the robot 100A, or may act independently without synchronizing.
  • an example of an action in which the robot 100B synchronizes with the robot 100A is that the robot 100B looks at the same thing the robot 100A is gazing at, or executes the same motion as the robot 100A.
  • the robot 100 may maintain at least one of its own behavior and state until the user finishes photographing. Instead of maintaining its action or state, the robot 100 may select a specific motion; for example, it may turn its body in the direction of the user, or temporarily stop the cooperative action, to cooperate with the user's photographing. In this way, the robot 100 may temporarily stop its operation when it detects the shooting action.
  • not only during coordinated actions, the robot 100 may strike a pose or temporarily stop its action whenever the user's shooting action is detected.
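A minimal sketch of pausing or posing while the user takes a photograph; the perception keys and motion names are illustrative assumptions.

```python
def on_perception(perception, current_motion):
    """Return the next motion, holding still while the user is photographing.

    perception: a dict such as {"camera_pointed_at_me": True}.
    """
    if perception.get("camera_pointed_at_me"):
        # Hold still (or strike a pose) until the user finishes photographing.
        return "hold_pose"
    return current_motion

print(on_perception({"camera_pointed_at_me": True}, "wander"))   # hold_pose
print(on_perception({"camera_pointed_at_me": False}, "wander"))  # wander
```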
  • the user can easily upload a captured image of the cute appearance of the robot 100 to an SNS (Social Networking Service) or the like.
  • various best-shot images of the robot 100 in relation to various things can thus be taken easily.
  • the outer cover 314 of the robot 100 is configured by accommodating a stretchable base material in a cloth bag.
  • the bag may be made of a flexible material that is warm and comfortable to the user.
  • the base material is preferably a flame-retardant material, and more preferably a material that releases a self-extinguishing gas when the temperature rises.
  • the base material is composed of flame-retardant sponge. Since the outer cover 314 is formed by wrapping a flame-retardant base material in a cloth bag, self-extinguishing gas is released from the base material even if the cloth bag is ignited, so that the spread of fire over the cloth bag is prevented.
  • the threshold temperature at which the base material generates the self-extinguishing gas is preferably lower than the ignition temperature of the cloth. In this case, when the cloth becomes hot, the self-extinguishing gas is generated before the cloth ignites, so ignition of the cloth can be prevented.
  • by forming the outer cover 314 as a dual structure of a flame-retardant base material and a flexible bag, both the warm touch of the robot 100 and safety against high temperatures can be achieved.
  • although the robot system 300 has been described as including one or more robots 100 and one server 200, some of the functions of the robot 100 may be realized by the server 200, and some or all of the functions of the server 200 may be assigned to the robot 100.
  • One server 200 may control a plurality of robots 100, or a plurality of servers 200 may cooperate to control one or more robots 100.
  • a third device other than the robot 100 and the server 200 may take part of the function.
  • the aggregate of the functions of the robot 100 and the functions of the server 200 described with reference to FIG. 4 can be grasped as one "robot" in a broad sense. How to distribute the plurality of functions required to implement the present invention to one or more pieces of hardware may be determined in consideration of the processing capability of each piece of hardware and the specifications required of the robot system 300.
  • the “robot in the narrow sense” means the robot 100 that does not include the server 200, but the “robot in the broad sense” means the robot system 300. Many of the functions of the server 200 may possibly be integrated into the robot 100 in the future.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Emergency Management (AREA)
  • Business, Economics & Management (AREA)
  • Automation & Control Theory (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Toys (AREA)

Abstract

In order to provide a user with a sense of security while living with a robot, this autonomous robot comprises: an action control unit that selects robot motion; a drive mechanism that executes the motion selected by the action control unit; a recognition unit that determines whether or not a target object meets prescribed watch-over conditions; a mode setting unit that sets a watch-over mode for the target object when the watch-over conditions have been met; and a communications unit that sends a captured image of the target object to a prescribed communications terminal, in watch-over mode.

Description

[Supplement under Rule 26, 02.03.2020] Robot
 The present invention relates to a robot that autonomously selects an action according to an internal state or an external environment.
 Humans keep pets in search of healing. On the other hand, many people give up on having a pet for various reasons, such as not being able to secure enough time to care for it, not having a living environment that allows pets, having allergies, or finding bereavement too painful. If there were a robot that could play the role of a pet, it might give people who cannot keep pets the kind of healing that a pet provides (see Patent Documents 1 and 2).
 Japanese Patent Laid-Open No. 2000-323219; International Publication No. 2017/169826
 In recent years, robot technology has been advancing rapidly, but it has not yet achieved a presence as a companion like a pet, because a robot does not seem to have free will. By observing behavior that can only be explained by a pet's free will, humans sense the existence of the pet's free will, empathize with the pet, and are healed by it.
 The present invention is an invention completed based on the above recognition of the problems, and a first object thereof is to provide a technology for providing various senses of security to a user in living with a robot. A second object is to provide a technique for richly expressing the robot's affection for the user. A third object is to provide a technique for allowing a plurality of robots to act in a coordinated manner.
 An autonomous robot according to an aspect of the present invention includes a motion control unit that selects a motion of the robot, a drive mechanism that executes the motion selected by the motion control unit, a recognition unit that determines whether or not a target person satisfies a predetermined watching condition, a mode setting unit that sets a watching mode for the target person when the watching condition is satisfied, and a communication unit that transmits a captured image of the target person to a predetermined communication terminal in the watching mode.
 According to the present invention, it becomes easier to increase the presence of the robot.
 The above-mentioned objects and other objects, features, and advantages will be further clarified by the preferred embodiments described below and the accompanying drawings.
A front external view of the robot. A side external view of the robot. A cross-sectional view schematically showing the structure of the robot. A hardware configuration diagram of the robot in the basic configuration. A functional block diagram of the robot system in the basic configuration. Schematic diagrams for explaining action scenes in which a plurality of robots watch over a baby. Schematic diagrams for explaining action scenes in which the robot stays home while the family is out. Schematic diagrams for explaining action scenes in which a user who is out operates the robot remotely. Schematic diagrams for explaining action scenes in which a plurality of robots watch over an elderly person.
 The robot 100 according to the present embodiment shares daily life with the user, sometimes shows consideration for the user, sometimes strives to be useful to the user, and actively seeks the user's affection, thereby demonstrating its presence as a member of the family.
 Hereinafter, the basic configuration of the robot 100 will be described with reference to FIGS. 1 to 4, and then various action scenes of the robot 100 will be described.
[Basic configuration]
FIG. 1 is a diagram showing the appearance of the robot 100. FIG. 1A is a front view and FIG. 1B is a side view.
The robot 100 is an autonomous action type robot that determines an action based on an external environment and an internal state. The external environment is recognized by various sensors such as a camera and a thermo sensor 115. The internal state is quantified as various parameters expressing the emotion of the robot 100. The robot 100 sets the indoor area of the owner's home as an action range. Hereinafter, a person involved in the robot 100 is called a “user”. Of the users, the owner or administrator of the robot 100 is called the “owner”.
 The body 104 of the robot 100 has a rounded shape as a whole, and includes an outer skin 314 formed of a soft and elastic material such as urethane, rubber, resin, or fiber. The robot 100 may be dressed in clothes. The total weight of the robot 100 is about 5 to 15 kilograms, and its height is about 0.5 to 1.2 meters. Due to various attributes such as appropriate weight, roundness, softness, and a pleasant feel, the effect that the user can easily hold the robot 100, and wants to hold it, is realized.
 The robot 100 includes a pair of front wheels 102 (left wheel 102a, right wheel 102b) and one rear wheel 103. The front wheels 102 are driving wheels and the rear wheel 103 is a driven wheel. The front wheels 102 do not have a steering mechanism, but the rotation speed and rotation direction of the left and right wheels can be controlled individually. The rear wheel 103 is a caster and rotates freely so that the robot 100 can move back and forth and left and right. The rear wheel 103 may be an omni wheel. By making the rotation speed of the right wheel 102b higher than that of the left wheel 102a, the robot 100 can turn left or rotate counterclockwise. By making the rotation speed of the left wheel 102a higher than that of the right wheel 102b, the robot 100 can turn right or rotate clockwise.
 The front wheels 102 and the rear wheel 103 can be completely housed in the body 104 by a drive mechanism (a rotation mechanism and a link mechanism). A pair of left and right covers 312 is provided on the lower half of the body 104. The covers 312 are made of a flexible and elastic resin material (rubber, silicone rubber, or the like), form a soft torso, and can house the front wheels 102. Each cover 312 is formed with a slit 313 (opening) that opens from the side surface to the front surface, and the front wheel 102 can be advanced through the slit 313 and exposed to the outside.
 Most of each wheel is hidden by the body 104 even while the robot is traveling, but when the wheels are completely housed in the body 104, the robot 100 cannot move. That is, the body 104 descends and sits on the floor surface F as the wheels are retracted. In this seated state, a flat seating surface 108 (ground-contact bottom surface) formed on the bottom of the body 104 contacts the floor surface F.
 The robot 100 has two arms 106. There is a hand at the tip of each arm 106, but it does not have a function of grasping an object. The arms 106 can perform simple operations such as raising, bending, waving, and vibrating by driving actuators described later. The two arms 106 can be controlled individually.
 A face area 116 is exposed on the front of the head of the robot 100. The face area 116 is provided with two eyes 110. Each eye 110 is a device capable of displaying an image with a liquid crystal element or an organic EL element, and expresses a line of sight or a facial expression by moving the pupil or eyelid displayed as an image. A nose 109 is provided in the center of the face area 116. The nose 109 is provided with an analog stick, which can detect a pressing direction in addition to all of the up, down, left, and right directions. The robot 100 is also provided with a plurality of touch sensors, so that a user's touch can be detected on almost the entire area of the robot 100, such as the head, torso, buttocks, and arms. The robot 100 is equipped with various sensors such as a microphone array capable of identifying the direction of a sound source and an ultrasonic sensor. It also has a built-in speaker and can emit simple sounds.
 A horn 112 is attached to the head of the robot 100. An omnidirectional camera 113 is attached to the horn 112 so that the entire upper area around the robot 100 can be imaged at once. The horn 112 also has a built-in thermo sensor 115 (thermo camera). Further, the horn 112 is provided with a plurality of modules (not shown) for communication using infrared rays, and these modules are installed in a ring facing the surroundings. Therefore, the robot 100 can perform infrared communication while recognizing direction. Furthermore, the horn 112 is provided with an emergency stop switch, and the user can bring the robot 100 to an emergency stop by pulling out the horn 112.
FIG. 2 is a sectional view schematically showing the structure of the robot 100.
The body 104 includes a main body frame 310, a pair of arms 106, a pair of covers 312, and an outer cover 314. The body frame 310 includes a head frame 316 and a body frame 318. The head frame 316 has a hollow hemispherical shape and forms the head skeleton of the robot 100. The body frame 318 has a rectangular tube shape and forms a body skeleton of the robot 100. The lower end of the body frame 318 is fixed to the lower plate 334. The head frame 316 is connected to the body frame 318 via the connection mechanism 330.
 The body frame 318 constitutes the axis of the body 104. The body frame 318 is configured by fixing a pair of left and right side plates 336 to the lower plate 334, and supports the pair of arms 106 and the internal mechanisms. The battery 118, the control circuit 342, various actuators, and the like are housed inside the body frame 318. The bottom surface of the lower plate 334 forms the seating surface 108.
 The body frame 318 has an upper plate 332 on its upper part. A bottomed cylindrical support portion 319 is fixed to the upper plate 332. The upper plate 332, the lower plate 334, the pair of side plates 336, and the support portion 319 constitute the body frame 318. The outer diameter of the support portion 319 is smaller than the distance between the left and right side plates 336. The pair of arms 106 is assembled integrally with an annular member 340 to form an arm unit 350. The annular member 340 has an annular shape, and the pair of arms 106 is attached so as to be separated radially on its center line. The annular member 340 is coaxially inserted into the support portion 319 and placed on the upper end surfaces of the pair of side plates 336. The arm unit 350 is supported by the body frame 318 from below.
 The head frame 316 has a yaw axis 321, a pitch axis 322, and a roll axis 323. Rotation (yawing) of the head frame 316 around the yaw axis 321 realizes a head-turning motion, rotation (pitching) around the pitch axis 322 realizes nodding, looking-up, and looking-down motions, and rotation (rolling) around the roll axis 323 realizes a motion of tilting the head to the left and right. The position and angle of each axis in three-dimensional space may change according to the driving mode of the connection mechanism 330. The connection mechanism 330 includes a link mechanism and is driven by a plurality of motors installed on the body frame 318.
 The body frame 318 houses the wheel drive mechanism 370. The wheel drive mechanism 370 includes a front wheel drive mechanism and a rear wheel drive mechanism that move the front wheels 102 and the rear wheel 103 into and out of the body 104, respectively. The front wheels 102 and the rear wheel 103 function as a "moving mechanism" that moves the robot 100. Each front wheel 102 has a direct drive motor at its center. Therefore, the left wheel 102a and the right wheel 102b can be driven individually. The front wheels 102 are rotatably supported by wheel covers 105, and the wheel covers 105 are rotatably supported by the body frame 318.
 The pair of covers 312 is provided so as to cover the body frame 318 from the left and right, and has a smooth curved surface shape so that the outline of the body 104 is rounded. A closed space is formed between the body frame 318 and each cover 312, and this closed space serves as a housing space S for the front wheel 102. The rear wheel 103 is housed in a housing space provided in the lower rear part of the body frame 318.
 The outer skin 314 covers the main body frame 310 and the pair of arms 106 from the outside. The outer skin 314 has a thickness that allows a person to feel its elasticity, and is formed of a stretchable material such as urethane sponge. As a result, when the user hugs the robot 100, he or she feels an appropriate softness and can enjoy natural physical contact, as a person would with a pet. The outer skin 314 is attached to the main body frame 310 in such a manner that the covers 312 are exposed. An opening 390 is provided at the upper end of the outer skin 314, and the horn 112 is inserted through this opening 390.
 A touch sensor is arranged between the main body frame 310 and the outer skin 314, and a touch sensor is embedded in each cover 312. These touch sensors are all capacitance sensors and detect touches over almost the entire area of the robot 100. The touch sensors may instead be embedded in the outer skin 314 or arranged inside the main body frame 310.
 Each arm 106 has a first joint 352 and a second joint 354, with an arm 356 between the two joints and a hand 358 at the tip of the second joint 354. The first joint 352 corresponds to a shoulder joint, and the second joint 354 corresponds to a wrist joint. A motor is provided in each joint to drive the arm 356 and the hand 358, respectively. The drive mechanism for driving the arms 106 includes these motors and their drive circuit 344.
FIG. 3 is a hardware configuration diagram of the robot 100.
The robot 100 includes an internal sensor 128, a communication device 126, a storage device 124, a processor 122, a drive mechanism 120, and a battery 118. The drive mechanism 120 includes the connection mechanism 330 and the wheel drive mechanism 370 described above. The processor 122 and the storage device 124 are included in the control circuit 342. Each unit is connected to each other by a power supply line 130 and a signal line 132. The battery 118 supplies power to each unit via the power supply line 130. Each unit sends and receives a control signal via a signal line 132. The battery 118 is a lithium-ion secondary battery and is a power source of the robot 100.
 The internal sensor 128 is a collection of various sensors built into the robot 100. Specifically, these include a camera, a microphone array, a distance measuring sensor (infrared sensor), the thermo sensor 115, touch sensors, an acceleration sensor, an atmospheric pressure sensor, and an odor sensor. The touch sensors cover most of the area of the body 104 and detect the user's touch based on changes in capacitance. The odor sensor is a known sensor that applies the principle that electrical resistance changes due to the adsorption of the molecules that are the source of an odor.
 The communication device 126 is a communication module that performs wireless communication with various external devices. The storage device 124 includes a non-volatile memory and a volatile memory, and stores computer programs and various setting information. The processor 122 is a means for executing the computer programs. The drive mechanism 120 includes a plurality of actuators. In addition, a display, speakers, and the like are also installed.
 The drive mechanism 120 mainly controls the wheels and the head. The drive mechanism 120 changes the moving direction and moving speed of the robot 100, and can also raise and lower the wheels. When the wheels are raised, they are completely housed in the body 104, and the robot 100 comes into contact with the floor surface F at the seating surface 108 and enters the seated state. The drive mechanism 120 also controls the arms 106.
FIG. 4 is a functional block diagram of the robot system 300.
The robot system 300 includes a robot 100, a server 200, and a plurality of external sensors 114. Each constituent element of the robot 100 and the server 200 includes a computing unit such as a CPU (Central Processing Unit) and various coprocessors, a storage device such as a memory and a storage, hardware including a wired or wireless communication line connecting them, and a storage unit. It is realized by software stored in the device and supplying a processing instruction to the arithmetic unit. The computer program may be configured by a device driver, an operating system, various application programs located in their upper layers, and a library that provides common functions to these programs. Each block described below is not a hardware-based configuration but a function-based block.
Some of the functions of the robot 100 may be realized by the server 200, and some or all of the functions of the server 200 may be realized by the robot 100.
 A plurality of external sensors 114 are installed in the house in advance. The server 200 manages the external sensors 114 and provides the detection values acquired by the external sensors 114 to the robot 100 as needed. The robot 100 determines its basic actions based on information obtained from the internal sensor 128 and the plurality of external sensors 114. The external sensors 114 reinforce the sensory organs of the robot 100, and the server 200 reinforces the processing capacity of the robot 100. The communication device 126 of the robot 100 may communicate with the server 200 periodically, and the server 200 may be responsible for identifying the position of the robot 100 using the external sensors 114 (see also Patent Document 2).
(Server 200)
The server 200 includes a communication unit 204, a data processing unit 202 and a data storage unit 206.
The communication unit 204 is in charge of communication processing with the external sensor 114 and the robot 100. The data storage unit 206 stores various data. The data processing unit 202 executes various processes based on the data acquired by the communication unit 204 and the data stored in the data storage unit 206. The data processing unit 202 also functions as an interface for the communication unit 204 and the data storage unit 206.
The data storage unit 206 includes a motion storage unit 232 and a personal data storage unit 218.
The robot 100 has a plurality of motion patterns (motions). Various motions such as swaying the arm 106, approaching the owner while meandering, and staring at the owner with his/her neck bent are defined.
 The motion storage unit 232 stores "motion files" that define the control content of motions. Each motion is identified by a motion ID. The motion files are also downloaded to the motion storage unit 160 of the robot 100. Which motion is executed may be determined by the server 200 or by the robot 100.
 Many of the motions of the robot 100 are configured as compound motions including a plurality of unit motions. For example, when the robot 100 approaches the owner, this may be expressed as a combination of a unit motion of turning toward the owner, a unit motion of approaching while raising a hand, a unit motion of approaching while shaking the body, and a unit motion of sitting down while raising both hands. The combination of these four motions realizes a motion of "approaching the owner, raising its hand partway, and finally sitting down after shaking its body". In a motion file, the rotation angles, angular velocities, and the like of the actuators provided in the robot 100 are defined in association with a time axis. Various motions are expressed by controlling each actuator over time in accordance with the motion file (actuator control information).
The transition time when changing from the previous unit motion to the next unit motion is called "interval". The interval may be defined according to the time required to change the unit motion and the content of the motion. The length of the interval is adjustable.
 Hereinafter, the settings relating to the behavior control of the robot 100, such as when and which motion to select and how to adjust the output of each actuator in realizing a motion, are collectively referred to as "behavior characteristics". The behavior characteristics of the robot 100 are defined by a motion selection algorithm, motion selection probabilities, motion files, and the like.
In addition to motion files, the motion storage unit 232 stores a motion selection table that defines the motions to be executed when various events occur. In the motion selection table, one or more motions and their selection probabilities are associated with each event.
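As a minimal sketch of how such an event-to-motion selection table could be looked up at run time (the event names, motion IDs, and probabilities below are invented for illustration):

```python
import random

# event -> list of (motion_id, selection probability); per event the probabilities sum to at most 1.0
MOTION_SELECTION_TABLE: dict[str, list[tuple[str, float]]] = {
    "pleasant_act_received": [("motion_A", 0.20), ("motion_C", 0.10)],
    "temperature_over_30c":  [("motion_B", 0.05)],
}


def select_motion(event: str) -> str | None:
    """Pick a motion for the event according to its selection probabilities.

    Returns None (no motion) with the remaining probability mass, so rare
    reactions stay rare.
    """
    r, cumulative = random.random(), 0.0
    for motion_id, probability in MOTION_SELECTION_TABLE.get(event, []):
        cumulative += probability
        if r < cumulative:
            return motion_id
    return None
```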
The personal data storage unit 218 stores user information. Specifically, it stores master information indicating the intimacy with each user and the user's physical and behavioral characteristics. Other attribute information such as age and gender may also be stored.
The robot 100 has an internal parameter called intimacy for each user. When the robot 100 recognizes an action showing goodwill toward itself, such as being picked up or spoken to, its intimacy with that user increases. Intimacy is low toward users who have no involvement with the robot 100, users who treat it roughly, and users it meets infrequently.
The data processing unit 202 includes a position management unit 208, a recognition unit 212, an operation control unit 222, an intimacy management unit 220, and a state management unit 244.
The position management unit 208 specifies the position coordinates of the robot 100. The state management unit 244 manages various internal parameters, such as physical states including the charging rate, the internal temperature, and the processing load of the processor 122. The state management unit 244 also manages various emotion parameters indicating the emotions of the robot 100 (loneliness, curiosity, desire for approval, and so on). These emotion parameters are always fluctuating. The movement target point of the robot 100 changes according to the emotion parameters. For example, when loneliness is high, the robot 100 sets the place where a user is as its movement target point.
The emotion parameters change with the passage of time. They also change due to the response acts described later. For example, the emotion parameter indicating loneliness decreases when the owner gives the robot a "hug", and gradually increases when the robot has not visually recognized the owner for a long time.
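One way to picture an emotion parameter such as loneliness that rises with elapsed time and drops when a response act like a hug is recognized (the rates and the 0-100 clamping below are invented for this sketch):

```python
def update_loneliness(loneliness: float, elapsed_sec: float, hugged: bool,
                      rise_per_min: float = 0.5, drop_on_hug: float = 30.0) -> float:
    """Loneliness creeps up while the owner is not seen and drops sharply on a hug."""
    loneliness += rise_per_min * (elapsed_sec / 60.0)
    if hugged:
        loneliness -= drop_on_hug
    return max(0.0, min(100.0, loneliness))   # keep the parameter within 0..100
```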
The recognition unit 212 recognizes the external environment. The recognition of the external environment includes diverse forms of recognition, such as recognizing the weather and season based on temperature and humidity, and recognizing sheltered spots (safe zones) based on light intensity and temperature. The recognition unit 156 of the robot 100 acquires various environmental information with the internal sensor 128, performs primary processing on it, and then transfers it to the recognition unit 212 of the server 200.
Specifically, the recognition unit 156 of the robot 100 extracts from a captured image the image region corresponding to a moving object, in particular a person or an animal, and extracts from that region a "feature vector", a set of feature quantities indicating the physical and behavioral characteristics of the moving object. Each feature vector component (feature quantity) is a numerical value quantifying some physical or behavioral characteristic. For example, the width of a person's eyes is quantified in the range 0 to 1 and forms one feature vector component. The method of extracting a feature vector from a captured image of a person is an application of known face recognition techniques. The robot 100 transmits the feature vector to the server 200.
The recognition unit 212 of the server 200 determines which person the imaged user corresponds to by comparing the feature vector extracted from the image captured by the robot 100's built-in camera with the feature vectors of users (clusters) registered in advance in the personal data storage unit 218 (user identification processing). The recognition unit 212 also estimates the user's emotion by image recognition of the user's facial expression. The recognition unit 212 performs user identification processing on moving objects other than people as well, for example pet cats and dogs.
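The comparison against the registered feature vectors could, for example, be a nearest-neighbour search with a similarity threshold, roughly as below; the cosine-similarity criterion and the 0.8 threshold are assumptions for illustration, since the specification only states that known face recognition techniques are applied:

```python
import numpy as np


def identify_user(feature: np.ndarray,
                  registered: dict[str, np.ndarray],
                  threshold: float = 0.8) -> str | None:
    """Return the registered user whose feature vector is most similar to the
    extracted one, or None when nobody is similar enough (unknown person)."""
    best_id, best_sim = None, -1.0
    for user_id, ref in registered.items():
        sim = float(np.dot(feature, ref) /
                    (np.linalg.norm(feature) * np.linalg.norm(ref) + 1e-9))
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return best_id if best_sim >= threshold else None
```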
The recognition unit 212 recognizes various kinds of response actions performed on the robot 100 and classifies them into pleasant and unpleasant actions. The recognition unit 212 also recognizes the owner's response to the action of the robot 100, and classifies the action into an affirmative/negative response.
Pleasant/unpleasant acts are determined by whether the user's response act is pleasant or unpleasant for a living creature. For example, being hugged is a pleasant act for the robot 100, and being kicked is an unpleasant act for the robot 100. Affirmative/negative reactions are determined by whether the user's response act indicates a pleasant or an unpleasant feeling on the user's part. Being hugged is an affirmative reaction indicating the user's pleasant feeling, and being kicked is a negative reaction indicating the user's unpleasant feeling.
The operation control unit 222 of the server 200 determines the motion of the robot 100 in cooperation with the operation control unit 150 of the robot 100. The operation control unit 222 of the server 200 creates a movement target point for the robot 100 and a movement route to it. The operation control unit 222 may create a plurality of movement routes and then select one of them.
The operation control unit 222 selects a motion of the robot 100 from the plurality of motions in the motion storage unit 232. A selection probability is associated with each motion for each situation. For example, a selection method is defined such that motion A is executed with a probability of 20% when the owner performs a pleasant act, and motion B is executed with a probability of 5% when the temperature reaches 30 degrees or more.
The intimacy management unit 220 manages the intimacy for each user. As described above, intimacy is registered in the personal data storage unit 218 as part of the personal data. When a pleasant act is detected, the intimacy management unit 220 raises the intimacy with that owner. When an unpleasant act is detected, the intimacy is lowered. In addition, the intimacy of an owner who has not been visually recognized for a long period gradually decreases.
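A toy version of this intimacy bookkeeping might look as follows (the step sizes and decay rate are illustrative assumptions, not values from the specification):

```python
def update_intimacy(intimacy: float, event: str | None, days_unseen: float = 0.0) -> float:
    """Raise intimacy on pleasant acts, lower it on unpleasant acts, and let it
    decay gradually for owners who have not been seen for a long time."""
    if event == "pleasant":
        intimacy += 5.0
    elif event == "unpleasant":
        intimacy -= 10.0
    intimacy -= 0.5 * days_unseen       # slow decay while the owner is not seen
    return max(0.0, min(100.0, intimacy))
```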
(Robot 100)
The robot 100 includes a communication unit 142, a data processing unit 136, a data storage unit 148, an internal sensor 128, and a drive mechanism 120.
The communication unit 142 corresponds to the communication device 126 (see FIG. 3) and is in charge of communication processing with the external sensor 114, the server 200, and the other robot 100. The data storage unit 148 stores various data. The data storage unit 148 corresponds to the storage device 124 (see FIG. 3). The data processing unit 136 executes various processes based on the data acquired by the communication unit 142 and the data stored in the data storage unit 148. The data processing unit 136 corresponds to the processor 122 and a computer program executed by the processor 122. The data processing unit 136 also functions as an interface for the communication unit 142, the internal sensor 128, the drive mechanism 120, and the data storage unit 148.
The data storage unit 148 includes a motion storage unit 160 that defines various motions of the robot 100.
Various motion files are downloaded from the motion storage unit 232 of the server 200 to the motion storage unit 160 of the robot 100. A motion is identified by its motion ID. In order to express various motions, such as sitting down by retracting the front wheels 102, lifting the arm 106, making the robot 100 spin by rotating the two front wheels 102 in reverse or by rotating only one of them, trembling by rotating the front wheels 102 while they are retracted, or stopping once and looking back when moving away from the user, the operation timing, operation time, operation direction, and so on of the various actuators (drive mechanism 120) are defined in time series in the motion file.
Various data may be downloaded from the personal data storage unit 218 to the data storage unit 148.
The data processing unit 136 includes a recognition unit 156 and an operation control unit 150.
The operation control unit 150 of the robot 100 determines the motion of the robot 100 in cooperation with the operation control unit 222 of the server 200. Some motions may be determined by the server 200, and other motions may be determined by the robot 100. Alternatively, the robot 100 may basically determine its own motions, with the server 200 determining motions when the processing load on the robot 100 is high. A base motion may be determined by the server 200 and an additional motion determined by the robot 100. How the motion determination processing is shared between the server 200 and the robot 100 may be designed according to the specifications of the robot system 300.
The operation control unit 150 of the robot 100 instructs the drive mechanism 120 to execute the selected motion. The drive mechanism 120 controls each actuator according to the motion file.
When a user with high intimacy is nearby, the operation control unit 150 can execute a motion of raising both arms 106 as a gesture asking for a "hug", and when it has grown tired of the "hug", it can express reluctance to be hugged by alternately repeating reverse rotation and stopping of the left and right front wheels 102 while they remain retracted. The drive mechanism 120 drives the front wheels 102, the arm 106, and the neck (head frame 316) in accordance with instructions from the operation control unit 150, causing the robot 100 to express various motions.
The recognition unit 156 of the robot 100 interprets the external information obtained from the internal sensor 128. The recognition unit 156 is capable of visual recognition (a visual part), smell recognition (an olfactory part), sound recognition (an auditory part), and tactile recognition (a tactile part).
The recognition unit 156 extracts a feature vector from a captured image of a moving object. As described above, a feature vector is a set of parameters (feature quantities) indicating the physical and behavioral characteristics of a moving object. When a moving object is detected, physical and behavioral characteristics are also extracted from the odor sensor, the built-in microphone, the temperature sensor, and the like. These characteristics are also quantified and become feature vector components. The recognition unit 156 identifies the user from the feature vector based on a known technique described in Patent Document 2 or the like.
Of the series of recognition processes including detection, analysis, and determination, the recognition unit 156 of the robot 100 performs the selection and extraction of information necessary for recognition, while interpretation processing such as determination is executed by the recognition unit 212 of the server 200. The recognition processing may be performed only by the recognition unit 212 of the server 200, or only by the recognition unit 156 of the robot 100, or both may execute the recognition processing while sharing roles as described above.
When a strong impact is applied to the robot 100, the recognition unit 156 recognizes this by the touch sensor and the acceleration sensor, and the recognition unit 212 of the server 200 recognizes that “a violent act” is performed by a user in the vicinity. When the user holds the horn 112 and lifts the robot 100, it may be recognized as a violent act. When the user facing the robot 100 utters a voice in a specific sound volume region and a specific frequency band, the recognition unit 212 of the server 200 may recognize that a “calling action” has been performed on itself. Further, when a temperature around the body temperature is detected, it is recognized that the "contact action" has been performed by the user, and when an upward acceleration is detected in the state of contact recognition, it is recognized that the "hugging" has been performed. Physical contact may be sensed when the user lifts the body 104, or the hug may be recognized when the load applied to the front wheel 102 is reduced.
In summary, the robot 100 acquires the action of the user as physical information by the internal sensor 128, and the recognition unit 212 of the server 200 determines the comfort/discomfort. The recognition unit 212 of the server 200 also executes a user identification process based on the feature vector.
The recognition unit 212 of the server 200 recognizes the user's various responses to the robot 100. Some typical response acts among the various response acts are associated with pleasantness or unpleasantness, and with affirmation or negation. In general, most response acts that are pleasant acts are affirmative reactions, and most response acts that are unpleasant acts are negative reactions. Pleasant and unpleasant acts relate to intimacy, while affirmative and negative reactions influence the behavior selection of the robot 100.
The intimacy management unit 220 of the server 200 changes the intimacy with a user according to the response act recognized by the recognition unit 156. In principle, intimacy with a user who performs a pleasant act increases, and intimacy with a user who performs an unpleasant act decreases.
Each function of the server 200 is realized by loading the program implementing that function into memory and instantiating it. The processing capacity of the server 200 supplements the various processes performed by the robot 100, and the server 200 can be used as a resource of the robot 100. How the resources of the server 200 are used is determined dynamically according to requests from the robot 100. For example, when the robot 100 needs to continuously generate complex motions according to the detection values from a large number of touch sensors, the processing of the processor 122 in the robot 100 may be preferentially assigned to motion selection and generation, while the processing for image recognition of the surrounding situation is performed by the recognition unit 212 of the server 200. In this way, the various processes of the robot system 300 can be distributed between the robot 100 and the server 200.
A single server 200 can also control a plurality of robots 100. In this case, each function of the server 200 is instantiated independently for each robot 100. For example, the server 200 may prepare a recognition unit 212 for the robot 100B separately from the recognition unit 212 (instance object) for the robot 100A.
On the premise of the above basic configuration, the implementation of the robot 100 in the present embodiment will be described next, focusing in particular on the features and purposes of this implementation and on the differences from the basic configuration.
[SLAM]
The robot 100 of the present embodiment acquires a large number of captured images (still images) by periodically capturing an image of the surroundings with the omnidirectional camera 113. The robot 100 forms a memory (hereinafter referred to as “image memory”) based on the captured image.
The image memory is a collection of multiple keyframes. A keyframe is distribution information of the feature points (feature quantities) in a captured image. The robot 100 of the present embodiment forms keyframes using a graph-based SLAM (Simultaneous Localization and Mapping) technique that uses image features, more specifically a SLAM technique based on ORB (Oriented FAST and Rotated BRIEF) features (see Patent Document 3).
By periodically forming keyframes while moving, the robot 100 builds a collection of keyframes, in other words an image memory in the form of an image feature distribution. The robot 100 estimates its current location by comparing the keyframe acquired at the current location with the many keyframes it already holds. That is, the robot 100 performs "spatial recognition" by comparing the captured image it is actually seeing with captured images it has seen before (its memory) and reconciling its present situation with its past memory. The image memory formed as a collection of feature points becomes a so-called map. The robot 100 updates the map while moving and estimating its current location.
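As a hedged illustration only: ORB feature extraction and matching of the kind referred to here are available in OpenCV, and a crude keyframe comparison for place recognition might look like the sketch below. The match-count scoring and the idea of simply picking the best-matching stored keyframe are simplifications; they are not the graph-based SLAM method of Patent Document 3.

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)


def make_keyframe(gray_image):
    """Extract ORB keypoints and descriptors as a 'keyframe' (feature-point distribution)."""
    keypoints, descriptors = orb.detectAndCompute(gray_image, None)
    return keypoints, descriptors


def best_matching_keyframe(current_descriptors, stored_keyframes):
    """Compare the current keyframe with stored keyframes and return the index of
    the most similar one, as a stand-in for recognizing the current location."""
    best_index, best_score = -1, 0
    for index, (_, descriptors) in enumerate(stored_keyframes):
        if descriptors is None or current_descriptors is None:
            continue
        matches = matcher.match(current_descriptors, descriptors)
        good = [m for m in matches if m.distance < 40]   # Hamming-distance cutoff
        if len(good) > best_score:
            best_index, best_score = index, len(good)
    return best_index
```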
The robot 100 of the basic configuration is premised on recognizing its position using the external sensors 114 rather than keyframes. The robot 100 of the present embodiment will be described as recognizing places based only on keyframes.
The robot 100 in the present embodiment includes a "mode setting unit" for setting various modes.
<Baby watching>
FIGS. 5 to 7 are schematic diagrams for explaining action scenes when a plurality of robots 100 watch over a baby.
First, in the children's room, a baby who is the target of watching (hereinafter, the "subject") is sleeping in a bouncer (FIG. 5A).
There are two robots 100 in this house. Each of the two robots 100 recognizes the presence and position of the baby (infant) through image recognition. The two robots 100 may share the baby's position through mutual communication. From the baby's position, the two robots 100 each determine their own gazing point. Through mutual communication, the two robots 100 correct their gazing points so that they are the same or within a predetermined range of each other. By having the two robots 100 control their movement and their parts so that their heads are directed at the respective determined gazing points, an operation in which the two robots 100 appear to peer at the baby is realized (FIG. 5B).
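A minimal sketch of the gazing-point sharing and correction between the two robots (the midpoint rule and the 0.1 m tolerance are assumptions; the specification only requires that the gazing points end up identical or within a predetermined range of each other):

```python
import math


def reconcile_gaze_points(point_a: tuple[float, float, float],
                          point_b: tuple[float, float, float],
                          tolerance_m: float = 0.1):
    """If the two robots' gazing points differ by more than the tolerance,
    replace both with their midpoint so that they look at (almost) the same spot."""
    if math.dist(point_a, point_b) <= tolerance_m:
        return point_a, point_b
    midpoint = tuple((a + b) / 2.0 for a, b in zip(point_a, point_b))
    return midpoint, midpoint
```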
One of the two robots 100, robot 100A, keeps watching over the baby by directing its head toward the baby according to the baby's position. The robot 100A keeps its distance from the baby constant so as not to leave the baby's side. The other robot 100B, having divided roles through communication with the robot 100A, after a while performs an operation as if it has stopped watching and started playing (FIG. 5C).
The mother is cooking in the kitchen, leaving the robots 100 to watch over the baby (Fig. 5D).
The mother has placed a smartphone (communication terminal) in the kitchen (FIG. 6A). The robot 100A (which is watching) captures images of the baby with the omnidirectional camera 113 and keeps sending the captured images to the smartphone as live video. The smartphone displays the region of the omnidirectional image in which the baby appears. At this time, an image corrected so that the distortion of the omnidirectional image is flattened onto a plane may be displayed on the smartphone.
Suddenly, the baby begins to cry (Fig. 6B).
The robot 100B, which was playing, hears the baby's crying through its microphone (collects the sound) and approaches the baby based on the baby's location (FIG. 6C). The robot 100B executes an interference motion. The robot 100B executes the interference motion when at least one of the following conditions is satisfied: a sound such as crying has been collected, or image analysis has determined that the baby is in a specific state such as crying. The interference motion is a soothing motion, such as outputting a predetermined sound that catches the baby's attention, outputting a sound like a lullaby, waving the arm 106, swaying the head or body, or rocking the bouncer. The interference motion distracts the baby. In addition, by executing the interference motion, the robot 100B can give a third party the impression that it is struggling to do something about the crying baby. Instead of or in addition to the robot 100B, the robot 100A may execute the interference motion.
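The trigger described here, a cry-like sound heard by the microphone or image analysis judging the baby to be crying, amounts to a simple OR condition; in the sketch below the motion names and the random choice among soothing motions are placeholders:

```python
import random

SOOTHING_MOTIONS = ["play_attention_sound", "play_lullaby", "wave_arm",
                    "sway_head_and_body", "rock_bouncer"]


def choose_interference_motion(cry_heard: bool, crying_seen: bool) -> str | None:
    """Return a soothing (interference) motion if at least one trigger condition holds."""
    if cry_heard or crying_seen:
        return random.choice(SOOTHING_MOTIONS)
    return None
```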
Live video relayed from the robot 100A shows the crying baby on the mother's smartphone (FIG. 6D).
The video from before and after the baby starts crying (from a predetermined time before to a predetermined time after the moment the baby starts crying) may be persisted by storing it on an HDD or the like.
Surprised, the mother instinctively picks up the smartphone (FIG. 7A).
The mother hurries to the children's room and rushes to the baby (FIG. 7B).
The mother holds the baby (Fig. 7C).
The robots 100 stay beside the mother and the baby and gaze at them. Through image analysis, the two robots recognize the respective positions of the mother and the baby and set gazing points from those positions. The two robots 100 share the gazing points via communication and, as necessary, correct them so that they are the same or within a predetermined range of each other. The two robots 100 each perform an operation of directing their heads at their gazing points. As a result, by having the two robots 100 line up and gaze at the mother and the baby, the mother can be given the impression that the robots 100 are worried about the baby.
The baby hugged by the mother falls asleep again (Fig. 7D).
When the robots 100 confirm that the baby fell asleep, they leave the bouncer (FIG. 7E).
The robot 100 may search for the baby in order to start the above-described watching behavior when the mother manually sets it to the "watching mode" via an input switch, a smartphone, or the like. When the robot 100 detects that the baby is in the bouncer, the mode setting unit may automatically set the watching mode. The robot 100 may also automatically shift to the watching mode when it detects an infant by image recognition and there is no guardian near that infant. The case where there is no guardian near the infant may be, for example, the case where no image of a person of a predetermined age or older is detected, or the case where no image of a person associated with the infant is detected. When a plurality of robots 100 are in the same room, all of the robots 100 may be set to the watching mode, or only some of them may be set to the watching mode.
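Put together, the automatic transition into the watching mode described in this passage could be summarized roughly as follows (the predicate names are hypothetical; what counts as an adult or an associated person is whatever the image recognition defines):

```python
def should_enter_watching_mode(manual_request: bool,
                               infant_detected: bool,
                               adult_detected_nearby: bool,
                               associated_person_nearby: bool) -> bool:
    """Enter watching mode when explicitly requested, or when an infant is seen
    with no guardian nearby (no adult and no person associated with the infant)."""
    if manual_request:
        return True
    return infant_detected and not (adult_detected_nearby or associated_person_nearby)
```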
The robot 100 may shift to the watching mode with the watching condition being that a "baby" is detected in the captured image and no other user, or no specific user such as the mother or father, is detected.
While watching, the robot 100 may limit its action range to within the range in which the baby being watched can be visually recognized. Alternatively, the robot 100 may not leave the room where the baby is while it is watching. At the very least, it is desirable that the robot 100 not take its eyes off the baby while watching (for example, always keeping the baby within the camera's angle of view). The robot 100 attends to the subject's state using sensors that measure the external environment, such as the omnidirectional camera 113, the microphone, and the temperature sensor. When the baby moves around, the robot 100 may recognize the baby's position using those sensors and control its own orientation so that it faces the baby. Further, when the baby comes within a predetermined distance of the robot 100, the robot 100 may retract its wheels or otherwise reduce the parts that could come into contact with the baby.
When a predetermined alert condition is satisfied, the robot 100 executes a motion for actively engaging with the baby. Alternatively, it may transmit "alert information" to another communication terminal such as the mother's smartphone. The alert condition can be set arbitrarily; conceivable examples are situations in which the baby is in danger, such as when the baby (the subject being watched) cries, approaches the stairs, tries to go outside, fiddles with a small object, or falls over. The motion for actively engaging may be the interference motion described above.
When the robot 100A shifts to the watching mode, it may notify the robot 100B that it has shifted to the watching mode. On receiving the notification, the robot 100B may also shift to the watching mode, or may approach the positions of the robot 100A and the baby. By having a plurality of robots 100 gather near the baby, a behavioral expression becomes possible as if the robots 100 care about the baby.
At this time, so that the operating sounds of the robot 100 do not wake the baby, the robot 100 suppresses the amount of operation of the drive mechanism 120 (in particular, the mechanisms involved in movement and posture adjustment) below that of the normal mode and operates quietly, making less operating noise than usual. If the robot shifts to the watching mode while the baby is awake, the robot 100B may execute, at the baby's side, specific motions that are likely to interest the baby, such as running around the bouncer or dancing. The baby may be judged to be awake, for example, when the condition that the baby's voice is detected by voice analysis, or the condition that an image of the baby with its eyes open is detected by image analysis, is satisfied. Such motions are not executed while the baby is sleeping. The robot 100B executes an appropriate motion according to the state of the person being watched. The baby, too, is expected to feel reassured if many robots 100 gather around it.
The robot 100 may shift to the watching mode when it receives, via the microphone or the like, a specific voice command such as "Watch the baby for me" from a specific user such as the mother. At the start of watching, the robot 100 may perform a confirmation action based on the baby's position detected by image analysis or the like, such as circling around the baby, turning toward the baby, or pointing the hand 358 at the baby. From the confirmation action, the mother (the instructor) can judge whether the robot 100 has mistaken the person to be watched. The mother is expected to utter an affirmative reply such as "I'm counting on you" if it is correct, and a negative reply such as "No, not that one" if it is wrong. Therefore, the robot 100 may determine whether the watching target is correct based on the mother's reply collected by the microphone after the confirmation action.
While watching, the robot 100 keeps its gaze on the baby, for example by peering at the baby. The baby can feel secure, trusting that it is being watched over by the robot 100. Seeing the robot 100 earnestly continuing to watch over the baby stirs the mother's feelings of affection and trust toward the robot 100. The watching behavior also serves as an appeal to the user (a witness) that "the robot 100 is working hard".
As described above, the robot 100B may play freely while the robot 100A is watching. However, the action range of the robot 100B is limited to the range in which the baby or the robot 100A can be visually recognized. The robot 100A may send a close-up live image of the baby to the smartphone, and the robot 100B may send a live image (wide-angle image) of the robot 100A watching over the baby to the smartphone. It is further desirable for the robot 100B to limit its action range so as not to get in the way of the robot 100A's imaging of the baby. For example, the robot 100B may identify the positions of the baby and the robot 100A and act so as not to come onto the straight line connecting the baby and the robot 100A. Further, when the robot 100A, through image recognition of the image it is capturing, can no longer recognize the baby because of the robot 100B, the robot 100A may notify the robot 100B of a command requesting it to move. The robot 100B moves in response to that command.
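The rule that the robot 100B should stay off the straight line between the baby and the robot 100A can be checked with an ordinary point-to-segment distance test; the 0.5 m clearance below is an invented value:

```python
import math


def blocks_camera_line(baby: tuple[float, float],
                       robot_a: tuple[float, float],
                       robot_b: tuple[float, float],
                       clearance_m: float = 0.5) -> bool:
    """True when robot_b is closer than clearance_m to the segment baby--robot_a,
    i.e. it is likely to appear in robot_a's shot of the baby (2D floor positions)."""
    (bx, by), (ax, ay), (px, py) = baby, robot_a, robot_b
    dx, dy = ax - bx, ay - by
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:
        distance = math.hypot(px - bx, py - by)
    else:
        t = max(0.0, min(1.0, ((px - bx) * dx + (py - by) * dy) / seg_len_sq))
        distance = math.hypot(px - (bx + t * dx), py - (by + t * dy))
    return distance < clearance_m
```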
When it is detected by image analysis or voice analysis that the baby has entered a predetermined state, for example when the baby starts crying, the robot 100A notifies, in addition to the live video, that the baby's state has changed. The robot 100B may execute an interference motion. The robot 100A notifies the smartphone of a message that plainly indicates the state associated with the alert condition, such as "the baby is crying" or "the baby has started fussing". As the interference motion, the robot 100B may dance, or may play music such as a lullaby with its built-in audio player. In any case, the robot 100B soothes the baby by executing an action that distracts it. When the mother returns (see FIG. 7B), the robot 100B stops the interference motion. Specifically, the robot 100B stops the interference motion when it can confirm the "mother" in a captured image.
The robot 100B may also stop the interference motion when it recognizes, through the microphone or the like, that the mother has uttered a keyword instructing completion of the watching behavior (hereinafter, a "completion command"), such as "thank you" or "it's all right now". At this time, it may be determined whether the completion command was uttered by the very person who instructed the robot 100A to perform the watching behavior, and the watching behavior may be completed only if the instruction came from that person.
The robot 100 recognizes the baby in the captured image. The method of detecting a "baby" may be realized by applying known face recognition techniques. Further, when the baby is moving, the robot 100 may adjust its direction of movement so that it can always visually recognize the baby, regardless of whether it is watching or not.
The robot 100A and the robot 100B may take turns watching. For example, when the robot 100A has watched for 10 minutes, the robot 100B may shift to the watching mode while the robot 100A acts freely. Having the plurality of robots 100 take turns watching is considered less likely to bore the baby. It also makes it easier to give a third party the sense that the robot 100A and the robot 100B are watching over the baby cooperatively.
If the robot 100 watches over the baby, even a mother busy with childcare can more easily devote herself to housework with peace of mind. When doing housework such as laundry away from the baby, one may not immediately notice even when the baby is crying. Before anyone notices the crying, the baby may begin to cry in earnest. Such situations place a heavy burden on the mother. With the robot 100 watching over the baby, early signs of a big cry, such as the baby's fussing, can be detected quickly.
Not only the mother but also the robot 100 participates in childcare. It is also expected that a baby raised while being watched over by the robot 100 will feel a sense of familiarity with the robot 100 in the future.
<House-sitting>
FIGS. 8 to 11 are schematic diagrams for explaining action scenes when the robots 100 stay home alone (house-sitting).
Assume a home in which two robots 100A and 100B live with one female owner. The female owner leaves for work (FIG. 8A). As she does, she calls out to the nearby robot 100A, "Look after the house, okay?"
By hearing these words (collecting the voice through the microphone), the robot 100A recognizes that the female owner is going out and shifts to the "house-sitting mode" (FIG. 8B).
The female owner leaves through the front door (FIG. 8C). The robot 100A moves to the entrance and sees the female owner off by executing a predetermined motion.
Meanwhile, the robot 100B that was in the room chases after the robot 100A and starts playing (FIG. 8D). When the female owner goes out, both the robot 100A and the robot 100B may see her off, or only the robot 100 whose intimacy with the female owner is at or above a predetermined value may see her off.
After a while, while the robots are house-sitting, the front door opens (FIG. 9A).
When the robots 100 recognize by voice analysis, image analysis, or the like that the front door has opened, the two robots 100 move to the entrance in "expectation" of the female owner's return (FIG. 9B).
However, suppose that the person who appears (the person identified by image analysis or the like) is not the female owner but a stranger (a suspicious person) carrying a large bag (FIG. 9C).
At this time, the robots 100 approach close enough to the suspicious person to photograph them (FIG. 9D). The robots 100 shift to the alert mode. After shifting to the alert mode, the robots 100 may move near the suspicious person and continue photographing them.
The female owner is working at her office. The robot 100 transmits the captured image of the suspicious person to the female owner's smartphone (FIG. 10A).
Through her smartphone, the working female owner learns that the robots 100 have found a suspicious person (FIG. 10B). In this case, suppose that the person thought to be suspicious was actually the female owner's mother. The robots 100 do not know the female owner's mother. From her smartphone, the woman notifies the robots 100 that the person is "not a suspicious person". The female owner may also tell the robots 100 by voice that the person is not suspicious, for example, "It's my mom, so don't worry." The robots 100 release the alert mode.
The mother prepares a home-cooked meal (FIG. 10C).
The female owner (daughter) returns home and chats with her mother while eating home cooking (Fig. 11A).
The robots 100 are playing beside the two of them (FIGS. 11B and 11C).
When the robot 100, through image analysis or voice recognition, finds a suspicious person while house-sitting (a person it has never seen before, or a person whose intimacy is at or below a threshold), it photographs the suspicious person and transmits the captured image to the smartphone of the female owner (a specific user). With such a control method, the female owner can entrust the security of her home while she is away to the robot 100 with peace of mind. Besides this, through image analysis and voice recognition, the robot 100 detects various events that occur during her absence, such as an earthquake breaking something, a gas leak, or a visitor such as a courier (a call from the intercom), and notifies the owner's smartphone of the event. The robot 100 may also record the events during her absence as a life log, and after returning home the owner may check what happened while she was out by viewing the life log via the smartphone.
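The suspicious-person criterion mentioned here (a person never seen before, or a person whose intimacy is at or below a threshold) reduces to a small check; the data shapes and the threshold value are assumptions:

```python
def is_suspicious(person_id: str | None,
                  intimacy_by_user: dict[str, float],
                  threshold: float = 10.0) -> bool:
    """Treat a person as suspicious if identification failed (never seen before)
    or if their intimacy is at or below the threshold."""
    if person_id is None or person_id not in intimacy_by_user:
        return True
    return intimacy_by_user[person_id] <= threshold
```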
The mode setting unit of the robot 100 may also set the house-sitting mode based on an operation input from the user. The robot 100 may automatically change the setting to the house-sitting mode when it detects a specific event in which the user goes out through the front door. Alternatively, the robot 100 may automatically change the setting to the house-sitting mode when it has not been able to visually recognize any user in the house for a certain time or longer.
In the alert mode, the robot 100 may set its action range to positions from which the suspicious person can be photographed, so as not to overlook the suspicious person's behavior. Also, in order to avoid violence from the suspicious person, it may act at a distance from them. When, after reporting the suspicious person, the robot 100 receives a notification from the user that the person is "not a suspicious person", it releases the alert mode and returns to the house-sitting mode. After that, the robot 100 may relate to the unconfirmed person (the former suspicious person) in the usual way. It may also memorize the unconfirmed person's appearance and manage parameters such as intimacy, and it may even take a liking to the unconfirmed person. In the example shown in FIGS. 8 to 11, the mother is initially treated warily by the robots 100, but once the female owner (the daughter) sends the robots 100 an approval notification, the robots 100 begin to cling to the mother. The mother can feel that the robots 100 have suddenly accepted and welcomed her.
In the house-sitting mode, a user (owner) at a remote location may send an indoor confirmation instruction to the robot 100 via a smartphone. When the robot 100 receives the indoor confirmation instruction, it patrols the house according to the map generated based on SLAM. At this time, the robot 100 may transmit captured images to the user's smartphone, and may notify the user of the presence or absence of an abnormal event. By sending an indoor confirmation instruction, the user can check the state of the home at any time.
<Remote operation>
FIGS. 12 to 14 are schematic diagrams for explaining action scenes in which a user who is out remotely operates the robot 100.
Two robots 100A and 100B are house-sitting in the empty house. There is also a cat in this house (FIG. 12A).
Meanwhile, the rest of the family leave the robots 100 and the cat behind and go out to town (FIG. 12B). The boy looks glum (FIG. 12C).
The boy takes out a smartphone (mobile terminal) and starts operating it (FIG. 12D).
The two robots 100 are playing in the empty house (FIG. 13A).
When the robot 100A starts moving, the robot 100B follows the robot 100A (FIG. 13B). The robot 100A and the robot 100B are chasing and playing.
The mother is worried about how the boy (her son) seems (FIG. 13C).
The boy is concerned that the cat did not seem very well when they left the house (FIG. 13D).
The boy sends a command from the smartphone to the robots 100 to "check on the cat". The robot 100A and the robot 100B move so as to photograph the point or target indicated by the command, and photograph it as appropriate. The images captured by the robot 100A and the robot 100B are sent to the smartphone. The cat is playing happily on the cat tree (FIG. 14A).
The family is reassured by the lively appearance of the cat (Fig. 14B).
The robot 100 shoots a cat playing on the cat tower. The robot 100 photographs the cat up close, and the cat gazes at the robot 100 (FIGS. 14C and 14D).
In this way, the user can send various instructions from the smartphone to the robot 100. In particular, the user can instruct the robot 100 to check the inside of the house. When a command such as "check on the cat" is transmitted as in the above example, the robot 100 detects an object corresponding to "cat" in the captured image and transmits a captured image centered on the cat to the smartphone. Such a command may be a voice command, or may be input from a graphical user interface provided on the smartphone. The user may also be able to operate the robot 100 like a radio-controlled car (hereinafter, such an operation method is referred to as "remote operation").
The images captured by the robot 100 are displayed on the smartphone, and the user may enlarge the particular portion of the live-relayed image that he or she wants to see. The robot 100 may transmit the omnidirectional image itself to the smartphone, and the user may check what the robot 100 "saw" from the omnidirectional image.
The robot 100 can recognize not only species such as humans and cats but also the individual level of "who" and "which one". Among cats, a black cat and a white cat, or a large cat and a small cat, are treated as different cats. The robot 100 also learns a cat's name based on the user calling out to the cat. For example, a model that takes an image of a cat as input and outputs the cat's name may be generated by machine learning, using the cat's name recognized by voice analysis and the cat's image extracted from the image captured when the name was recognized. By using such a model, even when there are multiple cats, if the user designates a cat by name in a command, the robot 100 can select the designated cat as the subject to photograph. The user may also register a cat's name and a photograph of the cat in advance via a smartphone or the like.
When the user remotely operates the robot 100A, the robot 100B may move following the robot 100A. The captured image from the robot 100A and the captured image from the robot 100B are both transmitted to the user's smartphone. When the robot 100A images the cat, the robot 100B near the robot 100A also images the cat with its omnidirectional camera 113. Just by remotely operating the robot 100A, the user can obtain captured images of the cat not only from the robot 100A but also from the robot 100B. By remotely operating only the robot 100A, the user can indirectly remotely operate the robot 100B as well. This is because the robot 100B is given a "following function".
The user can set the robot 100 to the remote operation mode from the smartphone, and can also end the remote operation mode of the robot 100 from the smartphone. The robot 100 in the remote operation mode may change the display of its eyes 110. For example, the robot 100 may change the eyes 110 to red eyes, or may display an icon in the eyes 110 to visually express that it is "being controlled (remotely operated)". When the remote operation mode ends, the robot 100 returns the eyes 110 to the normal black-eye display and returns to the location it was at when the remote operation mode started. When the remote operation mode ends, the robot 100 may sit down, or may shake its head vigorously, as a behavioral expression of "having escaped from control and regained its sense of self".
 The robot 100 in the remote operation mode does not change its emotion parameters or intimacy values.
 When switching to the remote operation mode, the robot 100 authenticates the person who requested the remote operation. Only when the authentication succeeds does the robot 100 switch to the remote operation mode. The authentication may use a common method based on an account name and password, or an electronic certificate may be registered in advance in the device used for remote operation so that only access from a device holding the certificate is permitted. Furthermore, the camera or microphone of a mobile terminal such as a smartphone may be used to confirm that the person operating the terminal is the owner of the robot 100. When the remote operation mode is requested, users present near the robot may also be asked to approve the switch. By sufficiently confirming in this way that the person requesting the remote operation mode is the owner of the robot 100, unintended remote operation by a third party can be prevented.
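 The sketch below illustrates how such layered checks could be combined before entering the remote operation mode. The credential store, trusted certificate list, and approval callback are hypothetical stand-ins; the original text does not specify concrete interfaces.

```python
# Illustrative sketch only: layered checks before entering remote operation mode.
import hashlib

REGISTERED_OWNERS = {"owner": hashlib.sha256(b"secret").hexdigest()}
TRUSTED_DEVICE_CERTS = {"device-cert-123"}

def authenticate_remote_request(account, password, device_cert, nearby_user_approves=None):
    """Return True only if every configured check passes."""
    if REGISTERED_OWNERS.get(account) != hashlib.sha256(password.encode()).hexdigest():
        return False                       # account/password check failed
    if device_cert not in TRUSTED_DEVICE_CERTS:
        return False                       # device certificate not registered in advance
    if nearby_user_approves is not None and not nearby_user_approves():
        return False                       # a user near the robot declined the switch
    return True

# Usage: the robot switches to remote operation mode only on success.
if authenticate_remote_request("owner", "secret", "device-cert-123", lambda: True):
    print("switching to remote operation mode")
```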
<Watching over the elderly>
 Figures 15 to 18 are schematic diagrams illustrating behavior scenes in which a plurality of robots 100 watch over an elderly person.
 An elderly father lives alone. His only daughter lives away from him. There are two robots 100 in the father's house (Fig. 15A).
 In her own living room, the daughter is looking at her smartphone (Fig. 15B). The robots 100 record their life with the father as a life log. The life log is a diary that shows what is happening around the father in a form that respects his privacy.
 The daughter checks the life log on her smartphone (Fig. 15C). The life log contains simple information such as what time the father got up and whether he ate breakfast.
 Meanwhile, the father is holding and petting the robot 100 (Fig. 15D). If the robot 100 recognizes the father's permission through image analysis, voice analysis, communication, or the like, it may send a captured image of him playing with the robot to the daughter's smartphone. For example, when the robot 100A is being held by the father, the robot 100B may act as the cameraman, photograph the robot 100A and the father, and transmit the captured image to the daughter's smartphone.
 The daughter is reassured to see the father enjoying life with the robots 100 in his living room (Fig. 16A).
 Next, assume a scene in which the daughter is working at the office. She happens to take out her smartphone and checks the father's life log (Fig. 16B).
 Suppose this life log contains almost no records about the father. The daughter suddenly becomes worried about him (Fig. 16C).
 The daughter calls the father from the office corridor (Fig. 16D).
 The father's cheerful voice is heard as soon as he answers the phone (Fig. 17A).
 The father is at an inn; it seems the call came just as he got out of the bath (Fig. 17B).
 The father tells the daughter that he has come to a hot spring with friends (Fig. 17C).
 The daughter did not know that the father was going to the hot spring, so she is relieved to learn the circumstances (Fig. 17D).
 The conversation between father and daughter continues (Figs. 18A and 18B). At the father's house, the two robots 100 are looking after the home while he is away (Fig. 18C).
 The robot 100 records, as a life log, various events that occur in daily life with the father (the elderly person to be watched over). The life log records the father's daily routines, such as what time he got up and whether he did his usual exercises today. Through the life log provided by the robot 100, the daughter can confirm whether the father is living his life as usual.
 When no event indicating interaction between the father and the robot 100 occurs (is detected) for a predetermined time or longer, the robot 100 may transmit an anomaly notification to the daughter's smartphone. For example, an anomaly notification may be transmitted when the robot 100 has not been touched by the father for some time, or when the father is still lying down at noon. Whether the father is lying down can be determined by image analysis, temperature sensor analysis, or the like. The robot 100 may also actively move around the room so as to increase its chances of seeing the father.
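 A minimal sketch of such an anomaly check is shown below. It assumes each interaction event in the life log carries a timestamp; the six-hour timeout and the notify() target are illustrative assumptions, since the original text only says "a predetermined time".

```python
# Illustrative sketch only: an anomaly check over the life log.
from datetime import datetime, timedelta

INTERACTION_TIMEOUT = timedelta(hours=6)   # assumed "predetermined time"

def check_for_anomaly(last_interaction, now, is_lying_down, notify):
    """Send an anomaly notification when the watching conditions suggest something is wrong."""
    if now - last_interaction > INTERACTION_TIMEOUT:
        notify("No interaction with the robot for %s" % (now - last_interaction))
    elif is_lying_down and now.hour >= 12:
        notify("Target person still lying down at noon")

# Usage: called periodically by the robot's behavior loop.
check_for_anomaly(
    last_interaction=datetime(2019, 12, 17, 5, 0),
    now=datetime(2019, 12, 17, 12, 30),
    is_lying_down=False,
    notify=lambda msg: print("anomaly notification:", msg),
)
```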
 Because the life log abstracts the information, the daughter can check on the father's daily life while his privacy is protected. When an elderly person is the target of watching, the robot 100 does not need to keep the elderly person in sight at all times. The elderly person leads an independent life, and the robot 100 basically only needs to behave autonomously. It is considered preferable for the elderly person and the robot 100 to maintain a moderate sense of distance. Not only while watching, the robot 100 may always record a life log whenever the user so wishes. The robot 100 then only needs to send the daughter an anomaly notification when something unusual happens in the elderly person's life.
<Expression of jealousy>
 People cannot be indifferent to affection directed at themselves. When more than one person directs affection toward the same person, jealousy easily arises. Therefore, when the robot 100A and the robot 100B live with a user, one robot 100 may behave in a way that makes the user sense its jealousy toward the other robot 100.
 For example, when the robot 100B is being held by the user, the state management unit 244 raises the approval-desire value (the desire to be acknowledged), which is one of the emotion parameters of the robot 100A. As the approval-desire value rises, the motion control unit 150 of the robot 100A makes the robot beg the user for a hug. The robot 100A may stare at the user, approach the user, or wander around the user to ask to be held. When the user walks, the robot 100A may follow the user. The rise in the approval-desire value is thus expressed externally as a behavioral characteristic of the robot 100, as if its jealousy had been stirred.
 When the user keeps holding the robot 100B, the robot 100A may actively express "strong jealousy" by clinging to the user. Alternatively, the robot 100A may express jealousy passively by moving to a position away from the user and gazing at the user from a distance. How such jealousy is expressed is determined according to each robot 100's individuality (an initially set individuality or a nurtured one). Jealousy may also be expressed by "sulking": for a certain period after the jealousy-inducing event, the robot may behave so as to move away even when the user approaches. The robot 100 may also express "sulking" by temporarily refusing to be held.
 The robot 100 may behave more jealously the higher its intimacy is. For example, suppose the robot 100A has a high intimacy with the user P1 and a relatively low intimacy with the user P2. In this case, when the user P1 holds the robot 100B, the approval-desire value may be raised more than when the user P2 holds the robot 100B. With such a control method, the robot can express behavior as if it had a possessive desire to monopolize the affection of the user it particularly likes.
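 The sketch below illustrates the update described above, with the increase in the approval-desire value scaled by intimacy toward the user doing the hugging. All numbers (base increment, threshold, intimacy scale) are hypothetical.

```python
# Illustrative sketch only: intimacy-weighted jealousy via the approval-desire value.
APPROVAL_THRESHOLD = 70

class RobotEmotion:
    def __init__(self):
        self.approval_desire = 0          # "desire to be acknowledged"
        self.intimacy = {}                # user id -> 0..100

    def on_other_robot_hugged(self, hugging_user):
        """Another robot was hugged: jealousy grows more for well-liked users."""
        scale = self.intimacy.get(hugging_user, 10) / 100.0
        self.approval_desire += 20 * scale

    def next_motion(self):
        if self.approval_desire > APPROVAL_THRESHOLD:
            return "beg_for_hug"          # stare, approach, or hover around the user
        return "autonomous_behavior"

robot_a = RobotEmotion()
robot_a.intimacy["P1"] = 90
robot_a.intimacy["P2"] = 30
for _ in range(4):
    robot_a.on_other_robot_hugged("P1")   # P1 keeps hugging robot 100B
print(robot_a.next_motion())              # -> "beg_for_hug"
```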
 The robot 100 may notify other robots 100 of its own state (emotion parameters, intimacy, events, and so on); such a notification is hereinafter referred to as a "state notification". Based on state notifications, the robots 100 may be able to grasp each other's states. For example, by notifying the robot 100B of states such as "being held by the user", "being stroked by the user", or "having been dressed by the user", the robot 100A allows the robot 100B to grasp the state of the robot 100A. When an event such as being held lowers the approval-desire value (the desire to be acknowledged) of the robot 100A while the approval-desire value of the robot 100B remains high, at or above a threshold, the robot 100B exhibits behavioral characteristics peculiar to expressing jealousy.
 In the present embodiment, the state management unit 244 of the server 200 collectively manages the emotion parameters of each robot 100. In this case, the state management unit 244 may internally notify the robot 100B of the value of an emotion parameter of the robot 100A, or may change an emotion parameter of the robot 100B based on an emotion parameter of the robot 100A. The emotion parameter of the robot 100B may also be changed on the condition that the robot 100A is at a position visible from the robot 100B. This is to express that the robot 100B near the robot 100A visually senses the change in the robot 100A's emotion and changes its own emotion parameters accordingly.
 The notification is not limited to emotion parameters; the robot 100A may also notify the robot 100B by short-range wireless communication such as infrared. In this case, the robot 100B can receive the state notification from the robot 100A only when it is near the robot 100A and there is no obstacle blocking the line of sight, which makes it possible to express that "the state can be sensed only when the robots are close enough to see each other". The robot 100B may also detect events such as the robot 100A being held or stroked from captured images. When the robot 100B recognizes from an image that a pleasant act has been performed on the robot 100A, it may change its own emotion parameters.
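 A minimal sketch of a state notification passed between two robots is given below, with the receiver reacting only when the sender is in sight. The message fields and the visibility check are hypothetical stand-ins for what the text describes.

```python
# Illustrative sketch only: a state notification and a visibility-gated reaction.
def make_state_notification(sender_id, event, approval_desire):
    return {"sender": sender_id, "event": event, "approval_desire": approval_desire}

class ReceivingRobot:
    def __init__(self, robot_id):
        self.robot_id = robot_id
        self.approval_desire = 80         # already high -> prone to jealousy
        self.visible_robots = set()       # robots currently recognized in the camera image

    def on_state_notification(self, msg):
        """React only if the sending robot can actually be seen."""
        if msg["sender"] not in self.visible_robots:
            return
        if msg["event"] == "hugged" and self.approval_desire >= 70:
            print(self.robot_id, "expresses jealousy toward", msg["sender"])

robot_b = ReceivingRobot("100B")
robot_b.visible_robots.add("100A")
robot_b.on_state_notification(make_state_notification("100A", "hugged", 20))
```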
 The robot 100 may be jealous not only of another robot 100 but also of a pet or a child. For example, the approval-desire value of the robot 100 may also be raised when the user holds a cat. The user may then find it necessary to be considerate, for instance by petting the pet where the robot 100 cannot see, or by giving equal attention to the pet and the robot 100, so as not to make the robot 100 jealous. By actively creating opportunities for the user to think about the robot 100's feelings, the user's attachment to the robot 100 can be deepened.
 The robot 100 of the present embodiment can express its feelings through its behavior without conversation. The robot 100A may be able to receive the feelings (emotion parameters) of the robot 100B. The server 200 may reflect a change in an emotion parameter of the robot 100B in the behavior of the robot 100A. For example, when the approval-desire value of the robot 100B drops sharply (when something good is considered to have happened to the robot 100B), the robot 100A may move close to the robot 100B. When the robot 100A is notified that the robot 100B's desire for approval has been satisfied, it may raise its own approval-desire value (the feeling of wanting to be acknowledged too). With such a control method, even when the user secretly pets the robot 100B, the robot 100A can behave as if it had sensed something. In other words, mysterious behavioral expressions can be realized as if the robots 100 were communicating with each other telepathically.
<Multiple robots gazing at the same thing>
 The robot 100A and the robot 100B may keep gazing at the same target. For example, when the robot 100A gazes at a user who is relaxing, the robot 100B may gaze at the same user. The robot 100B may detect, through communication with the robot 100A or through image analysis, that the robot 100A is gazing at the relaxing user (that the robot 100A's head is oriented in the direction of the user). The robot 100B may move close to the robot 100A before gazing at the user. Because the user feels multiple gazes, the user can sense that the robots 100 are strongly interested in him or her. When the user has not paid attention to the robots 100 for a long time, the robot 100A and the robot 100B may gaze at the user simultaneously to silently ask for "involvement".
 Suppose that the robot 100A has a high intimacy, at or above a predetermined value, with the user P1, and the robot 100B also has a high intimacy, at or above the predetermined value, with the user P1. A robot 100 gazes more often at users with whom it has higher intimacy. Therefore, in the situation above, the robot 100A and the robot 100B will sometimes happen to gaze at the user P1 at the same time. The robot 100A may send the robot 100B a state notification that it is gazing at the user P1. When the robot 100B, while gazing at the user P1, receives from the robot 100A a state notification that "the robot 100A is also gazing at the user P1", the robot 100B may stage this "coincidence" by executing a specific motion, such as a surprised motion or turning its gaze toward the robot 100A. Furthermore, when both robots are gazing at the same user, the robot 100A and the robot 100B may execute motions so as to approach each other and gaze at the user P1 side by side. If both robots 100 move to positions where the user's face appears as large as possible and gaze at the user side by side, they can exert strong pressure on the user.
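 The following sketch illustrates how a robot might react when told that another robot is gazing at the same person. The target identifiers and motion names are hypothetical.

```python
# Illustrative sketch only: staging a "coincidence" on a shared gaze target.
class GazingRobot:
    def __init__(self, robot_id):
        self.robot_id = robot_id
        self.current_gaze_target = None

    def gaze_at(self, target):
        self.current_gaze_target = target

    def on_gaze_notification(self, sender_id, target):
        """Return the motions to perform when both robots watch the same person."""
        if target == self.current_gaze_target:
            return ["surprised_motion", f"look_toward_{sender_id}", f"line_up_beside_{sender_id}"]
        return []

robot_b = GazingRobot("100B")
robot_b.gaze_at("P1")
print(robot_b.on_gaze_notification("100A", "P1"))
```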
 Also, when an insect gets into the house (when an insect is detected in the house by image analysis, sound analysis, or the like), the robot 100A and the robot 100B may share the target insect and gaze at it simultaneously, behaviorally expressing an unusual interest in the insect. Furthermore, by gazing at each other, the robot 100A and the robot 100B can realize behavioral expressions as if the robots 100 were signaling something to each other.
 When the value of the emotion parameter indicating the robot 100A's curiosity exceeds a threshold, the robot 100A may send the robot 100B a state notification that its "curiosity is heightened". At this time, the robot 100B may approach the robot 100A and execute motions as if it wanted to know the source of the robot 100A's curiosity, such as moving its hand to touch the robot 100A. The robot 100A may also notify the robot 100B of the object of interest in its omnidirectional image and the direction of that object. When the robot 100B receives this notification, it may gaze at the same object by turning its head or line of sight toward the same object as that of the robot 100A.
<Appeal>
 When a predetermined appeal condition is satisfied, for example when the approval-desire value exceeds a threshold, the robot 100 performs a strong appeal behavior toward the user. An appeal behavior here is a behavior, such as touching, calling out, or asking to be held, that actively seeks the user's involvement with the robot 100. For example, suppose the user is exercising indoors, such as doing yoga. If the robot 100's appeal condition is satisfied while the user is absorbed in yoga, the robot 100 may perform appeal behaviors such as continuing to gaze at the user or wandering around the user, asking the user to interrupt the yoga.
<Following behavior>
 As described above, when the robot 100A moves, the robot 100B may follow it from behind while keeping a constant distance from the robot 100A. The robot 100A may similarly follow a user or a pet. For example, when a dog is following a user, the robot 100A may follow the dog or the user. The robot 100B may follow the robot 100A while the robot 100A is following a dog or the like. The distance between a robot and its following target (for example, a dog following a user) may be equal to the distance between that target and the target it is itself following (for example, the user), or may be shorter or longer than that distance by a predetermined length.
 When the robot 100 detects that a moving object Q1 and a moving object Q2 have been moving in the same direction for a predetermined time or longer, it determines that "following" is occurring. With such a control method, a behavioral expression is possible that conveys the robot 100's instinct to join in once following has started. The sight of a plurality of robots 100 following one another is considered effective in appealing the robots' charm to the user.
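 A minimal sketch of this detection rule is given below: "following" is declared when two moving objects keep heading the same way for long enough. The angle tolerance and the required number of steps are hypothetical values standing in for "a predetermined time".

```python
# Illustrative sketch only: detecting that object Q2 is following object Q1.
import math

ANGLE_TOLERANCE = math.radians(30)
REQUIRED_STEPS = 5                      # stands in for "a predetermined time"

def is_following(headings_q1, headings_q2):
    """headings_*: per-step movement directions in radians for objects Q1 and Q2."""
    aligned = 0
    for h1, h2 in zip(headings_q1, headings_q2):
        diff = abs((h1 - h2 + math.pi) % (2 * math.pi) - math.pi)
        aligned = aligned + 1 if diff <= ANGLE_TOLERANCE else 0
        if aligned >= REQUIRED_STEPS:
            return True
    return False

q1 = [0.1, 0.12, 0.08, 0.1, 0.11, 0.09]   # dog's headings
q2 = [0.05, 0.1, 0.09, 0.12, 0.1, 0.08]   # user's headings
print(is_following(q1, q2))               # -> True
```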
 The robot 100 may execute the following behavior on the condition that an emotion parameter of the robot 100, for example the emotion parameter indicating curiosity, is at or below a threshold. With such a control method, a behavioral expression becomes possible in which the robot follows another robot when its curiosity has faded and it is bored, but does not do so when its curiosity is heightened. Once following behavior has started, it may be ended on the condition that curiosity has risen to the threshold or above due to various events. The source of an autonomous robot's behavior is a set of predetermined parameters representing its internal state. In the present embodiment, the parameter indicating curiosity contributes greatly to behavior, but if the external environment changes little, the curiosity parameter may approach zero. In such a case, rather than waiting for its own parameters to change, the robot can actively change its own parameters by piggybacking on the behavior of another robot.
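 The following sketch gates the start and end of following behavior on such a curiosity parameter. The threshold and the amount by which curiosity recovers per event are hypothetical.

```python
# Illustrative sketch only: curiosity-gated following behavior.
CURIOSITY_THRESHOLD = 30

class FollowController:
    def __init__(self):
        self.curiosity = 10               # bored: little change in the environment
        self.following = False

    def update(self, another_robot_moving, event_stimulus=0):
        self.curiosity = min(100, self.curiosity + event_stimulus)
        if not self.following and another_robot_moving and self.curiosity <= CURIOSITY_THRESHOLD:
            self.following = True         # bored enough to tag along
        elif self.following and self.curiosity >= CURIOSITY_THRESHOLD:
            self.following = False        # something interesting happened; stop following
        return self.following

ctrl = FollowController()
print(ctrl.update(another_robot_moving=True))                     # True: starts following
print(ctrl.update(another_robot_moving=True, event_stimulus=40))  # False: curiosity recovered
```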
 As described above, a plurality of robots 100 may take the same action, as in the following behavior, or may change their behavioral characteristics while being influenced by one another's behavior. To enhance the cooperation and coordination of a plurality of robots 100, it is desirable that the robots' states can be grasped mutually, either within the server 200 or between the robots 100 themselves. The robot 100B, having grasped the state of the robot 100A, may act in synchronization with the robot 100A's state, or may act independently without synchronizing. An example of the robot 100B synchronizing with the robot 100A is the robot 100B executing a motion of the same category toward the same object as the robot 100A's object, such as gazing at the same thing that the robot 100A is gazing at.
 When a plurality of robots 100 act cooperatively, the user is likely to find them endearing. The user may want to take a photograph of the robots 100 acting in concert. When the robot 100 recognizes from an image taken by the omnidirectional camera 113 that the user is holding up a camera, the robot 100 may maintain at least one of its behavior and its state until the user finishes shooting. Alternatively, instead of maintaining its behavior or state, the robot 100 may select a specific motion. For example, the robot 100 may turn its body toward the user, or may cooperate with the user's shooting by temporarily suspending the cooperative behavior. In this way, the robot 100 may temporarily stop its operation when it detects a shooting action. Moreover, not only during cooperative behavior, the robot 100 may strike a pose or temporarily stop acting whenever it detects the user's shooting action. With such a control method, the user can easily upload captured images of the robot 100 looking cute to an SNS (Social Networking Service) or the like. It should also become easier to take various best-shot images of the robot 100 interacting with the various things in the home (pets, children, toys, furniture, and so on).
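 The sketch below illustrates suspending the current behavior while a camera is detected and resuming it afterward. The detection flag is assumed to come from image recognition on the omnidirectional camera 113; the motion names are hypothetical.

```python
# Illustrative sketch only: holding a pose while the user's camera is detected.
class PhotoCooperation:
    def __init__(self):
        self.saved_motion = None

    def step(self, current_motion, camera_detected):
        if camera_detected:
            if self.saved_motion is None:
                self.saved_motion = current_motion   # hold the pose for the photo
            return "hold_pose"
        if self.saved_motion is not None:
            resumed, self.saved_motion = self.saved_motion, None
            return resumed                           # shooting finished, resume behavior
        return current_motion

coop = PhotoCooperation()
print(coop.step("cooperative_play", camera_detected=True))   # -> "hold_pose"
print(coop.step("cooperative_play", camera_detected=False))  # -> "cooperative_play"
```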
<Structure of the outer skin>
 The outer skin 314 of the robot 100 is formed by housing a stretchable base material in a cloth bag. The bag may be made of any flexible material that feels warm and pleasant to the user's touch. The base material is preferably a flame-retardant material, and more preferably a material that releases a self-extinguishing gas when its temperature rises. For example, the base material is made of a flame-retardant sponge. Because the outer skin 314 is formed by wrapping the flame-retardant base material in the cloth bag, even if the cloth bag catches fire, self-extinguishing gas is released from the base material, so the fire is prevented from spreading over the bag. The threshold temperature at which the base material generates the self-extinguishing gas is preferably lower than the ignition temperature of the cloth. In that case, when the cloth heats up, the self-extinguishing gas is generated before the cloth ignites, so ignition of the cloth can be prevented. This dual structure of a flame-retardant base material and a flexible bag allows the outer skin 314 to combine a warm feel with safety against high temperatures.
 The present invention is not limited to the above embodiment and modifications, and the constituent elements can be modified and embodied without departing from the gist of the invention. Various inventions may be formed by appropriately combining a plurality of the constituent elements disclosed in the above embodiment and modifications. Some constituent elements may also be deleted from all the constituent elements shown in the above embodiment and modifications.
 Although the robot system 300 has been described as being composed of one or more robots 100 and one server 200, some of the functions of the robot 100 may be realized by the server 200, and some or all of the functions of the server 200 may be assigned to the robot 100. One server 200 may control a plurality of robots 100, or a plurality of servers 200 may cooperate to control one or more robots 100.
 A third device other than the robot 100 and the server 200 may take on part of the functions. The aggregate of the functions of the robot 100 and the functions of the server 200 described with reference to Fig. 4 can also be understood, broadly speaking, as a single "robot". How the plurality of functions required to realize the present invention are distributed over one or more pieces of hardware may be determined in view of the processing capability of each piece of hardware, the specifications required of the robot system 300, and the like.
 As described above, the "robot in the narrow sense" refers to the robot 100 excluding the server 200, whereas the "robot in the broad sense" refers to the robot system 300. It is also conceivable that many of the functions of the server 200 will be integrated into the robot 100 in the future.
 This application claims priority based on Japanese Patent Application No. ××××-×××××× filed on ×/×/××××, and the entire disclosure thereof is incorporated herein.

Claims (11)

  1. A robot comprising:
     a motion control unit that selects a motion of the robot;
     a drive mechanism that executes the motion selected by the motion control unit;
     a recognition unit that determines whether a target person satisfies a predetermined watching condition; and
     a mode setting unit that sets a watching mode for the target person when the watching condition is satisfied.
  2. The robot according to claim 1, further comprising a head, wherein the drive mechanism executes, during the watching mode, a motion of keeping the head directed in the direction in which the target person is present.
  3. The robot according to claim 2, wherein, in the watching mode, the robot shares the position of the target person with another robot and directs the head toward the same gaze point as the other robot, the gaze point being determined based on the target person, or toward a gaze point within a predetermined distance of the other robot's gaze point.
  4. The robot according to claim 2 or 3, wherein, in the watching mode, at least one of another robot and the robot itself is configured to direct the head toward the target person.
  5. The robot according to any one of claims 1 to 4, wherein, during the watching mode, the distance to the target person is controlled to be within a predetermined distance.
  6. The robot according to any one of claims 1 to 5, wherein, during the watching mode, the amount of operation of the drive mechanism is made lower than in at least one mode in which the watching condition is not satisfied.
  7. The robot according to any one of claims 1 to 6, wherein, during the watching mode, a motion directed at the target person is executed when it is determined that the target person satisfies a predetermined condition.
  8. The robot according to any one of claims 1 to 7, wherein, during the watching mode, a motion directed at the target person is executed when it is determined that the target person satisfies a predetermined condition.
  9. The robot according to any one of claims 1 to 9, wherein the mode setting unit is configured to set the watching mode when, in addition to the watching condition, a condition that no person of a predetermined age or older is detected in the surroundings is satisfied.
  10. The robot according to any one of claims 1 to 9, wherein the mode setting unit is configured to set the watching mode when, in addition to the watching condition, a condition that a person associated with the target person has not been detected for a week is satisfied.
  11. The robot according to any one of claims 1 to 10, further comprising a communication unit that transmits a captured image of the target person to a predetermined communication terminal in the watching mode.
PCT/JP2019/049463 2018-12-17 2019-12-17 Autonomous robot WO2020129993A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2020561466A JPWO2020129993A1 (en) 2018-12-17 2019-12-17 robot
JP2023202284A JP2024055866A (en) 2018-12-17 2023-11-29 A robot that autonomously selects actions according to its internal state or external environment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018235230 2018-12-17
JP2018-235230 2018-12-17

Publications (1)

Publication Number Publication Date
WO2020129993A1 true WO2020129993A1 (en) 2020-06-25

Family

ID=71101844

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/049463 WO2020129993A1 (en) 2018-12-17 2019-12-17 Autonomous robot

Country Status (2)

Country Link
JP (2) JPWO2020129993A1 (en)
WO (1) WO2020129993A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113269946A (en) * 2021-03-22 2021-08-17 陇东学院 Security alarm device for community Internet of things rescue
CN116206779A (en) * 2023-04-28 2023-06-02 山东铭泰医疗设备集团有限公司 Wisdom ward interactive system based on visual perception

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001246550A (en) * 2000-03-02 2001-09-11 Ebara Corp Polishing device
JP2004185080A (en) * 2002-11-29 2004-07-02 Toshiba Corp Security system and mobile robot
JP2018094683A (en) * 2016-12-14 2018-06-21 株式会社メニコン Watching type pet robot

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001246580A (en) * 2000-03-03 2001-09-11 Sony Corp Information communication robot device, information communication method, and information communication robot system


Also Published As

Publication number Publication date
JP2024055866A (en) 2024-04-18
JPWO2020129993A1 (en) 2021-12-02

Similar Documents

Publication Publication Date Title
JP7231924B2 (en) Autonomous action robot whose activity level is controlled
JP7320239B2 (en) A robot that recognizes the direction of a sound source
CN109475781B (en) Behavioral autonomous robot understanding physical contact
CN110024000B (en) Behavior autonomous robot for changing pupil
JP2024055866A (en) A robot that autonomously selects actions according to its internal state or external environment
CN117001687A (en) Robot and behavior autonomous robot
JP7236142B2 (en) Autonomous action robot
JP6517457B2 (en) Robot wearing a hull
CN109689174B (en) Robot for receiving visitor and server
JP6671577B2 (en) An autonomous robot that identifies people
JP7420387B2 (en) robot wearing costume
JP2020000279A (en) Autonomously acting type robot assuming virtual character
JP2019171566A (en) Robot having soft outer skin
JP6575005B2 (en) Joint structure suitable for robot joints
JP6734607B2 (en) Robots, portable items and robot control programs
JP7375770B2 (en) Information processing device, information processing method, and program
WO2019151387A1 (en) Autonomous behavior robot that behaves on basis of experience
JP7298861B2 (en) Autonomous robot that records daily life
Oei The Snow White's Character's Transformation In “Snow White And The Seven Dwarfs” And “Mirror Mirror” Seen From Feminist Theory

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19898552

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020561466

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19898552

Country of ref document: EP

Kind code of ref document: A1