WO2020022371A1 - Robot, method for controlling robot, and control program - Google Patents


Info

Publication number
WO2020022371A1
Authority
WO
WIPO (PCT)
Prior art keywords: robot, unit, notification, target person, notification target
Application number: PCT/JP2019/028975
Other languages: French (fr), Japanese (ja)
Inventors: 要 林, 秀哉 南地, 泰士 深谷, 直人 吉岡
Original assignee: Groove X株式会社
Application filed by Groove X株式会社
Priority: JP2020532436A (published as JPWO2020022371A1)
Publication of WO2020022371A1

Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L9/00: Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
    • A47L9/28: Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means

Definitions

  • the present invention relates to a robot, a control method thereof, and a control program.
  • there is known a robot that takes an image with a camera while moving autonomously in a house, recognizes the indoor space from the captured image, sets a movement route based on the recognized space, and moves indoors.
  • the robot's movement route is set by the user creating, in advance, a map that defines the route on which the robot moves.
  • the robot can move on a route determined based on the created map (for example, see Patent Literature 1).
  • the conventional robot may be unable to notify a specific target person of information, such as an alarm, when that target person is not present near the robot.
  • the present invention has been made in view of the above circumstances, and in one embodiment provides a robot capable of notifying a specific notification target person of information, as well as a control method and a control program thereof.
  • the robot according to the embodiment includes an execution information acquisition unit that acquires execution information for executing an operation of notifying a notification target person of information, a search unit that searches for the notification target person based on the execution information acquired by the execution information acquisition unit, and a notification operation execution unit that executes a notification operation based on the execution information for the notification target person found by the search unit.
  • the execution information acquisition unit acquires location information relating to a location designated by the user as execution information.
  • the execution information acquisition unit acquires, from the user terminal, location information specified by the user operating a map displayed on the user terminal.
  • the search unit further searches for a notification target person based on a captured image of the surrounding space.
  • the search unit searches for a notification target person by recognizing a person included in the captured image.
  • the robot according to the embodiment further includes a movement control unit that controls the movement mechanism; the search unit calculates a movement path for the movement mechanism based on the execution information, and the movement control unit controls the movement mechanism based on the calculated movement path.
  • the search unit calculates the movement route further based on the restriction information for restricting the movement by the moving mechanism.
  • the robot further includes a marker recognition unit that recognizes a predetermined marker included in a captured image of the surrounding space, and the movement control unit controls the movement mechanism based on the marker recognized by the marker recognition unit.
  • the robot further includes a state information acquisition unit that acquires state information relating to the state of the notification target person found by the search unit, and the notification operation execution unit changes the notification operation according to the state acquired by the state information acquisition unit.
  • the state information acquisition unit acquires whether the notification target person is sleeping or awake, and the notification operation execution unit executes, as the notification operation, an operation corresponding to the acquired state.
  • the execution information acquisition unit acquires execution information associated with the notification target person, and the notification operation execution unit executes the notification operation associated with the notification target person who has been found.
  • the execution information acquisition unit acquires, as execution information, intimacy information indicating the degree of intimacy between the notification target person and the robot, and the notification operation execution unit executes the notification operation based on the intimacy information acquired by the execution information acquisition unit.
  • the notification operation execution unit executes the notification operation in cooperation with another robot.
  • the execution information acquisition unit acquires execution information associated with a plurality of notification target persons, and the notification operation execution unit executes, in parallel, the notification operations associated with each of the notification target persons.
  • the execution information acquisition unit acquires time information relating to the time for executing the notification operation as execution information, and the notification operation execution unit executes the notification operation based on the time information.
  • the robot according to the embodiment further includes a voice recognition unit that recognizes voice input to the microphone and converts the voice into language data, and the execution information acquisition unit specifies execution information based on the converted language data.
  • when the acquired execution information does not include time information relating to a time for executing the notification operation, the notification operation execution unit may execute the notification operation at the point when a predetermined time has elapsed from the time when the execution information was acquired.
  • the execution information acquisition unit sets the location of the microphone to which the voice is input as the location information.
  • the robot according to the embodiment further includes a photographing unit that photographs the notification target person after executing the notification operation, and a transmission unit that transmits the image data of the photographed notification target person to the user terminal.
  • the imaging unit changes the conditions for imaging the notification target person according to the intimacy between the notification target person and the robot.
  • the robot further includes a state information acquisition unit that acquires state information relating to the state of the notification target person after executing the notification operation, and a transmission unit that transmits the acquired state information to the user terminal.
  • the robot includes a message receiving unit that receives a message addressed to a user of the robot, a notification target person specifying unit that specifies the notification target person from the destination of the received message, a search unit that searches for the notification target person at a place specified by the message or a place specified in correspondence with the notification target person, and a notification unit that, for the notification target person found by the search unit, reads out the message or performs a notification operation instructed by the message.
  • the robot control method includes: an execution information acquisition step of acquiring, in the robot, execution information for executing a notification operation of information for a notification target person; a search step of searching for the notification target person based on the execution information acquired in the execution information acquisition step; and a notification operation execution step of executing the notification operation based on the execution information for the notification target person found in the search step.
  • the robot control program causes a robot to execute: an execution information acquisition process of acquiring execution information for executing a notification operation of information for a notification target person; a search process of searching for the notification target person based on the execution information acquired in the execution information acquisition process; and a notification operation execution process of executing the notification operation based on the execution information for the notification target person found in the search process.
  • the robot control method includes: a message receiving step of receiving, in the robot, a message addressed to a user of the robot; a notification target person specifying step of specifying the notification target person from the destination of the received message; a search step of searching for the notification target person at a place specified by the message or a place specified in correspondence with the notification target person; and a notification step of reading out the message or performing a notification operation instructed by the message for the notification target person found in the search step.
  • the robot control program causes a robot to execute: a message receiving process of receiving a message addressed to a user of the robot; a notification target person specifying process of specifying the notification target person from the destination of the received message; a search process of searching for the notification target person at a place specified by the message or a place specified in correspondence with the notification target person; and a notification process of reading out the message or performing a notification operation instructed by the message for the notification target person found in the search process.
  • a robot, a control method thereof, and a control program acquire execution information for executing a notification operation of information for a notification target person, and execute the notification operation based on the acquired execution information.
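As a rough illustration of the acquire-search-notify flow summarized in the bullets above, the following Python sketch models the three units as plain classes. All identifiers (ExecutionInfo, SearchUnit, and so on) are illustrative assumptions rather than names from the patent; the real units would interact with sensors, the movement mechanism, and the user terminal.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExecutionInfo:
    """Illustrative container for the execution information described above."""
    target_person: str            # who should be notified
    search_location: str          # where the person is expected to be
    operation: str                # e.g. "read_message", "wake_up"
    time: Optional[str] = None    # when to notify; None means "use a default delay"

class ExecutionInfoAcquisitionUnit:
    def acquire(self, user_input: dict) -> ExecutionInfo:
        # In the patent this comes from a user terminal, voice input, etc.
        return ExecutionInfo(**user_input)

class SearchUnit:
    def search(self, info: ExecutionInfo) -> Optional[str]:
        # Placeholder: move to info.search_location and look for the person.
        print(f"searching for {info.target_person} at {info.search_location}")
        return info.target_person  # assume the person was found

class NotificationOperationExecutionUnit:
    def execute(self, person: str, info: ExecutionInfo) -> None:
        print(f"performing '{info.operation}' for {person}")

# Minimal end-to-end flow
acquirer = ExecutionInfoAcquisitionUnit()
info = acquirer.acquire({"target_person": "Alice", "search_location": "bedroom",
                         "operation": "wake_up", "time": "07:00"})
found = SearchUnit().search(info)
if found is not None:
    NotificationOperationExecutionUnit().execute(found, info)
```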
  • FIG. 1 is a block diagram illustrating an example of a software configuration of the autonomous behavior robot according to the embodiment.
  • FIG. 2 is a block diagram illustrating an example of a hardware configuration of the autonomous behavior robot according to the embodiment.
  • a flowchart showing a first example of the operation of the autonomous behavior robot control program according to the embodiment.
  • a flowchart showing a second example of the operation of the autonomous behavior robot control program according to the embodiment.
  • a flowchart of the autonomous behavior robot control program according to the embodiment when performing a wake-up operation as the notification operation.
  • a diagram illustrating an example of execution information according to the embodiment.
  • a diagram illustrating an example of a method for setting execution information according to the embodiment.
  • a diagram illustrating an example of a module configuration of a data providing apparatus for specifying the location of a user terminal.
  • a diagram illustrating an example of a module configuration of a data providing apparatus for specifying the location of a robot.
  • a diagram illustrating an example of a module configuration of an autonomous behavior robot that provides a captured image and state information after executing a notification operation.
  • a diagram illustrating a first specific example of a notification operation in which two robots cooperate.
  • a diagram illustrating a second specific example of a notification operation in which two robots cooperate.
  • a diagram illustrating a third specific example of a notification operation in which two robots cooperate.
  • a flowchart showing a notification operation.
  • FIG. 1 is a block diagram illustrating an example of a software configuration of the autonomous behavior robot 1 according to the embodiment.
  • the autonomous behavior robot 1 has a data providing device 10 and a robot 2.
  • the data providing device 10 and the robot 2 are connected by communication and function as the autonomous behavior robot 1.
  • the robot 2 includes, as functional units, a photographing unit 21, a marker recognizing unit 22, a movement control unit 23, a state information acquisition unit 24, a search unit 25, a notification operation execution unit 26, a notification unit 27, and a movement mechanism 29.
  • the data providing apparatus 10 has functional units of a first communication control unit 11, a point cloud data generation unit 12, a spatial data generation unit 13, a visualization data generation unit 14, an imaging target recognition unit 15, and a second communication control unit 16.
  • the first communication control unit 11 has functional units of a captured image acquisition unit 111, a spatial data providing unit 112, and an instruction unit 113.
  • the second communication control unit 16 has functional units of a visualization data providing unit 161, a designation acquisition unit 162, and an execution information acquisition unit 163.
  • the above-described functional units of the data providing device 10 of the autonomous behavior robot 1 according to the present embodiment will be described as being functional modules realized by a data providing program (software) that controls the data providing device 10.
  • each functional unit of the robot 2, namely the marker recognition unit 22, the movement control unit 23, the state information acquisition unit 24, the search unit 25, and the notification operation execution unit 26, will be described as a functional module realized by a program that controls the robot 2 in the autonomous behavior robot 1.
  • the functions of the autonomous behavior robot 1 can be extended by adding, deleting, or changing functional modules.
  • the basic functions of the autonomous behavior robot 1 will be described as “basic functions”, and the additional functions of the autonomous behavior robot 1 will be described as “additional functions”.
  • in the data providing device 10, the visualization data providing unit 161 and the designation acquisition unit 162, and, in the robot 2, the imaging unit 21, the marker recognition unit 22, and the movement control unit 23, will be described as "basic functions".
  • the execution information acquisition unit 163 in the data providing device 10 and the state information acquisition unit 24, the search unit 25, and the notification operation execution unit 26 in the robot 2 will be described as “additional functions”.
  • the data providing device 10 is a device that can execute a part of the functions of the autonomous behavior robot 1.
  • for example, the data providing device 10 is an edge server that is installed at a location physically close to the robot 2, communicates with the robot 2, and distributes the processing load.
  • in the present embodiment, the autonomous behavior robot 1 is configured by the data providing device 10 and the robot 2, but the functions of the data providing device 10 may be included in the functions of the robot 2.
  • the robot 2 is a robot that can move based on spatial data, and is an embodiment of a robot whose moving range is determined based on spatial data.
  • the data providing apparatus 10 may be configured in one housing or may be configured in a plurality of housings.
  • the first communication control unit 11 controls a communication function with the robot 2.
  • the communication method with the robot 2 is arbitrary, and for example, a wireless LAN (Local Area Network), Bluetooth (registered trademark), short-range wireless communication such as infrared communication, or wired communication can be used.
  • the functions of the captured image acquisition unit 111, the spatial data providing unit 112, and the instruction unit 113 included in the first communication control unit 11 communicate with the robot 2 using communication functions controlled by the first communication control unit 11.
  • the photographed image acquiring unit 111 acquires a photographed image photographed by the photographing unit 21 of the robot 2.
  • the photographing unit 21 is provided in the robot 2 and can change the photographing range as the robot 2 moves.
  • the imaging unit 21, the marker recognition unit 22, the movement control unit 23, the state information acquisition unit 24, the search unit 25, the notification operation execution unit 26, the notification unit 27, and the movement mechanism 29 of the robot 2 will be described.
  • the photographing unit 21 photographs the space around the robot 2 and generates a photographed image including a space element.
  • the space element is an element that exists in the space around the robot 2 and configures the space, and is, for example, a wall, a step, a door, furniture, home appliances, luggage, houseplants, and the like placed in the room.
  • the photographing unit 21 can be composed of one or a plurality of cameras. For example, when the photographing unit 21 is a stereo camera including two cameras, the photographing unit 21 can three-dimensionally photograph a spatial element to be photographed from different photographing angles.
  • the photographed image may be a plurality of image data photographed by each camera or one image data obtained by combining the plurality of image data.
  • the imaging unit 21 is a video camera using an imaging element such as a CCD (Charge-Coupled Device) sensor or a CMOS (Complementary Metal Oxide Semiconductor) sensor.
  • the photographing unit 21 may be a camera using a ToF (Time of Flight) technology.
  • the shape of a spatial element can be measured by irradiating the spatial element with modulated infrared light and measuring the distance to the spatial element.
  • the photographing unit 21 may be a camera using a structured light.
  • structured light refers to projecting light of a stripe or lattice pattern onto a spatial element.
  • the imaging unit 21 can measure the shape of the spatial element from the distortion of the projected pattern by imaging the spatial element from a different angle from the structured light.
  • the photographing unit 21 may be any one of these cameras or a combination of two or more of them.
  • the photographing unit 21 is attached to the robot 2 and moves in accordance with the movement of the robot 2.
  • the imaging unit 21 may be installed separately from the robot 2.
  • the captured image captured by the capturing unit 21 is provided to the captured image acquisition unit 111 in a communication method corresponding to the first communication control unit 11.
  • the captured image is temporarily stored in the storage unit of the robot 2, and the captured image acquisition unit 111 acquires the temporarily stored captured image in real time or at a predetermined communication interval.
  • the marker recognizing unit 22 recognizes a predetermined marker included in the image captured by the image capturing unit 21.
  • the marker is a space element indicating a restriction on movement of the robot 2.
  • the movement restriction refers to restricting an operation associated with the movement of the robot 2, for example, restricting the movement speed of the robot 2, prohibiting the robot 2 from entering, or performing a predetermined operation of the robot 2 during the movement. (For example, generation of a sound from the robot 2).
  • the marker is the shape, pattern or color of the article recognizable from the captured image or a combination thereof.
  • the marker may be a planar article or a three-dimensional article.
  • the marker is, for example, a sticker or paper on which a two-dimensional code or a specific color combination or shape is printed.
  • the marker may be an ornament or a rug having a specific color or shape.
  • the movement of the robot can be restricted by the user's intention without impairing the atmosphere of the room.
  • since the user can visually recognize the marker, the user can intuitively grasp the movement restriction range and can easily change the restricted range.
  • the marker is set by the user, for example, by being attached to a wall or furniture, or placed on the floor.
  • the marker recognizing unit 22 can recognize that the movement of the robot 2 is restricted by recognizing the marker image included in the captured image.
  • the marker recognizing unit 22 stores the visual characteristics of the marker in advance. For example, the marker recognizing unit 22 previously stores a two-dimensional code or a three-dimensional object to be recognized as a marker.
  • the marker recognizing unit 22 may recognize an object registered in advance by a user as a marker. For example, when a user registers a flowerpot photographed by the camera of the user terminal 3 as a marker, the flowerpot installed in a corridor or the like can be recognized as a marker. Therefore, the user can install an object that does not cause discomfort at the place where the marker is installed as the marker.
  • the marker recognizing unit 22 may recognize a spatial element other than an object as a marker. For example, the marker recognizing unit 22 may recognize a gesture of the user, such as a user crossing an arm in front of the body, as a marker. The marker recognizing unit 22 recognizes a position where the user makes a gesture as a marker installation position.
  • the marker recognizing unit 22 recognizes the position where the marker is attached or installed (hereinafter, referred to as “installation position”).
  • the installation position is a position in the space where the marker in the space data is installed.
  • the installation position can be recognized, for example, in the spatial data recognized by the robot 2, based on the distance between the current position of the robot 2 and the captured marker. For example, when the size of the marker is known in advance, the marker recognizing unit 22 calculates the distance between the robot 2 and the marker from the size of the marker image included in the captured image, and can recognize the installation position of the marker based on the current position of the robot 2 and the capturing direction (for example, an azimuth obtained from a sensor, not shown).
  • the installation position may be recognized from a relative position from a spatial element whose position in space is already known to a marker. For example, when the position of the door is already known, the marker recognizing unit 22 may recognize the installation position from the relative position of the marker and the door. Further, when the captured image is captured by a depth camera, the installation position can be recognized based on the depth of the marker captured by the depth camera.
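One way to realize the distance-from-marker-size approach described above is the standard pinhole-camera relation, where distance is proportional to the marker's real size divided by its apparent size in pixels. The sketch below is a minimal illustration under that assumption; the focal length in pixels and the heading value are hypothetical and would in practice come from camera calibration and the robot's odometry.

```python
import math

def estimate_marker_distance(real_size_m: float, apparent_size_px: float,
                             focal_length_px: float) -> float:
    """Pinhole-camera estimate: the marker appears smaller as it gets farther away."""
    return focal_length_px * real_size_m / apparent_size_px

def estimate_marker_position(robot_x: float, robot_y: float, heading_rad: float,
                             distance_m: float):
    """Project the estimated distance along the robot's shooting direction."""
    return (robot_x + distance_m * math.cos(heading_rad),
            robot_y + distance_m * math.sin(heading_rad))

# Example: a 0.10 m wide marker that appears 50 px wide with a 500 px focal length
d = estimate_marker_distance(0.10, 50.0, 500.0)             # -> 1.0 m
print(estimate_marker_position(2.0, 3.0, math.pi / 2, d))   # marker roughly 1 m ahead
```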
  • the movement control unit 23 controls the movement mechanism 29.
  • the movement control unit 23 can control the movement mechanism 29 based on the movement route (described later) calculated by the search unit 25.
  • the movement control unit 23 controls the moving direction and the moving speed of the robot 2 by controlling the moving mechanism 29.
  • the movement control unit 23 can recognize the current position of the robot 2 based on the spatial data recognized by the robot 2.
  • the movement control unit 23 can move the robot 2 by controlling the movement mechanism 29 from the current position on the movement path.
  • the movement control unit 23 may move along the movement route while appropriately correcting the current position based on spatial elements such as walls or corridors, for example.
  • the movement control unit 23 may control the movement mechanism 29 based on the marker recognized by the marker recognition unit 22.
  • the movement control unit 23 restricts movement by the movement mechanism 29 based on the installation position of the marker recognized by the marker recognition unit 22.
  • the range restricted based on the marker installation position may be a point, a line, a surface, or a space set based on one or more marker installation positions.
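A minimal sketch of how marker installation positions could define a restricted region is shown below: a single marker is treated as the center of a keep-out circle, and several markers as the vertices of a keep-out polygon. The geometry helpers and the radius are illustrative assumptions, not part of the patent.

```python
import math

def inside_circle(point, center, radius):
    """True if a planned waypoint falls inside the keep-out circle around one marker."""
    return math.dist(point, center) <= radius

def inside_polygon(point, vertices):
    """Ray-casting test; vertices are the installation positions of several markers."""
    x, y = point
    inside = False
    for (x1, y1), (x2, y2) in zip(vertices, vertices[1:] + vertices[:1]):
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# One marker -> keep-out circle; four markers -> keep-out rectangle
print(inside_circle((1.0, 1.0), center=(0.0, 0.0), radius=2.0))        # True
print(inside_polygon((5.0, 5.0), [(4, 4), (6, 4), (6, 6), (4, 6)]))    # True
```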
  • the spatial data providing unit 112 provides the robot 2 with the spatial data generated by the spatial data generating unit 13.
  • the spatial data is data obtained by converting spatial elements recognized by the robot in the space where the robot 2 exists.
  • the robot 2 can move within a range defined by the spatial data. That is, the spatial data functions as a map for determining a movable range in the robot 2.
  • the robot 2 is provided with spatial data from the spatial data providing unit 112.
  • the spatial data can include position data of spatial elements such as walls, furniture, appliances, steps, etc., on which the robot 2 cannot move.
  • the robot 2 can determine whether or not it is a place where it can move, based on the provided spatial data.
  • the robot 2 may be configured to be able to recognize whether or not an ungenerated range is included in the spatial data. Whether or not the ungenerated range is included can be determined, for example, based on whether or not a part of the spatial data includes a space having no spatial element.
  • the instructing unit 113 instructs the robot 2 to shoot based on the spatial data generated by the spatial data generating unit 13.
  • the spatial data generation unit 13 creates spatial data based on the captured image acquired by the captured image acquisition unit 111. For example, when creating indoor spatial data, the spatial data may include an ungenerated portion for a part that has not been imaged. Further, if the captured image is unclear, the generated spatial data may include noise and thus an inaccurate portion. If there is an ungenerated portion in the spatial data, the instruction unit 113 may issue a shooting instruction for the ungenerated portion. In addition, when the spatial data includes an inaccurate portion, the instruction unit 113 may instruct imaging of the inaccurate portion. The instruction unit 113 may spontaneously instruct shooting based on the spatial data.
  • the instruction unit 113 may instruct shooting based on an explicit instruction from a user who has confirmed visualization data (described later) generated based on spatial data.
  • the user can specify a region included in the visualization data and instruct the robot 2 to shoot, thereby recognizing a space and generating space data.
  • the instructing unit 113 may instruct to shoot a marker set in the area.
  • the shooting in the area where the creation of the spatial data is instructed may include, for example, shooting conditions such as the coordinate position of the robot 2 (the shooting unit 21), the shooting direction of the shooting unit 21, and the resolution.
  • the spatial data generating unit 13 adds the newly created spatial data to the existing spatial data. If the designated spatial data is for re-creation, the spatial data is generated by updating the existing spatial data.
  • spatial data including the recognized marker may be generated.
  • the point cloud data generation unit 12 generates three-dimensional point cloud data of a spatial element based on the captured image acquired by the captured image acquisition unit 111.
  • the point cloud data generation unit 12 generates point cloud data by converting a spatial element included in the captured image into a set of three-dimensional points in a predetermined space.
  • the space elements are, as described above, the walls, steps, doors, furniture, home appliances, luggage, houseplants, and the like placed in the room. Since the point cloud data generation unit 12 generates the point cloud data based on the captured image of the space element, the point cloud data represents the shape of the surface of the captured space element.
  • the photographed image is generated by the photographing unit 21 of the robot 2 photographing at a predetermined photographing position at a predetermined photographing angle.
  • the space data generation unit 13 generates space data that defines a movable range of the robot 2 based on the point cloud data of the space elements generated by the point cloud data generation unit 12. Since spatial data is generated based on point cloud data in space, spatial elements included in spatial data also have three-dimensional coordinate information. The coordinate information may include information on the position, length (including height), area, or volume of the point.
  • the robot 2 can determine the movable range based on the position information of the spatial elements included in the generated spatial data. For example, when the robot 2 has the moving mechanism 29 that moves horizontally on the floor, the robot 2 excludes from the movable range a range in which a step from the floor, which is a spatial element in the spatial data, is equal to or more than a predetermined height (for example, 1 cm or more).
  • the robot 2 determines a range in which the height from the floor to a spatial element above it is equal to or more than a predetermined height (for example, 60 cm or more) as a movable range, in consideration of the clearance with its own height.
  • the robot 2 determines, in the spatial data, a range in which the gap between spatial elements such as a wall and furniture is equal to or more than a predetermined width (for example, 40 cm or more) as a movable range, in consideration of the clearance with its own width.
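The movable-range conditions above (an impassable step of about 1 cm, a vertical clearance of about 60 cm, and a horizontal gap of about 40 cm) can be expressed as a simple per-cell check. The sketch below uses those example thresholds as defaults; the function name and inputs are assumptions for illustration only.

```python
def cell_is_movable(step_height_m: float, ceiling_clearance_m: float,
                    gap_width_m: float,
                    max_step_m: float = 0.01,        # 1 cm example from the text
                    min_clearance_m: float = 0.60,   # 60 cm example
                    min_gap_m: float = 0.40) -> bool:
    """Return True if a patch of floor satisfies all three example conditions."""
    if step_height_m >= max_step_m:            # a step the robot cannot climb
        return False
    if ceiling_clearance_m < min_clearance_m:  # not enough height (e.g. under low furniture)
        return False
    if gap_width_m < min_gap_m:                # gap between wall and furniture too narrow
        return False
    return True

print(cell_is_movable(0.0, 0.75, 0.55))    # open floor -> True
print(cell_is_movable(0.03, 0.75, 0.55))   # 3 cm step  -> False
```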
  • the space data generation unit 13 may set attribute information for a predetermined area in the space.
  • the attribute information is information that defines a moving condition of the robot 2 for a predetermined area.
  • the moving condition is, for example, a condition that defines a clearance with a space element to which the robot 2 can move. For example, when a normal moving condition under which the robot 2 can move has a clearance of 30 cm or more, attribute information in which the clearance for a predetermined area is exceptionally 5 cm or more can be set. Further, as the movement condition set in the attribute information, information for restricting movement of the robot may be set.
  • the movement restriction is, for example, a restriction on a moving speed or a prohibition of entry.
  • attribute information in which the moving speed of the robot 2 is reduced may be set in an area having a small clearance or an area where a person exists.
  • the moving condition set in the attribute information may be determined by the floor material in the area.
  • the attribute information may be for setting a change in the operation (running speed or running means, etc.) of the moving mechanism 29 when the floor is a cushion floor, flooring, tatami, or carpet.
  • the attribute information may set, for example, a charging spot to which the robot 2 can move to be charged, a step where movement of the robot 2 is restricted because its posture becomes unstable, or a moving condition at a carpet edge or the like.
  • the area in which the attribute information is set may be configured so that the user can grasp the area, for example, by changing the display method in the visualization data described later.
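As a rough illustration, per-area attribute information of the kind described above could be held in a small record like the following. The field names and default values (a 30 cm normal clearance, an exceptional 5 cm clearance, a speed limit, an entry prohibition, a floor material) are assumptions chosen to mirror the examples in the text.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AreaAttribute:
    """Illustrative per-area attribute information as described above."""
    area_id: str
    min_clearance_m: float = 0.30          # normal movement condition
    max_speed_mps: Optional[float] = None  # speed restriction, if any
    entry_prohibited: bool = False
    floor_material: str = "flooring"       # e.g. cushion floor, tatami, carpet

narrow_hallway = AreaAttribute("hallway", min_clearance_m=0.05)   # exceptional 5 cm clearance
nursery = AreaAttribute("nursery", max_speed_mps=0.2)             # slow down near people
storage = AreaAttribute("storage", entry_prohibited=True)
print(narrow_hallway, nursery, storage, sep="\n")
```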
  • the spatial data generation unit 13 performs, for example, a Hough transform on the point cloud data generated by the point cloud data generation unit 12 to extract figures such as straight lines or curves common to the point cloud data, and generates spatial data according to the contour of the spatial element represented by the extracted figures.
  • the Hough transform is a coordinate transformation method that, treating the point cloud data as feature points, extracts the figure passing through the most feature points. Since the point cloud data expresses the shape of a spatial element such as furniture placed in a room as a point cloud, it can be difficult for the user to determine which spatial element (for example, a table, chair, or wall) the point cloud data represents.
  • the spatial data generation unit 13 can represent the outline of furniture or the like by Hough transforming the point cloud data, so that the user can easily determine the spatial element.
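A minimal sketch of the Hough-transform step, assuming the point cloud has already been projected onto a 2D occupancy image and that OpenCV and NumPy are available, is shown below. The grid resolution and threshold are arbitrary illustrative values.

```python
import numpy as np
import cv2  # opencv-python

# Toy "point cloud" projected onto a 10 cm grid: points along a wall at y = 1.0 m
points_xy = np.array([[x / 10.0, 1.0] for x in range(0, 30)])

# Rasterise to a binary occupancy image (1 px = 10 cm)
img = np.zeros((50, 50), dtype=np.uint8)
for x, y in points_xy:
    img[int(y * 10), int(x * 10)] = 255

# Extract straight lines passing through the most occupied pixels
lines = cv2.HoughLines(img, 1, np.pi / 180, 20)  # rho=1 px, theta=1 deg, vote threshold=20
if lines is not None:
    rho, theta = lines[0][0]
    print(f"dominant line: rho={rho:.1f} px, theta={np.degrees(theta):.1f} deg")
```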
  • the spatial data generation unit 13 may convert the point cloud data generated by the point cloud data generation unit 12 into the basic shape of a spatial element recognized by image recognition (for example, a table, a chair, or a wall) and generate the spatial data from that basic shape.
  • for example, when a spatial element such as a table is recognized as a table by image recognition, the shape of the table can be accurately predicted from a part of the point cloud data of the spatial element (for example, point cloud data of the table viewed from the front).
  • the spatial data generating unit 13 can generate spatial data in which spatial elements are accurately grasped.
  • the space data generation unit 13 generates space data based on point cloud data included in a predetermined range from the position where the robot 2 has moved.
  • the predetermined range from the position where the robot 2 has moved includes the position where the robot 2 has actually moved, and may be, for example, a range having a distance of 30 cm or the like from the position where the robot 2 has actually moved. Since the point cloud data is generated based on an image captured by the image capturing unit 21 of the robot 2, the captured image may include a spatial element at a position distant from the robot 2. When the space element is separated from the imaging unit 21, there may be a part where the robot 2 cannot actually move due to the presence of a part that has not been captured or the presence of an obstacle that has not been captured.
  • in addition, the spatial element extracted from the feature points may be distorted.
  • when the shooting distance is long, the spatial element included in the captured image is small, and the accuracy of the point cloud data may be low.
  • the spatial data generating unit 13 may generate spatial data that does not include a low-accuracy spatial element or a distorted spatial element by ignoring feature points that are far apart.
  • the spatial data generation unit 13 deletes point cloud data outside a predetermined range from the positions to which the robot 2 has moved and then generates the spatial data, thereby preventing enclaves where no data actually exists from occurring. It is therefore possible to generate spatial data that does not include a range in which the robot 2 cannot move and that has high data accuracy. Also, in the visualization data generated from the spatial data, drawing of such enclaves can be prevented and visibility can be improved.
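A simple way to realize the filtering described above is to keep only points within a fixed radius (for example 30 cm) of positions the robot has actually traversed. The sketch below is an illustrative brute-force version; a real implementation would likely use a spatial index.

```python
import math

def filter_points_near_path(points, path, max_dist_m=0.30):
    """Keep only points within max_dist_m of any position the robot actually visited."""
    kept = []
    for p in points:
        if any(math.dist(p, q) <= max_dist_m for q in path):
            kept.append(p)
    return kept

path = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]      # traversed positions
cloud = [(0.2, 0.1), (1.0, 0.25), (3.0, 2.0)]    # candidate feature points
print(filter_points_near_path(cloud, path))      # the far-away point is dropped
```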
  • the spatial data generation unit 13 sets a limited range for the generated spatial data. By setting a limit range for the spatial data, the limit range can be visualized as a part of the visualization data.
  • the spatial data generating unit 13 sets the state information for the spatial data. By setting the state information for the spatial data, the state information can be made a part of the visualization data.
  • the visualization data generation unit 14 generates visualization data based on the spatial data generated by the spatial data generation unit 13 so that a person can intuitively determine a space element included in the space.
  • a robot has various sensors such as a camera and a microphone, and recognizes surrounding conditions by comprehensively judging information obtained from those sensors.
  • the movement route may not be appropriate because the object cannot be recognized correctly. Due to erroneous recognition, for example, even if a person thinks that there is a sufficiently large space, the robot may recognize that there is an obstacle and that the robot can move only in a small area.
  • the robot performs an action that is contrary to human expectations, and the human feels stress.
  • the autonomous behavior robot 1 in the present embodiment visualizes its own recognition state, that is, the spatial data, and provides it to a person in order to reduce inconsistencies between the person's recognition and the robot's recognition, and can perform the recognition process again for a part pointed out by the person.
  • the spatial data is data including a spatial element recognized by the autonomous robot 1
  • the visualization data is data for allowing the user to visually recognize the spatial element recognized by the autonomous robot 1.
  • the spatial data may include a misrecognized spatial element. By visualizing the spatial data, it becomes easier for a person to confirm the recognition state (whether or not there is an erroneous recognition) of the spatial element in the autonomous robot 1.
  • Visualization data is data that can be displayed on a display device.
  • the visualization data is a so-called floor plan, and includes a space element recognized as a table, a chair, a sofa, or the like in an area surrounded by the space element recognized as a wall.
  • the visualization data generation unit 14 generates a shape of furniture or the like formed in the figure extracted by the Hough transform as, for example, visualization data represented by RGB data.
  • the spatial data generation unit 13 generates visualization data in which the drawing method of the plane is changed based on the direction of the three-dimensional plane of the spatial element.
  • the direction of the three-dimensional plane of a spatial element is, for example, the direction of the normal to the plane formed by the figure obtained by subjecting the point cloud data generated by the point cloud data generation unit 12 to the Hough transform.
  • the visualization data generation unit 14 generates visualization data in which the plane drawing method is changed according to the normal direction.
  • the drawing method is, for example, a color attribute such as hue, lightness or saturation to be given to a plane, a pattern to be given to a plane, or a texture. For example, when the normal of the plane is in the vertical direction (the plane is the horizontal direction), the visualization data generation unit 14 increases the brightness of the plane and draws the plane in a bright color.
  • when the normal of the plane is in the horizontal direction (the plane is vertical, such as a wall), the visualization data generation unit 14 renders the plane with a lower brightness and draws it in a dark color.
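A minimal sketch of this brightness rule, assuming each plane comes with a normal vector: the vertical component of the unit normal is mapped to a brightness value, so floors and table tops render light while walls render dark. The numeric range is an arbitrary illustrative choice.

```python
import math

def plane_brightness(normal, bright=0.95, dark=0.45):
    """Horizontal planes (normal pointing up/down) get high brightness, walls get low."""
    nx, ny, nz = normal
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    verticality = abs(nz) / length          # 1.0 for a floor/table top, 0.0 for a wall
    return dark + (bright - dark) * verticality

print(plane_brightness((0, 0, 1)))   # floor -> 0.95
print(plane_brightness((1, 0, 0)))   # wall  -> 0.45
```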
  • the visualization data may include coordinate information in the visualization data associated with the coordinate information of each space element included in the space data (referred to as “visualization coordinate information”). Since the visualized coordinate information is associated with the coordinate information, a point in the visualized coordinate information corresponds to a point in an actual space, and a surface in the visualized coordinate information corresponds to a surface in an actual space.
  • a conversion function for converting the coordinate system may be prepared so that the coordinate system in the visualization data and the coordinate system in the spatial data can be mutually converted.
  • the coordinate system in the visualization data and the coordinate system in the actual space may be interchangeable.
  • the visualization data generation unit 14 generates visualization data as three-dimensional (3D (Dimensions)) data.
  • the visualization data generation unit 14 may generate the visualization data as two-dimensional (2D) data.
  • the visualization data generation unit 14 may generate the visualization data in 3D when the spatial data generation unit 13 generates enough data to generate the visualization data in 3D.
  • the visualization data generation unit 14 may generate the visualization data in 3D based on the 3D viewpoint position (viewpoint height, viewpoint elevation / depression angle, etc.) specified by the user. By making it possible to specify the viewpoint position, it is possible for the user to easily check the shape of furniture or the like.
  • the visualization data generation unit 14 may generate visualization data in which only the back wall or ceiling of the room is colored, and the front wall or ceiling is transparent (not colored). By making the wall on the near side transparent, it is possible for the user to easily check the shape of furniture and the like arranged at the end (in the room) of the wall on the near side.
  • the visualization data generation unit 14 generates visualization data to which a color attribute according to the captured image acquired by the captured image acquisition unit 111 is added. For example, when the captured image includes woodgrain furniture and detects a woodgrain color (for example, brown), the visualization data generation unit 14 assigns a color similar to the detected color to the extracted furniture graphic. Generate visualization data. By giving the color attribute according to the captured image, it is possible for the user to easily confirm the type of the furniture or the like.
  • the visualization data generation unit 14 generates visualization data in which a drawing method of a fixed object that is fixed and a moving object that moves is changed.
  • the fixed object is, for example, a room wall, a step, fixed furniture, or the like.
  • the moving object is, for example, a chair, a trash can, furniture with casters, or the like.
  • the moving object may include, for example, temporary objects such as luggage and bags temporarily placed on the floor.
  • the drawing method is, for example, a color attribute such as hue, lightness or saturation to be given to a plane, a pattern to be given to a plane, or a texture.
  • the classification of fixed objects, moving objects or temporary objects can be identified by the period of their existence at the place.
  • the spatial data generation unit 13 identifies a spatial element as a fixed object, a moving object, or a temporary object based on a change over time in the point cloud data generated by the point cloud data generation unit 12, and generates the spatial data accordingly.
  • the spatial data generation unit 13 determines that a spatial element is a fixed object when, from the difference between the spatial data generated at a first time and the spatial data generated at a second time, the spatial element has not changed. Further, the spatial data generation unit 13 may determine from the difference in the spatial data that the spatial element is a moving object when the position of the spatial element has changed.
  • the spatial data generation unit 13 may determine from the difference in the spatial data that the spatial element is a temporary object when the spatial element has disappeared or newly appeared.
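A rough sketch of this fixed/moving/temporary classification, assuming each spatial element's position (or absence) is known at two times; the distance tolerance is an illustrative assumption.

```python
def classify_element(pos_at_t1, pos_at_t2, tol=0.05):
    """Compare one spatial element between two snapshots (positions, or None if absent)."""
    if pos_at_t1 is not None and pos_at_t2 is not None:
        dx = pos_at_t1[0] - pos_at_t2[0]
        dy = pos_at_t1[1] - pos_at_t2[1]
        return "fixed" if (dx * dx + dy * dy) ** 0.5 <= tol else "moving"
    # present at only one of the two times -> appeared or disappeared
    return "temporary"

print(classify_element((1.0, 2.0), (1.0, 2.0)))   # wall         -> fixed
print(classify_element((1.0, 2.0), (1.6, 2.0)))   # chair        -> moving
print(classify_element((1.0, 2.0), None))         # bag on floor -> temporary
```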
  • the visualization data generation unit 14 changes the drawing method based on the section identified by the spatial data generation unit 13.
  • the change of the drawing method is, for example, color coding, addition of hatching, addition of a predetermined mark, or the like.
  • the spatial data generation unit 13 may display a fixed object in black, display a moving object in blue, or display a temporary object in yellow.
  • the spatial data generating unit 13 generates spatial data by identifying a fixed object, a moving object, or a temporary object.
  • the visualization data generation unit 14 may generate the visualization data in which the drawing method is changed based on the section identified by the spatial data generation unit 13. Further, the spatial data generation unit 13 may generate visualization data in which the rendering method of the spatial element recognized by the image recognition is changed.
  • the visualization data generation unit 14 can generate visualization data in a plurality of divided areas. For example, the visualization data generation unit 14 generates the visualization data with a space partitioned by walls such as a living room, a bedroom, a dining room, and a corridor as one room. By generating the visualization data for each room, for example, the generation of the spatial data or the visualization data can be performed separately for each room, and the generation of the spatial data or the like becomes easy. Further, it is possible to create spatial data and the like only for an area where the robot 2 may move.
  • the visualization data providing unit 161 provides visualization data from which a user can select an area. The visualization data providing unit 161 may, for example, enlarge the visualization data of the area selected by the user or provide detailed visualization data of the area selected by the user.
  • the photographing target recognizing unit 15 recognizes a spatial element based on the photographed image acquired by the photographed image acquiring unit 111. Recognition of a spatial element can be performed, for example, by using an image recognition engine that determines what the spatial element is based on image recognition results accumulated in machine learning. The image recognition of the spatial element can be performed based on, for example, the shape, color, and pattern of the spatial element.
  • the imaging target recognition unit 15 may be configured to be able to perform image recognition of a spatial element by using, for example, an image recognition service provided in a cloud server (not shown).
  • the visualization data generation unit 14 generates visualization data in which the drawing method is changed according to the spatial element whose image has been recognized by the imaging target recognition unit 15.
  • for example, when the spatial element whose image has been recognized is a sofa, the visualization data generation unit 14 generates visualization data in which a texture having the feel of cloth is added to the spatial element.
  • the visualization data generation unit 14 may generate visualization data to which a color attribute of wallpaper (for example, white) is added.
  • the second communication control unit 16 controls communication with the user terminal 3 owned by the user.
  • the user terminal 3 is, for example, a smartphone, a tablet PC, a notebook PC, a desktop PC, or the like.
  • the communication method with the user terminal 3 is arbitrary, and for example, wireless LAN, Bluetooth (registered trademark), short-range wireless communication such as infrared communication, or wired communication can be used.
  • each function of the visualization data providing unit 161, the designation acquisition unit 162, and the execution information acquisition unit 163 included in the second communication control unit 16 communicates with the user terminal 3 using the communication function controlled by the second communication control unit 16.
  • the visualization data providing unit 161 provides the visualization data generated by the visualization data generation unit 14 to the user terminal 3.
  • the visualization data providing unit 161 is, for example, a Web server, and provides the browser of the user terminal 3 with the visualization data as a Web page.
  • the visualization data providing unit 161 may provide the visualization data to a plurality of user terminals 3. By visually recognizing the visualization data displayed on the user terminal 3, the user can confirm the range in which the robot 2 can move as a 2D or 3D display. In the visualization data, shapes of furniture and the like are drawn by a predetermined drawing method. By operating the user terminal 3, the user can, for example, switch between 2D display and 3D display, zoom in or out on the visualized data, or move the viewpoint in 3D display.
  • the user can visually recognize the visualized data displayed on the user terminal 3 and check the generation state of the spatial data and the attribute information of the area.
  • the user can designate, from the visualization data, a region in which no spatial data has been generated and instruct creation of the spatial data. Further, when the user visually recognizes the visualization data displayed on the user terminal 3 and there is a region in which the spatial data appears to be inaccurate, such as an unnaturally shaped spatial element like furniture, the user can designate that region and instruct regeneration of the spatial data.
  • the region in the visualization data designated by the user for regeneration can be uniquely specified as a region in the spatial data.
  • the visualization data is regenerated in the visualization data generation unit 14 based on the regenerated spatial data and is provided from the visualization data providing unit 161. It should be noted that, even in the regenerated visualization data, a spatial element may still be erroneously recognized and the generated state of the spatial data may not change. In this case, the user may instruct generation of the spatial data after changing the operation parameters of the robot 2.
  • the operation parameters include, for example, photographing conditions (exposure amount or shutter speed) of the photographing unit 21 of the robot 2, sensitivity of a sensor (not shown), clearance conditions for permitting the movement of the robot 2, and the like.
  • the operation parameters may be included in the spatial data as area attribute information, for example.
  • the visualization data generation unit 14 generates, for example, visualization data including display of a button for instructing creation (including “re-creation”) of spatial data.
  • the user terminal 3 can transmit an instruction for creating spatial data to the autonomous robot 1 by operating the displayed button by the user.
  • the instruction to create spatial data transmitted from the user terminal 3 is acquired by the designation acquisition unit 162.
  • the designation acquisition unit 162 acquires an instruction to create spatial data of the area designated by the user based on the visualization data provided by the visualization data providing unit 161.
  • the designation acquisition unit 162 may acquire an instruction to set (including change) the area attribute information.
  • the designation acquisition unit 162 acquires the position of the region and the direction when the robot approaches the region, that is, the direction to be photographed. Acquisition of the creation instruction can be executed, for example, by operating the Web page provided by the visualization data providing unit 161. Thereby, the user can grasp how the robot 2 recognizes the space, and can instruct the robot 2 to perform the recognition process again according to the recognition state.
  • the state information acquisition unit 24 acquires state information relating to the state of the notification target person searched by the search unit 25.
  • the state of the notification target person is the state of the motion of the notification target person, and is, for example, a sleeping state, a waking state, a sitting state, a standing state, a walking state, or a working state such as cooking or cleaning.
  • as the state of the notification target person, a health state or a mental state of the notification target person may also be acquired.
  • the state information acquisition unit 24 determines the state of the notification target person based on, for example, a captured image of the notification target person captured by the imaging unit 21.
  • the state information acquisition unit 24 may determine the state of the notification target person based on a sound emitted by the notification target person and collected by a microphone (not shown), the heat distribution of the notification target person measured by a radiation thermometer, the illuminance of the room measured by an illuminometer, the movement of the notification target person detected by a proximity sensor, the heart rate acquired from a heart rate meter worn by the notification target person, and the like.
  • the state information acquisition unit 24 may determine the sleep state of the notification target person from these pieces of information.
  • the sleep state is, for example, a state such as REM sleep, non-REM sleep, and the depth of sleep that can be measured from a person's heart rate, body movement, and the like.
  • the state information acquisition unit 24 determines that the notification target person is in a sleeping state when, from the captured image of the notification target person captured by the imaging unit 21, the heat distribution of the notification target person, or the movement of the notification target person detected by the proximity sensor, the movement of the notification target person is determined to be small during a preset time and within a preset operation range. In addition, the state information acquisition unit 24 may determine that the notification target person is awake when the movement amount of the notification target person (for example, the integrated value or the average value of the movement amount per unit time) during the predetermined time and operation range is larger than a predetermined movement amount.
  • the predetermined movement amount may be, for example, a value determined by an experiment or the like.
  • the state information acquisition unit 24 may determine that the notification target person is in a sitting state from the posture of the notification target person or the positional relationship between a spatial element such as a chair and the notification target person. Further, the state information acquisition unit 24 may determine that the notification target person is in a standing state from the posture of the notification target person. Further, when the position of the notification target person in the space is moving, the state information acquisition unit 24 may determine that the notification target person is in a state of walking. Further, the state information acquisition unit 24 may determine the work content of the notification target person from the comparison between the operation of the notification target person in the captured image and a previously learned operation pattern, and determine that the notification target person is working.
  • the state information acquisition unit 24 may determine the state of the notification target person based on the lighting state of the lighting and the open / closed state of the door. For example, when the state information acquisition unit 24 determines that the illumination is turned off from the illuminance of the room measured by the illuminometer, the notification target person may determine that the notification target person is in a sleeping state.
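As a hedged illustration of the sleeping/awake decision described above, the sketch below averages a movement-amount signal over a window and compares it with a threshold, also using room illuminance as a secondary hint. The thresholds are placeholder values that would in practice be determined by experiment, as the text notes.

```python
def estimate_person_state(movement_samples, lux, movement_threshold=0.10, dark_lux=10.0):
    """Crude sleeping/awake decision from averaged movement and room illuminance."""
    avg_movement = sum(movement_samples) / len(movement_samples)
    if avg_movement < movement_threshold and lux < dark_lux:
        return "sleeping"
    if avg_movement >= movement_threshold:
        return "awake"
    return "unknown"   # little movement but lights on: avoid a confident guess

print(estimate_person_state([0.01, 0.02, 0.0], lux=2.0))    # sleeping
print(estimate_person_state([0.4, 0.6, 0.5], lux=300.0))    # awake
```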
  • the search unit 25 searches for a notification target person based on the execution information acquired by the execution information acquisition unit 163.
  • the notification operation execution unit 26 performs a notification operation on the notification target person searched by the search unit 25 based on the execution information.
  • Execution information is information for executing a notification operation of information to be performed to a notification target person, and is information set for the autonomous behavior robot 1 by a user.
  • the execution information includes, for example, information of a notification target person for specifying a notification target person to perform the notification operation, information of a place to search the notification target person, information of the notification operation, and information of a time at which the notification operation is performed. Including.
  • the execution information is set in advance by a user, for example, and provided to the execution information acquisition unit 163.
• The information of the notification target person is information for identifying the notification target person, and includes, for example, the name of the notification target person, an ID (identification), information indicating physical characteristics, personal belongings, clothing, or the intimacy with the robot 2 (described later).
• The information indicating the physical characteristics of the notification target person is, for example, information for recognizing the face of the notification target person (face recognition), information for recognizing the fingerprint of the notification target person (fingerprint recognition), or information for recognizing the physique of the notification target person (shape recognition).
  • the information on the personal belongings, clothing, and the like of the notification target person is, for example, information on a wireless tag owned by the notification target person, information indicating the characteristics of the notification target person's clothing, and the like.
  • the search unit 25 can specify the notification target person by determining whether or not the notification target person is based on the information of the notification target person. For example, when a plurality of people are present in one room, the search unit 25 can specify the notification target person based on the information of the notification target person.
  • the information of the notification target person may be stored in advance in the autonomous behavior robot 1 together with an ID for specifying the notification target person, and only the ID may be specified in the execution information.
  • the execution information may include information of one or more notification target persons.
  • the robot 2 can sequentially execute a notification operation on a plurality of notification targets.
• When the information of the notification target includes a plurality of notification target persons, a notification operation set in advance for each notification target person is executed.
  • the execution order (priority) of the notification operation for a plurality of notification target persons may be set in the execution information.
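• One possible in-memory representation of the execution information described above (notification target person, search locations, notification operations, time, priority) is sketched below in Python; the field names and defaults are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class NotificationOperation:
    kind: str                       # e.g. "alarm" or "time_notification"
    time: str                       # time at which the operation is performed, e.g. "06:00"
    responsibility: str = "medium"  # "high" / "medium" / "low"
    completion_condition: Optional[str] = None

@dataclass
class ExecutionInfo:
    target_id: str                          # ID identifying the notification target person
    search_locations: List[str]             # places where the person is expected to exist
    operations: List[NotificationOperation] = field(default_factory=list)
    priority: int = 0                       # execution order when several targets are set

info = ExecutionInfo(
    target_id="A",
    search_locations=["bedroom", "living"],
    operations=[NotificationOperation(kind="alarm", time="06:00", responsibility="high")],
)
```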
  • the information on the location (search location) for searching for the notification target is information (location information) of a place where the notification target is expected to exist.
  • location information is, for example, position information indicating a point, line, or range in the space, or information on a room in which the position information in the space is registered in advance.
  • the search unit 25 calculates a moving route by the moving mechanism 29 based on the location information included in the execution information. The movement route can be calculated from the current position of the robot 2 and the search location.
  • the search unit 25 can store a range in which the moving mechanism 29 can move in advance, and calculate a moving route that can move from the current position to the search location in the shortest distance in the movable range.
• The search unit 25 may also include the moving speed in the calculated moving route.
  • the search unit 25 can calculate the moving route so as to move while changing the moving speed in the hallway and the moving speed in the room.
  • the movement control unit 23 controls the movement mechanism 29 based on the movement path calculated by the search unit 25 to move the robot 2.
  • the search unit 25 may calculate the movement route based on the movement restriction range in which the movement of the marker recognized by the marker recognition unit 22 is restricted. For example, when the entry prohibition range is set based on the marker, the search unit 25 calculates the moving route so as to avoid the entry prohibition range. When the movement speed is limited based on the marker, the search unit 25 may calculate the movement route so that the movement taking the speed limit into consideration is the shortest time.
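• The following is a minimal sketch of a route calculation that avoids an entry-prohibited range on a grid map; a plain breadth-first search is used here as a stand-in for whatever planner the search unit 25 actually implements.

```python
from collections import deque

def shortest_route(grid, start, goal):
    """Breadth-first search over a grid map.
    grid[y][x] == 1 marks a blocked cell (wall or entry-prohibited range)."""
    h, w = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) == goal:
            break
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] == 0 and (nx, ny) not in prev:
                prev[(nx, ny)] = (x, y)
                queue.append((nx, ny))
    if goal not in prev:
        return None  # no route within the movable range
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = prev[node]
    return list(reversed(path))

# 0 = free, 1 = blocked (e.g. an entry-prohibited range set by a marker)
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(shortest_route(grid, start=(0, 0), goal=(0, 2)))
```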
  • the location information may be specified by the user.
  • the user specifies location information by specifying a search location on a map displayed on the user terminal 3 operated by the user.
  • the execution information acquisition unit 163 can acquire the location information specified by the user from the user terminal 3.
  • the search unit 25 calculates a moving route by the moving mechanism 29 based on the location information specified by the user from the user terminal 3.
  • the search unit 25 calculates, for example, a movement path from the home position where the robot 2 returns for charging to the search location.
• The search unit 25 may display an alert on the user terminal 3 when there is a marker, a staircase, or the like that restricts movement on the calculated movement route. That is, the search unit 25 may calculate the movement route when the robot 2 moves to the search location (immediately before or during movement), or may calculate the movement route when the execution information is set by the user.
  • the execution information may include information on one or more search locations.
• For example, if the notification target person cannot be found at the first specified search location, the robot 2 can search for the notification target person at another specified search location.
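• A minimal sketch of this fallback search over multiple specified search locations is shown below; move_to and detect_person are hypothetical placeholders for the movement control and recognition functions.

```python
def search_target(search_locations, move_to, detect_person):
    """Visit each specified search location in order and return the first
    location where the notification target person is found, or None.

    move_to(location)       -- hypothetical callable that drives the robot to the location
    detect_person(location) -- hypothetical callable returning True when the target is recognized
    """
    for location in search_locations:
        move_to(location)
        if detect_person(location):
            return location
    return None  # not found at any specified search location

# Example with stub callables:
found = search_target(
    ["bedroom", "living"],
    move_to=lambda loc: print(f"moving to {loc}"),
    detect_person=lambda loc: loc == "living",
)
print("found at:", found)
```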
  • the information on the notification operation is information indicating the content of the notification operation performed on the notification target person.
  • the notifying operation executing unit 26 executes the notifying operation on the notification target person searched by the searching unit 25 based on the information of the notifying operation.
  • the notification operation is an operation of notifying the notification target person of the notification information using the notification unit 27. That is, the notification operation execution unit 26 can execute the notification operation via the notification unit 27.
  • the information of the notification operation includes time information at which the notification operation is performed.
  • the search unit 25 searches for a notification target person according to the time information.
  • the notification unit 27 is an output device such as a speaker, a display, or an actuator.
• The speaker notifies information to the notification target person's hearing by sound (including voice).
  • the display is, for example, a display, a light, or the like, and notifies information to the notification target person's vision by information (character, image, light, or the like) displayed on the display.
• The actuator is, for example, a movable unit such as a robot hand, a vibration generator, or a compressed air output valve, and notifies information to the notification target person's sense of touch.
• The notification unit 27 may also notify information to the notification target person's sense of smell or taste.
  • the notification operation is, for example, output of a sound from a speaker to a notification target person, output of display information from a display, or contact of the notification target person with a robot hand.
  • the notification operation may be a combination of these notification operations.
  • the notification operation can be specified by the user.
  • the notification operation may specify a purpose of notification such as “alarm operation” or “time notification operation”.
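• As a sketch of how a notification operation could be dispatched to the speaker, display, or actuator of the notification unit 27, the following Python example assumes illustrative device objects and method names that are not defined in the specification.

```python
def execute_notification(operation, speaker=None, display=None, actuator=None):
    """Dispatch a notification operation to the available output devices of the
    notification unit. The device objects and their methods are illustrative
    assumptions; a combination of outputs is also allowed."""
    if "sound" in operation and speaker:
        speaker.play(operation["sound"])      # auditory notification
    if "text" in operation and display:
        display.show(operation["text"])       # visual notification
    if "touch" in operation and actuator:
        actuator.touch(operation["touch"])    # tactile notification

class _PrintSpeaker:
    def play(self, sound):
        print("speaker:", sound)

execute_notification({"sound": "alarm.wav"}, speaker=_PrintSpeaker())
```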
• The notification operation execution unit 26 may execute the notification operation according to the information of the notification target person. For example, when how easily the notification target person wakes up is stored as the information of the notification target person, the notification operation execution unit 26 executes the notification operation according to how easily the notification target person wakes up.
• How easily the notification target person wakes up can be evaluated based on the time from when the notification operation is performed to when the state of the notification target person becomes other than the sleep state.
• For example, when it is stored as the information of the notification target person that the notification target person wakes up poorly (for example, the average value of the time from when the notification operation is performed to when the state of the notification target person becomes other than the sleep state is larger than a wake-up threshold), the notification operation execution unit 26 may execute the notification operation with a sound of a predetermined volume or more.
  • the notification operation execution unit 26 may control so as to increase the time for outputting the sound.
• The notification operation execution unit 26 may control the type of sound to be a first type of sound instead of or in addition to the above.
• The notification operation execution unit 26 may control the increase of the volume per unit time to be relatively large instead of or in addition to the above. Also, when it is stored that the notification target person wakes up well (for example, the average value of the time from when the notification operation is performed until the state of the notification target person becomes other than the sleep state is smaller than the wake-up threshold), the notification operation execution unit 26 may execute the notification operation with a sound of a predetermined volume or less so as not to surprise the notification target person. In addition, instead of or in addition to reducing the volume, the notification operation execution unit 26 may perform control so as to shorten the time for outputting the sound.
  • the notification operation execution unit 26 may control the type of sound to be a second type of sound different from the first type of sound instead or in addition to the above.
  • the notification operation executing unit 26 may control the increase of the volume per time to be relatively small instead of or in addition to the above.
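• The volume, duration, and sound-type adjustments described above can be sketched as follows; the threshold and the concrete parameter values are illustrative assumptions.

```python
WAKE_UP_THRESHOLD_SEC = 120  # illustrative threshold separating "wakes up poorly" from "wakes up well"

def alarm_parameters(avg_seconds_to_wake):
    """Choose alarm volume, duration and volume ramp depending on how easily the
    notification target person wakes up (based on stored past data)."""
    if avg_seconds_to_wake > WAKE_UP_THRESHOLD_SEC:
        # Wakes up poorly: louder, longer, steeper volume ramp, first type of sound.
        return {"volume": 0.9, "duration_sec": 60, "ramp_per_sec": 0.05, "sound": "type1"}
    # Wakes up well: quieter so as not to surprise the person, shorter, gentler ramp.
    return {"volume": 0.4, "duration_sec": 15, "ramp_per_sec": 0.01, "sound": "type2"}

print(alarm_parameters(avg_seconds_to_wake=180))
print(alarm_parameters(avg_seconds_to_wake=45))
```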
  • the notification operation execution unit 26 may change the notification operation according to the intimacy with the notification target person (described later), or may change the notification operation based on past data.
  • the “alarm operation” is a notification operation for waking up a notification target person who is sleeping during sleep based on time information.
• For example, the notification operation execution unit 26 wakes up the notification target person by outputting an alarm sound from the speaker of the notification unit 27, displaying the current time on the display, or touching the notification target person with the robot hand.
• The notification operation execution unit 26 executes the wake-up operation when the notification target person searched by the search unit 25 is sleeping. Whether or not the notification target person is sleeping can be determined by the state information acquisition unit 24.
  • the notification operation execution unit 26 may repeat the wake-up operation until the notification target person wakes up.
• Whether or not the notification target person has woken up can be determined by the state information acquisition unit 24. It should be noted that the notification target person may fall asleep again after having once woken up (so-called second sleep). The notification operation execution unit 26 may repeat the wake-up operation when the state information acquisition unit 24 determines that the notification target person has fallen asleep again.
  • the “time notification operation” is an operation of notifying a notification target person of a time designated in advance.
  • the time notification operation for the notification target person is, for example, an operation of notifying the start time of the television broadcast.
  • the notification operation execution unit 26 may output a voice prompting the notification target person to view the television at a preset time.
  • the notification operation execution unit 26 may turn on the power of the television at a preset time.
  • the notification operation execution unit 26 may record a television program when the notification target person is not watching television. Whether or not the notification target is watching the television can be determined by the state information acquisition unit 24 by determining whether or not the notification target is sitting at the place where the television is viewed.
  • the time notification operation for the notification target person may be an operation for notifying the notification target person of the leaving time.
  • the notification operation execution unit 26 may output a voice prompting the notification target person to go out when a preset time comes.
  • the notification operation execution unit 26 may execute the door closing when the notification target person goes out after a preset time. Whether or not the notification target person has gone out can be determined by the state information acquisition unit 24 determining that the person has left the entrance.
  • the notification operation information may include a notification operation completion condition.
  • the notification operation completion condition is a condition for regarding that the notification operation for the notification target has been completed.
  • the completion condition of the notification operation may include an operation when the completion condition is not satisfied.
  • the notification operation execution unit 26 may continue or repeat the notification operation until the completion condition is satisfied.
  • the completion condition of the notification operation may be provided to the autonomous behavior robot 1 as information of the notification operation, or may be set in the autonomous behavior robot 1 in advance. The following is an example of a completion condition when the notification operation is a wake-up operation.
  • the completion condition may be “when it is determined that the notification target person has woken up at the search location”.
• In this case, the purpose of the notification operation has been achieved, and thus the notification operation can be completed.
  • Whether or not the notification target person has woken up can be determined by the state information acquisition unit 24 as described above.
  • the completion condition may be “when the notification target person cannot be searched at the search place”.
• When the notification target person cannot be found at the search location, the notification target person can be regarded as having already woken up, and thus the notification operation can be completed. Whether or not the notification target person has been found at the search location can be determined by the state information acquisition unit 24.
  • the completion condition may be “when a notification target person is found in a place other than the search place”.
  • the place other than the search place is, for example, a place on the movement route of the robot 2.
  • the notification target person can be regarded as having already woken up, so that the notification operation can be completed.
• The discovery of the notification target person can be performed, for example, by the state information acquisition unit 24 performing image recognition of the notification target person on the captured image, or voice recognition of a voice addressed to the robot 2 by the notification target person.
  • the completion condition may be “when a pleasant action from the notification target person is detected”.
• The pleasant act is a predetermined type of action by the notification target person, for example, an act of stroking the robot 2, an act of thanking the robot 2, or an act of greeting the robot 2.
  • the notification target person can be regarded as having already woken up, so that the notification operation can be completed.
  • the detection of a pleasant act from the notification target person can be performed by, for example, the state information acquisition unit 24.
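• A minimal sketch of evaluating the example completion conditions for the wake-up operation follows; the state flags are assumed to be supplied by the state information acquisition unit 24 and the flag names are hypothetical.

```python
def wake_up_completed(state):
    """Return True when any of the example completion conditions for the
    wake-up operation holds; `state` is a dict of flags assumed to be filled
    in by the state information acquisition unit."""
    return bool(
        state.get("woke_up_at_search_location")        # woke up at the search location
        or state.get("not_found_at_search_location")   # could not be found there -> already up
        or state.get("found_outside_search_location")  # found on the movement route -> already up
        or state.get("pleasant_act_detected")          # stroked / thanked / greeted the robot
    )

print(wake_up_completed({"pleasant_act_detected": True}))  # True
print(wake_up_completed({}))                               # False -> keep notifying
```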
  • the information of the notification operation may include a responsibility level of the notification operation.
  • the responsibility level of the notification operation is information indicating the importance of the notification operation to the notification target person. For example, when the responsibility level is high, the notification operation execution unit 26 repeatedly executes the notification operation until the state of the notification target person becomes a predetermined state. For example, when the responsibility level is high in the wake-up operation, the notification operation execution unit 26 continues the wake-up operation until the notification target person wakes up. When the responsibility level is high, the robot 2 may start searching for the notification target person early so that the wake-up operation can be reliably performed at the set time.
  • the notification operation executing unit 26 may execute the wake-up operation a preset number of times.
  • the robot 2 may start searching for the notification target person before a predetermined time from the set time.
  • the notification operation executing section 26 may execute the wake-up operation only once.
  • the robot 2 may start searching for a notification target person after execution of another notification operation (for example, a wake-up operation for another notification target) ends.
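• One way to express the responsibility-level behavior described above is sketched below; the repetition counts and search lead times are illustrative assumptions, not values from the specification.

```python
from datetime import datetime, timedelta

# Illustrative policy per responsibility level (assumptions only).
RESPONSIBILITY_POLICY = {
    "high":   {"repeat": None, "search_lead_min": 30},  # repeat until the person wakes up
    "medium": {"repeat": 3,    "search_lead_min": 15},
    "low":    {"repeat": 1,    "search_lead_min": 5},
}

def search_start_time(execution_time, level):
    """Start searching earlier when the responsibility level is higher."""
    lead = RESPONSIBILITY_POLICY[level]["search_lead_min"]
    return execution_time - timedelta(minutes=lead)

print(search_start_time(datetime(2019, 7, 24, 6, 0), "high"))  # 30 minutes before the set time
```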
  • the intimacy between the notification target person and the robot 2 is information obtained by indexing the subjective feeling that the notification target person feels about the robot 2.
  • the notification target person may have a feeling of familiarity with the robot 2 depending on the shape of the robot 2 (for example, a shape of a human or an animal), a voice output by the robot 2, an operation of the robot 2, and the like.
  • the notification target person may have a familiar feeling based on the experience of past contact with the robot 2 or the like.
  • the intimacy level is expressed, for example, as a percentage (0 to 100%) or a plurality of levels (S, A, B, and C levels) based on the emotion of the notification target person.
  • the autonomous behavior robot 1 may store the intimacy and update it in accordance with the past actions and opinions of the notification target person. For example, the autonomous behavior robot 1 may be updated so as to increase intimacy when the notification target performs a pleasant act on the robot 2.
  • the intimacy may be set for each notification target.
  • the intimacy level may be set for each robot 2. For example, when there is one notification target person A and two robots 2 including the robots 2a and 2b, each of the robots 2a and 2b can set the degree of intimacy with the notification target person A.
  • the notification operation based on the execution information may be performed according to the intimacy between the notification target person and the robot 2.
• For example, the robot 2a having a high degree of intimacy may execute a notification operation including contact with the notification target person, while the robot 2b having a low degree of intimacy may execute only a voice output to the notification target person.
• The robot 2a with high intimacy may execute the notification operation in cooperation with the robot 2b or instead of the robot 2b.
  • the notification operation may be performed such that the robot 2a and the robot 2b cooperate or compete with each other.
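• A small sketch of selecting the notification operation per robot from its stored intimacy follows; the 70% boundary and the operation labels are assumptions for illustration.

```python
def choose_operation(intimacy_percent):
    """Select the kind of notification operation according to intimacy (0-100%).
    The 70% boundary is an illustrative assumption."""
    if intimacy_percent >= 70:
        return ["voice", "contact"]   # a familiar robot may also touch the person
    return ["voice"]                  # a less familiar robot only outputs voice

print(choose_operation(85))  # e.g. robot 2a, high intimacy
print(choose_operation(30))  # e.g. robot 2b, low intimacy
```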
  • the execution information acquisition unit 163 acquires execution information for performing a notification operation of information to be performed to a notification target person.
  • the execution information acquisition unit 163 can acquire execution information from the user terminal 3.
  • the execution information acquisition unit 163 may acquire execution information by receiving execution information transmitted from the user terminal 3 by an operation of the user who operates the user terminal 3. Further, the execution information acquisition unit 163 may acquire the execution information by downloading the execution information stored in the storage unit of the user terminal 3. Further, the execution information acquisition unit 163 may acquire the execution information from a data server (not shown).
• The execution information acquisition unit 163 can acquire, from the user terminal 3, the location information specified by the user operating the map displayed on the user terminal 3 based on the visualization data provided by the visualization data providing unit 161.
• FIG. 1 illustrates a case in which the autonomous behavior robot 1 is configured such that the data providing device 10 and the robot 2 are separated from each other; however, the data providing device 10 may be included in the robot 2.
  • the robot 2 may include all functions of the data providing device 10.
  • the data providing device 10 may be a device that temporarily substitutes a function when the processing capability of the robot 2 is insufficient, for example.
  • “acquisition” may mean that the subject to be acquired actively acquires, or the subject to acquire may passively acquire.
• The designation acquisition unit 162 may acquire an instruction for creating spatial data by receiving the instruction transmitted from the user terminal 3 by the user, or by reading an instruction for creating spatial data stored by the user in a storage area (not shown).
• The functional units of the captured image acquisition unit 111, the spatial data providing unit 112, the instruction unit 113, the visualization data providing unit 161, the designation acquisition unit 162, and the execution information acquisition unit 163 are shown as examples of the functions of the autonomous behavior robot 1 in the present embodiment, and do not limit the functions of the autonomous behavior robot 1.
  • the autonomous behavior robot 1 does not need to have all the functional units included in the data providing device 10 and may have some functional units.
  • the autonomous behavior robot 1 may have other functional units other than those described above.
• Each functional unit of the marker recognition unit 22, the movement control unit 23, the state information acquisition unit 24, the search unit 25, and the notification operation execution unit 26 of the robot 2 is likewise shown as an example of the functions of the autonomous behavior robot 1 in the present embodiment, and does not limit the functions of the autonomous behavior robot 1.
  • the autonomous behavior robot 1 does not need to have all the functional units of the robot 2 but may have some of the functional units.
• The above-described functional units of the autonomous behavior robot 1 have been described as being realized by software; however, at least one or more of the above functions of the autonomous behavior robot 1 may be realized by hardware.
  • any of the above functions of the autonomous behavior robot 1 may be implemented by dividing one function into a plurality of functions. Further, any two or more of the functions of the autonomous behavior robot 1 may be integrated into one function. That is, FIG. 1 illustrates the functions of the autonomous behavior robot 1 by functional blocks, and does not indicate that each function is configured by a separate program file, for example.
  • the autonomous behavior robot 1 may be a device realized by one housing or a system realized by a plurality of devices connected via a network or the like.
  • the autonomous behavior robot 1 may realize some or all of its functions by a virtual device such as a cloud service provided by a cloud computing system. That is, the autonomous behavior robot 1 may realize at least one or more of the above functions in another device.
  • the autonomous behavior robot 1 may be a general-purpose computer such as a tablet PC, or may be a dedicated device having limited functions.
  • the autonomous behavior robot 1 may realize some or all of its functions in the robot 2 or the user terminal 3.
  • FIG. 2 is a block diagram illustrating an example of a hardware configuration of the autonomous behavior robot 1 according to the embodiment.
• The autonomous behavior robot 1 has a CPU (Central Processing Unit) 101, a RAM (Random Access Memory) 102, a ROM (Read Only Memory) 103, a touch panel 104, a communication I/F (Interface) 105, a sensor 106, a clock 107, a microphone 108, a speaker 109a, a display 109b, and an actuator 109c.
  • the autonomous behavior robot 1 is a device that executes the autonomous behavior robot control program described with reference to FIG.
  • the CPU 101 controls the autonomous behavior robot 1 by executing the autonomous behavior robot control program stored in the RAM 102 or the ROM 103.
• The autonomous behavior robot control program is acquired, for example, from a recording medium on which the autonomous behavior robot control program is recorded or from a program distribution server via a network, installed in the ROM 103, read by the CPU 101, and executed.
  • the touch panel 104 has an operation input function and a display function (operation display function).
  • the touch panel 104 enables a user of the autonomous behavior robot 1 to perform an operation input using a fingertip, a touch pen, or the like.
• In the present embodiment, the autonomous behavior robot 1 uses the touch panel 104 having an operation display function; however, the autonomous behavior robot 1 may separately have a display device having a display function and an operation input device having an operation input function.
  • the display screen of the touch panel 104 can be implemented as the display screen of the display device, and the operation of the touch panel 104 can be implemented as the operation of the operation input device.
  • the touch panel 104 may be realized by various forms such as a head-mounted type, glasses type, and wristwatch type displays.
• The communication I/F 105 is an interface for communication, and executes, for example, communication by wireless LAN or wired LAN and short-range wireless communication such as infrared communication. Although only the communication I/F 105 is shown in FIG. 2 as the communication I/F, the autonomous behavior robot 1 may have a communication I/F for each of a plurality of communication methods.
  • the communication I / F 105 may perform communication with a control unit that controls the imaging unit 21 (not shown) or a control unit that controls the moving mechanism 29.
  • the sensor 106 is hardware such as a camera of the photographing unit 21, a TOF or a thermo camera, and hardware such as a microphone, a thermometer, an illuminometer, or a proximity sensor. Data obtained by these hardware is stored in the RAM 102 and processed by the CPU 101.
  • the clock 107 is an internal clock for acquiring time information.
  • the time information acquired by the clock 107 is used, for example, to confirm the time at which the notification operation is performed.
  • the microphone 108 collects surrounding sounds.
  • the microphone 108 collects, for example, the voice of the notification target person.
• The speaker 109a, the display 109b, and the actuator 109c are specific hardware examples of the notification unit 27 described above.
  • the speaker 109a outputs sound
  • the display 109b outputs display data
  • the actuator 109c is a movable unit.
  • the notification unit 27 may have hardware other than the speaker 109a, the display 109b, and the actuator 109c.
  • FIG. 3 is a flowchart illustrating a first example of the operation of the robot control program according to the embodiment.
  • the execution subject of the operation is the autonomous robot 1, but each operation is executed in each function of the autonomous robot 1 described above.
  • the autonomous behavior robot 1 determines whether or not a captured image has been acquired (step S11).
  • the determination as to whether or not a captured image has been acquired can be made based on whether or not the captured image acquisition unit 111 has acquired a captured image from the robot 2.
• The determination as to whether or not a captured image has been obtained is made in units of processing of the captured image. For example, when the captured image is a moving image, the moving image is continuously transmitted from the robot 2; therefore, the determination as to whether or not the captured image has been obtained can be made based on whether the number of frames or the data amount of the obtained moving image has reached a predetermined value.
• The captured image may be acquired with the robot 2 actively transmitting the captured image, or with the captured image acquisition unit 111 actively retrieving the captured image from the robot 2. If it is determined that the captured image has not been acquired (step S11: NO), the autonomous behavior robot 1 repeats the process of step S11 and waits for a captured image to be acquired.
• When the autonomous behavior robot 1 determines that the captured image has been acquired (step S11: YES), it generates point cloud data (step S12).
• The generation of the point cloud data can be performed by the point cloud data generation unit 12 detecting, for example, a point having a large change in luminance in the captured image as a feature point and giving three-dimensional coordinates to the detected feature point.
  • the feature point may be detected, for example, by performing a differentiation process on the captured image, detecting a change in the gradation of the captured image, and detecting a portion having a large change in the gradation.
  • the assignment of the coordinates to the feature points may be executed by detecting the same feature point photographed from different photographing angles.
  • the determination as to whether or not a captured image has been obtained in step S11 can be made based on whether or not captured images captured from a plurality of directions have been obtained.
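• The feature-point idea described above (differentiate the captured image and keep points with a large gradation change) can be sketched with NumPy as follows; the threshold is an illustrative assumption and this is not the actual processing of the point cloud data generation unit 12.

```python
import numpy as np

def feature_points(gray_image, threshold=0.5):
    """Return (row, col) positions whose luminance gradient magnitude exceeds
    the threshold; such points are candidates for point cloud feature points."""
    gy, gx = np.gradient(gray_image.astype(float))
    magnitude = np.hypot(gx, gy)
    return np.argwhere(magnitude > threshold)

# Tiny example image with one strong vertical edge.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)
print(feature_points(img, threshold=0.4))
```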
• After executing the process of step S12, the autonomous behavior robot 1 generates spatial data and recognizes a marker (step S13).
  • the generation of the spatial data can be executed by the spatial data generating unit 13 performing, for example, Hough transform of the point cloud data.
  • the details of step S13 will be described with reference to FIG.
• After executing the process of step S13, the autonomous behavior robot 1 provides the generated spatial data to the robot 2 (step S14).
• The spatial data may be provided to the robot 2 every time the spatial data is generated, or may be provided asynchronously with the processing shown in steps S11 to S18.
  • the robot 2 provided with the spatial data can grasp the movable range based on the spatial data.
  • the autonomous behavior robot 1 determines whether or not to recognize a space element (step S15).
  • the determination as to whether or not to recognize a spatial element can be performed by, for example, setting whether to recognize a spatial element in the imaging target recognition unit 15. Even if it is determined that the space element is recognized, if the recognition fails, it may be determined that the space element is not recognized.
• If it is determined that the space element is to be recognized (step S15: YES), the autonomous behavior robot 1 generates first visualization data (step S16).
  • the generation of the first visualization data can be executed by the visualization data generation unit 14.
• The first visualization data is visualization data generated after the imaging target recognition unit 15 recognizes a spatial element. For example, when the imaging target recognition unit 15 determines that a spatial element is a table, the visualization data generation unit 14 can generate the visualization data assuming that the top surface of the table is flat even if the top surface has not been imaged and has no point cloud data. Further, when it is determined that a space element is a wall, the visualization data generation unit 14 can generate the visualization data assuming that a part that has not been photographed is also a plane.
• If it is determined that the space element is not to be recognized (step S15: NO), the autonomous behavior robot 1 generates second visualization data (step S17).
  • the generation of the second visualization data can be executed by the visualization data generation unit 14.
  • the second visualization data is visualization data generated by the imaging target recognition unit 15 without recognizing a spatial element, that is, based on point cloud data and spatial data generated from a captured image.
  • the autonomous behavior robot 1 can reduce the processing load by not performing the recognition processing of the space element.
• After executing the process of step S16 or the process of step S17, the autonomous behavior robot 1 provides the visualization data (step S18).
  • the visualization data is provided by the visualization data providing unit 161 providing the visualization data generated by the visualization data generation unit 14 to the user terminal 3.
  • the autonomous behavior robot 1 may generate and provide visualization data in response to a request from the user terminal 3, for example.
  • the autonomous behavior robot 1 ends the operation shown in the flowchart.
  • FIG. 4 is a flowchart illustrating a second example of the operation of the robot control program according to the embodiment.
  • the autonomous behavior robot 1 generates spatial data (step S131).
  • the generation of the spatial data can be executed by the spatial data generating unit 13 performing, for example, Hough transform of the point cloud data.
  • the autonomous behavior robot 1 determines whether or not the marker has been recognized (step S132). Whether or not the marker has been recognized can be determined based on whether or not the marker recognizing unit 22 has recognized the image of the marker in the image captured by the image capturing unit 21.
  • the robot 2 can notify the data providing device 10 of the marker recognition result.
• If it is determined that the marker has been recognized (step S132: YES), the autonomous behavior robot 1 sets a restricted range in which movement is restricted in the spatial data generated in step S131 (step S133).
• After executing the process of step S133, or when determining that the marker has not been recognized (step S132: NO), the autonomous behavior robot 1 ends the operation of step S13 shown in the flowchart.
  • FIG. 5 is a flowchart when the autonomous behavior robot control program according to the embodiment executes a wake-up operation as a notification operation.
  • the autonomous behavior robot 1 determines whether or not the execution information has been acquired (step S21). The determination as to whether the execution information has been acquired can be made based on whether the execution information acquisition unit 163 has acquired the execution information from the user terminal 3. If it is determined that the execution information has not been acquired (step S21: NO), the autonomous behavior robot 1 repeats the process of step S21 and waits for the acquisition of the execution information.
• If it is determined that the execution information has been acquired (step S21: YES), the autonomous behavior robot 1 calculates a movement route (step S22).
  • the calculation of the moving route can be executed by the search unit 25 calculating based on the location information included in the execution information.
• After executing the process of step S22, the autonomous behavior robot 1 starts searching for the notification target person and starts moving along the calculated movement route (step S23).
  • the search for the notification target person can be executed by the search unit 25.
  • the movement can be executed by controlling the movement mechanism 29 by the movement control unit 23.
• The start of the search in step S23 is executed based on the time information, and the autonomous behavior robot 1 starts moving before the designated execution time. For example, when the execution time is set at 6:00 a.m., the autonomous behavior robot 1 starts moving the robot 2 at a time before 6:00 a.m., taking the traveling time on the movement route into consideration, so that the notification operation can be performed at the execution time.
  • the time at which the movement is started may be automatically determined by the autonomous robot 1 based on the time information, or may be manually set by the user.
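• A minimal sketch of determining the departure time from the execution time and the estimated travel time along the movement route is shown below; the speed and margin values are illustrative assumptions.

```python
from datetime import datetime, timedelta

def departure_time(execution_time, route_length_m, speed_m_per_s, margin_min=5):
    """Compute when to start moving so the robot arrives before the execution
    time. route_length_m comes from the calculated movement route; the extra
    margin is an illustrative assumption."""
    travel = timedelta(seconds=route_length_m / speed_m_per_s)
    return execution_time - travel - timedelta(minutes=margin_min)

# Alarm at 06:00, a 30 m route at 0.3 m/s -> leave at about 05:53.
print(departure_time(datetime(2019, 7, 24, 6, 0), route_length_m=30, speed_m_per_s=0.3))
```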
  • the autonomous behavior robot 1 determines whether the robot 2 has reached the location specified as the search location (step S24). Whether or not the search location has been reached can be determined, for example, by the movement control unit 23 comparing the search location with the current position (coordinate position) of the robot 2. If it is determined that the robot has not reached the search location (step S24: NO), the autonomous robot 1 repeats the process of step S24, and waits for the robot 2 to reach the search location.
• When it is determined that the robot has reached the search location (step S24: YES), the autonomous behavior robot 1 determines whether or not the notification target person has been found (step S25). Whether or not the notification target person has been found can be determined, for example, by the search unit 25 based on the notification target person information included in the execution information. Note that whether or not the notification target person has been found may also be determined, for example, by regarding a person present at the search location as the notification target person.
• If it is determined that the notification target person has not been found (step S25: NO), the autonomous behavior robot 1 returns to the process of step S23 and starts moving to the next search location.
• The autonomous behavior robot 1 can thereby sequentially search a plurality of designated search locations to find the notification target person. If the notification target person cannot be found at any designated search location, the search operation in the illustrated flowchart may be terminated, and the fact that the notification target person was not found may be recorded or notified to the user terminal 3.
• If it is determined that the notification target person has been found (step S25: YES), the autonomous behavior robot 1 acquires state information (step S26).
  • the acquisition of the state information can be executed by the state information acquisition unit 24. In the illustrated flowchart, it is assumed that information as to whether the notification target person is sleeping or waking up is acquired as state information.
  • the autonomous behavior robot 1 determines whether the notification target person is sleeping (step S27). Whether or not the notification target person is sleeping can be determined by the state information acquisition unit 24. When it is determined that the notification target person is sleeping (step S27: YES), the autonomous behavior robot 1 executes a wake-up operation as a notification operation (step S28). The execution of the wake-up operation can be executed by the notification operation execution unit 26. The notification operation execution unit 26 executes the wake-up operation when the current time reaches a predetermined execution time. That is, when the autonomous behavior robot 1 arrives at the search place before the execution time, it waits until the execution time comes.
• When the autonomous behavior robot 1 arrives at the search location after the execution time has passed, it immediately executes the wake-up operation.
  • the wake-up operation may be performed when the sleep state of the notification target person is determined by the state information acquisition unit 24 and the notification target person is in the predetermined sleep state, for example.
• After executing the process of step S28, the autonomous behavior robot 1 executes the process of step S27 again to determine whether or not the notification target person is sleeping.
  • the re-execution of the process in step S27 may be executed after a certain time elapses, for example, 5 minutes, 10 minutes, or 15 minutes.
• When it is determined that the notification target person is not sleeping (step S27: NO), the autonomous behavior robot 1 performs a greeting operation (step S29) and ends the operation shown in the flowchart.
  • the greeting operation is, for example, a fixed phrase such as "Good morning” or an audio output such as the current time. Whether to perform the greeting operation may be set in advance in the execution information.
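• The flow of FIG. 5 can be summarized by the following sketch; the callables stand in for the search unit 25, state information acquisition unit 24, and notification operation execution unit 26, and the retry interval is an assumption.

```python
import time

def run_wake_up(search_locations, find_person, is_sleeping, do_wake_up, do_greeting,
                retry_interval_sec=300):
    """Simplified version of the FIG. 5 flow: move through the search locations,
    and once the target is found repeat the wake-up operation until the state
    information indicates the person is no longer sleeping, then greet."""
    for location in search_locations:             # steps S23-S25
        if find_person(location):
            while is_sleeping():                  # step S27
                do_wake_up()                      # step S28
                time.sleep(retry_interval_sec)    # re-check after a fixed interval
            do_greeting()                         # step S29
            return True
    return False                                  # target not found anywhere

# Example with stubs (no waiting):
run_wake_up(["bedroom"], find_person=lambda loc: True,
            is_sleeping=iter([True, False]).__next__,
            do_wake_up=lambda: print("alarm"), do_greeting=lambda: print("good morning"),
            retry_interval_sec=0)
```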
  • FIG. 6 is a diagram illustrating an example of execution information according to the embodiment.
  • execution information 1000 has data items of “notification target person” and “notification operation”.
  • the execution information 1000 is set in the user terminal 3 and can be acquired by the execution information acquisition unit 163.
  • Notification target person is information for specifying a notification target person who performs a notification operation.
  • “A”, “B”, and “C” are specified in the ID of the “notification target”, but the number of notification targets is an arbitrary number of one or more.
  • the physical characteristics for specifying each notification target person may be registered in the autonomous behavior robot 1 in advance.
  • Notification operation is information on the notification operation to be performed on the notification target person.
  • a plurality of notification operations can be set for one notification target person.
  • the figure shows that two notification operations of “notification operation 1” and “notification operation 2” are set for each notification target person.
  • a priority may be set for each notification operation for each notification target person.
  • “Notification operation” has data items of “search location”, “notification operation”, and “time”.
  • “Search location” is information indicating a search location for searching for a notification target person. The figure shows a case where a room such as “child room”, “bedroom”, “western room”, or “living room” is designated as the search place. Information on each room is provided to the user terminal 3 from the visualization data providing unit 161 and can be shared between the autonomous robot 1 and the user terminal 3. The user can specify a search location by selecting a room from the map displayed on the user terminal 3.
  • Notification operation is information indicating a notification operation to be performed on a notification target person.
  • a notification operation such as “alarm operation” or “broadcast time notification” is set.
  • a responsibility level can be set for each notification operation.
  • the responsibility level is information indicating the importance of the notification operation performed by the autonomous behavior robot 1 on the notification target person.
  • the autonomous behavior robot 1 can change the notification operation according to the set responsibility level. For example, the autonomous behavior robot 1 performs, in accordance with the responsibility level, the execution order (priority) of the notification operation, the magnitude of the sound output in the notification operation, the content of the sound, the number of times the notification operation is executed, or the end condition of the notification operation. Etc. may be changed.
  • the autonomous behavior robot 1 when performing a plurality of notification operations, preferentially executes a notification operation having a high responsibility level.
  • the autonomous behavior robot 1 can make the notification target person recognize the notification operation with a high probability by increasing the voice output or increasing the number of executions in the notification operation having a high responsibility level.
  • the figure illustrates a case where three levels of “high level”, “medium level” and “low level” are set as the responsibility level.
• The autonomous behavior robot 1 may execute the “high level” notification operation prior to the “medium level” notification operation, and further execute the “medium level” notification operation prior to the “low level” notification operation.
  • “Alarm operation” can be set to “with greeting” or “without greeting”. In FIG. 5, the operation in the case of “with greeting” has been described.
  • “Broadcast time notification” is an operation of notifying the start of a television broadcast or the like.
  • “Outing time notification” is an operation of notifying the notification target person of the leaving time.
  • Time is time information for executing the notification operation.
  • a time for the notification operation executed only once is set.
  • the time for the notification operation that is repeated every day may be set in “time”.
  • the execution information acquisition unit 163 acquires each piece of execution information individually and merges the pieces of execution information.
  • the notification operation may be scheduled in consideration of the priority order of the notification operations in a plurality of pieces of execution information, the possibility of execution due to interference in execution time, and the like.
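• As one concrete (and purely illustrative) representation, the execution information of FIG. 6 could be held as the following Python data; the specific times shown here are made-up placeholders, since the figure itself is not reproduced in the text.

```python
execution_info_1000 = [
    {
        "target": "A",
        "operations": [
            {"search_location": "child room", "operation": "alarm (with greeting)",
             "responsibility": "high", "time": "06:30"},
            {"search_location": "living",     "operation": "broadcast time notification",
             "responsibility": "low",  "time": "19:00"},
        ],
    },
    {
        "target": "B",
        "operations": [
            {"search_location": "bedroom", "operation": "alarm (without greeting)",
             "responsibility": "medium", "time": "07:00"},
        ],
    },
]
```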
  • FIG. 7 is a diagram illustrating an example of a method of setting execution information according to the embodiment.
  • an execution information setting screen 30 is displayed on the display screen of the user terminal 3.
  • the execution information setting screen 30 includes a notification target person setting unit 311, a time setting unit 312, a notification operation setting unit 313, and a search place setting unit 32.
  • the execution information setting screen 30 is displayed in, for example, an application program (app) of the user terminal 3.
  • the notification target person setting unit 311 is a pull-down menu for selecting a notification target person.
  • the figure shows that the notification target person A is selected as the notification target person.
  • the time setting unit 312 is a pull-down menu for selecting time information for executing the notification operation.
  • the notification operation setting section 313 is a pull-down menu for selecting a notification operation. The figure shows that the wake-up operation is selected as the notification operation.
  • the search place setting unit 32 displays, for example, a plan view of the arrangement of the rooms at home based on the visualization data provided from the visualization data providing unit 161 and sets a search place to search for a notification target person from the plan view. Is displayed.
• The figure shows that the search location setting unit 32 displays a home position 321 to which the robot 2 returns for charging, a child room 322, a bedroom 323, and a Western room 324.
• The user sets the search location by touching at least one of the child room 322, the bedroom 323, and the Western room 324 in the search location setting unit 32.
  • the search place setting unit 32 displays the moving route 325.
  • the movement route 325 is calculated by the search unit 25 and provided to the user terminal 3, for example.
  • the figure shows that the moving route 325 from the home position 321 to the child room 322 is indicated by a broken line.
  • the search location setting unit 32 may display that there is a problem with the movement when the movement is restricted on the movement route. For example, when there is a step at the entrance of the child room 322 and the robot 2 cannot move, the fact may be displayed on the movement route 325 with an X mark.
• The functions constituting the apparatus described in the present embodiment may be realized by recording a program for realizing these functions on a computer-readable recording medium, and causing a computer system to read and execute the program recorded on the recording medium.
  • the “computer system” may include an OS and hardware such as peripheral devices.
  • the “computer system” also includes a homepage providing environment (or a display environment) if a WWW system is used.
• The “computer-readable recording medium” includes a writable nonvolatile memory such as a flexible disk, a magneto-optical disk, a ROM, or a flash memory, a portable medium such as a CD-ROM, and a storage device such as a hard disk incorporated in a computer system.
• Further, the “computer-readable recording medium” includes a medium that holds a program for a certain period of time, such as a volatile memory (for example, a DRAM (Dynamic Random Access Memory)) inside a computer system serving as a server or a client when the program is transmitted via a network such as the Internet or a communication line such as a telephone line. Further, the above program may be transmitted from a computer system storing the program in a storage device or the like to another computer system via a transmission medium or by a transmission wave in the transmission medium.
  • the "transmission medium” for transmitting a program refers to a medium having a function of transmitting information, such as a network (communication network) such as the Internet or a communication line (communication line) such as a telephone line.
• The program may be for realizing a part of the functions described above. Further, the program may realize the above functions in combination with a program already recorded in the computer system, that is, a so-called difference file (difference program).
• A location where the user terminal 3 exists may be set as a search location, and the search operation may be performed with that location as the search location.
  • the location where the user terminal 3 is located can be grasped by, for example, the positional relationship between a base station for wireless LAN communication (not shown) and the user terminal 3. Further, the location where the user terminal 3 exists can be grasped by the radio field intensity of the short-range wireless communication between the robot 2 and the user terminal 3.
  • the application of the user terminal 3 may call the robot 2 at a predetermined time at which the notification operation is performed, and cause the notification operation execution unit 26 to execute the notification operation.
  • the application of the user terminal 3 may call the robot 2 at the set time of the timer to execute the notification operation. That is, the application of the user terminal 3 can link the general alarm function in the user terminal 3 with the robot 2.
• The alarm function includes a “snooze” function in which, once the alarm is stopped, the alarm sounds again after a predetermined time has elapsed, for a predetermined number of repetitions.
  • the application of the user terminal 3 may call the robot 2 when the alarm is started by the snooze function and execute the notification operation. Further, the application of the user terminal 3 may call the robot 2 and execute the notification operation when the repetition number of the snooze ends.
  • the setting of the execution information may be performed by setting a marker.
  • a marker indicating a search place or a person to be notified may be set at the entrance of the room, so that the room where the marker is installed may be set as the search place, or the person to be notified may be set.
• A person who desires the notification operation may set a marker indicating that the notification operation is desired on the door knob of the room, so that the robot 2 recognizes the marker and the person receives the notification operation from the robot 2.
• A person who does not desire the notification operation may avoid the notification operation by the robot 2 by setting a marker (such as a “Don't Disturb” display) indicating that the notification operation is not desired on the door knob of the room.
• The setting of the execution information may also be performed in response to a notification operation that has been executed.
• For example, the execution information may be set by the notification target person who received the notification operation telling the robot 2 something like “wake me up at the same time tomorrow”.
  • the setting of the execution information can be facilitated.
• When the notification target person performs an unpleasant act on the robot 2 in response to the notification operation, the robot 2 may execute a motion showing that it dislikes executing the next notification operation.
  • An unpleasant act is an act different from a pleasant act, for example, an act of hitting the robot 2 or scolding the robot 2.
• When the robot 2 detects, through a sensor or the like, that the notification target person has performed an action such as apologizing to the robot 2 that is executing a motion showing that it dislikes executing the notification operation, the robot 2 may recover its mood and execute a motion showing that it undertakes the execution of the notification operation. By reacting to the unpleasant act in this way, communication with the robot 2 can be deepened.
• When the door of the room designated as the search location is closed and the robot 2 cannot enter, the robot 2 may stop in front of the door and execute the notification operation when the execution time comes.
• The notification operation execution unit 26 is not limited to the case where the door is closed; when the robot 2 cannot reach the search location due to an obstacle or a no-entry area, the notification operation may be executed at the place where the robot 2 stays.
• In this case, the notification operation execution unit 26 may execute the notification operation at a volume higher than the volume emitted when the search location can be reached. That is, the notification operation execution unit 26 performs the notification operation at a volume higher than usual so that the notification target person notices it, considering that the robot 2 is away from the location specified as the search location.
• The robot 2 may perform the notification operation at a position where it does not collide with the door or a person when the door opens. Therefore, the notification operation execution unit 26 recognizes the opening/closing range of the door, moves the robot 2 outside that range, and executes the notification operation when the robot 2 is located outside the opening/closing range of the door. Thereby, when the notification target person notices the notification and opens the door, contact between the robot 2 and the door can be prevented. Also, only when moving for the notification operation, the search unit 25 may break the entry prohibition rule specified by the marker, move through the entry prohibition range, and move to the search location where the notification target person is located.
  • the autonomous behavior robot 1 may acquire the execution information by voice input. An outline of a process of acquiring execution information by voice input will be described.
  • FIG. 8 is a diagram showing an example of a module configuration of the autonomous behavior robot 1 for acquiring execution information by voice input.
  • the autonomous behavior robot 1 includes a microphone 108, a voice recognition unit 201, an execution information acquisition unit 163, and an execution information storage unit 204.
  • the autonomous behavior robot 1 is configured as a system including the robot 2 and the data providing device 10 as described above.
• FIG. 8 shows extracted functions related to “acquisition of execution information by voice input”. Whether the functions of the microphone 108, the voice recognition unit 201, the execution information storage unit 204, and the execution information acquisition unit 163 are realized by the robot 2, the data providing device 10, or another device may be arbitrarily designed.
  • the microphone 108 inputs the voice of the user.
  • the microphone 108 may be provided in either the robot 2 or the data providing device 10. Moreover, the microphone 108 may be provided outside the autonomous behavior robot 1. For example, a microphone installed indoors where the robot 2 acts may be used. Alternatively, the microphone of the user terminal 3 may be used.
• When the execution information is input by voice, the user utters a linguistic expression, such as a phrase or a sentence, that specifies the execution information toward the microphone. In this example, it is assumed that the user who is the wife utters the command "Wake up dad in the bedroom at 7 o'clock" in order to wake up the user who is her husband.
  • the voice recognition unit 201 recognizes voice data input by the microphone 108 and converts the voice data into language data.
  • the execution information acquisition unit 163 has an execution information identification unit 202 and a conversion rule storage unit 203.
  • the execution information specifying unit 202 specifies the execution information based on the language data by referring to the conversion rules stored in the conversion rule storage unit 203.
  • the conversion rule associates a parameter of execution information with a language expression.
• Specifically, the conversion rule storage unit 203 stores operation expressions corresponding to types of notification operation, person expressions corresponding to notification target person IDs, and location expressions corresponding to indoor places such as a children's room or a bedroom. For example, the operation expression "wake up" is associated with the notification operation "wake-up operation", the person expression "dad" is associated with the notification target person ID "A", and the location expression "bed" is associated with the place "bedroom".
  • the conversion rule may associate a time expression corresponding to the time. For example, a time expression “lunch” may be associated with time “12:00”.
• When the language data includes an operation expression, the execution information specifying unit 202 specifies the type of notification operation corresponding to that operation expression. When the language data includes a person expression corresponding to a notification target person ID, the execution information specifying unit 202 specifies the notification target person ID corresponding to that person expression. When the language data includes a location expression corresponding to an indoor place, the execution information specifying unit 202 specifies the indoor place corresponding to that location expression; the specified indoor place serves as the search location. When the language data includes a time expression such as "7 o'clock", the execution information specifying unit 202 specifies the time corresponding to that time expression.
  • the specified time corresponds to the notification time.
• In this way, the execution information specifying unit 202 specifies execution information including the notification target person ID, the search location, the notification operation type, and the notification time. In this example, execution information including the notification target person ID "A", the search location "bedroom", the notification operation type "wake-up operation", and the notification time "7:00" is specified.
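• As a concrete illustration of this conversion-rule lookup, the following is a minimal sketch in Python; the rule tables, values, and function name are hypothetical examples, not the actual implementation of the execution information specifying unit 202.

```python
# Minimal sketch of conversion-rule lookup; the rule tables and values below are
# hypothetical examples, not the actual contents of the conversion rule storage unit 203.
OPERATION_RULES = {"wake up": "wake-up operation"}         # operation expression -> notification operation type
PERSON_RULES = {"dad": "A"}                                # person expression -> notification target person ID
LOCATION_RULES = {"bed": "bedroom"}                        # location expression -> indoor place (search location)
TIME_RULES = {"7 o'clock": "7:00", "lunch": "12:00"}       # time expression -> notification time

def specify_execution_info(language_data: str) -> dict:
    """Scan the recognized language data for known expressions and build execution information."""
    text = language_data.lower()
    info = {}
    for expr, operation in OPERATION_RULES.items():
        if expr in text:
            info["notification_operation_type"] = operation
    for expr, person_id in PERSON_RULES.items():
        if expr in text:
            info["notification_target_person_id"] = person_id
    for expr, place in LOCATION_RULES.items():
        if expr in text:
            info["search_location"] = place
    for expr, time_value in TIME_RULES.items():
        if expr in text:
            info["notification_time"] = time_value
    return info

print(specify_execution_info("Wake up dad in the bedroom at 7 o'clock"))
# {'notification_operation_type': 'wake-up operation', 'notification_target_person_id': 'A',
#  'search_location': 'bedroom', 'notification_time': '7:00'}
```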
  • the execution information storage unit 204 stores the specified execution information.
  • the autonomous behavior robot 1 acquires execution information by voice input.
• In the first implementation example, the functions of all the modules shown in FIG. 8 are realized by the data providing apparatus 10. That is, the user's voice is input through the microphone 108 provided in the data providing apparatus 10, and the data providing apparatus 10 performs the voice recognition process and the execution information specifying process.
  • the data providing apparatus 10 includes the microphone 108, the voice recognition unit 201, the execution information specifying unit 202, the conversion rule storage unit 203, and the execution information storage unit 204.
  • the voice recognition unit 201 of the data providing device 10 receives voice data from the microphone 108 of the data providing device 10 and converts the voice data into language data.
  • the execution information specifying unit 202 of the data providing apparatus 10 specifies execution information based on language data with reference to the conversion rule storage unit 203 of the data providing apparatus 10.
  • the execution information storage unit 204 of the data providing device 10 stores the specified execution information.
• The execution information specifying unit 202 may use the location of the microphone 108 that input the voice, that is, the location where the data providing apparatus 10 is installed, as the location information in the execution information.
• In the second implementation example, the functions of the modules shown in FIG. 8 are realized by the attachable device 4 attached indoors where the robot 2 acts and by the data providing device 10. That is, the user's voice is input through the microphone 108 included in the attachable device 4, and the data providing device 10 performs the voice recognition process and the execution information specifying process. For this purpose, audio data is transmitted from the attachable device 4 to the data providing device 10.
  • the one or more attachable devices 4 are identified by IDs, and the data providing apparatus 10 pre-registers the installation location (indoor position coordinates) of each attachable device 4.
  • a plurality of attachment type devices 4 may be attached to different places.
  • the attached device 4 can transmit data to and from the data providing device 10 by one or both of wired communication and wireless communication.
• The wireless communication method may be, for example, short-range wireless communication such as wireless LAN, Bluetooth (registered trademark), or infrared communication.
  • the wired communication system may be, for example, a wired LAN.
  • the attachment type device 4 has a microphone 108 and an audio data transmission unit (not shown).
  • the audio data transmission unit transmits the audio data input by the microphone 108 of the attachment type device 4 to the data providing device 10.
  • the data providing device 10 includes a voice data receiving unit (not shown), a voice recognition unit 201, an execution information specifying unit 202, a conversion rule storage unit 203, and an execution information storage unit 204.
  • the audio data receiving unit receives the audio data sent from the attached device 4.
  • the voice recognition unit 201 of the data providing device 10 converts the voice data received by the voice data receiving unit into language data.
• The execution information specifying unit 202 of the data providing device 10 specifies the execution information based on the language data with reference to the conversion rule storage unit 203, and the execution information storage unit 204 of the data providing apparatus 10 stores the specified execution information.
• The execution information acquisition unit 163 may use the location of the microphone 108 that input the voice, that is, the location where the attachable device 4 is attached, as the location information in the execution information. For example, when a voice saying "Please wake me up after 30 minutes" is input from the microphone 108 attached in the bedroom, the bedroom may be used as the location information in the execution information.
  • This example sentence assumes that the user is alone and the notification target person ID can be omitted.
• In this case, the execution information specifying unit 202 specifies the notification time by adding 30 minutes to the current time based on the time expression "after 30 minutes". Further, it is assumed that the execution information specifying unit 202 specifies the notification operation type "wake-up operation" from the operation expression "wake up".
  • FIG. 9 is a diagram illustrating an example of a module configuration of the data providing apparatus 10 relating to the specification of the mounting location of the mounting type device 4.
  • FIG. 9 shows a module configuration example of the data providing apparatus 10 corresponding to the second implementation example, particularly, the location specifying function of the attached device 4 (microphone 108) that picked up the voice.
  • the execution information acquisition unit 163 of the data providing device 10 includes a location identification unit 211 and an attachment location storage unit 212.
  • the attachment location storage unit 212 stores the attachment location of each attachment type device 4 attached to a predetermined indoor location. That is, the attachment location storage unit 212 stores the attachment location in association with the ID of each attachment type device 4.
  • the location specifying unit 211 determines whether or not the execution information includes the location information. When the location information is included in the execution information, the location specifying unit 211 ends the processing as it is. If the location information is not included in the execution information, the location specifying unit 211 obtains the ID of the attached device 4 that is the transmission source of the audio data from the audio data receiving unit. Then, the location specifying unit 211 refers to the mounting location storage unit 212 to specify a mounting location corresponding to the ID of the source mountable device 4, and stores the specified mounting location as a search location in the execution information storage unit 204. Write. That is, the attachment location of the attachment type device 4 is stored as the location information in the execution information.
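• The fallback from a missing location to the registered attachment location can be sketched as follows; the device IDs and table contents are hypothetical examples that only illustrate the behavior of the location specifying unit 211 described above.

```python
# Minimal sketch of the fallback performed by the location specifying unit 211; the
# device IDs and attachment locations are hypothetical examples.
ATTACHMENT_LOCATIONS = {"device-01": "bedroom", "device-02": "children's room"}  # attachable device ID -> attachment location

def fill_search_location(execution_info: dict, source_device_id: str) -> dict:
    """If the execution information lacks location information, use the source device's attachment location."""
    if "search_location" not in execution_info:
        execution_info["search_location"] = ATTACHMENT_LOCATIONS[source_device_id]
    return execution_info

print(fill_search_location({"notification_operation_type": "wake-up operation"}, "device-01"))
# {'notification_operation_type': 'wake-up operation', 'search_location': 'bedroom'}
```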
• In the third implementation example, the functions of the modules shown in FIG. 8 are realized by the user terminal 3 and the data providing device 10.
  • the user terminal 3 here may be any computer terminal such as a smartphone or a laptop PC.
  • the microphone 108 built in the user terminal 3 is used. The user's voice is input through the microphone 108 of the user terminal 3, and the data providing device 10 performs the process of voice recognition and the process of specifying execution information. For that purpose, voice data is transmitted from the user terminal 3 to the data providing device 10.
• The user terminal 3 in the third implementation example has the microphone 108 and a voice data transmission unit (not shown).
  • the audio data transmission unit transmits the audio data input by the microphone 108 of the user terminal 3 to the data providing device 10.
• The module configuration of the data providing device 10 is the same as in the second implementation example. That is, the third implementation example differs from the second implementation example in that the user terminal 3, a movable general-purpose product, is used instead of the attachable device 4, a fixed, dedicated product.
• In the fourth implementation example, the user terminal 3 and the data providing device 10 also realize the functions of the modules shown in FIG. 8.
  • a user's voice is input to the user terminal 3 and a voice recognition process is performed.
  • the data providing device 10 performs a process of specifying execution information.
  • language data is transmitted from the user terminal 3 to the data providing device 10.
  • the fourth implementation example differs from the third implementation example in that the user terminal 3 performs not only speech acquisition but also speech recognition.
  • the user terminal 3 in the fourth implementation example has the microphone 108, the voice recognition unit 201, and the language data transmission unit (not shown).
  • the voice recognition unit 201 of the user terminal 3 converts voice data input by the microphone 108 of the user terminal 3 into language data.
  • the language data transmitting unit transmits the converted language data to the data providing device 10.
  • the data providing device 10 includes a language data receiving unit (not shown), an execution information specifying unit 202, a conversion rule storage unit 203, and an execution information storage unit 204.
  • the language data receiving unit receives the language data sent from the user terminal 3.
  • the execution information specifying unit 202 of the data providing device 10 refers to the conversion rule storage unit 203 of the data providing device 10 and specifies execution information based on the received language data.
  • the execution information storage unit 204 of the data providing device 10 stores the specified execution information.
• In the fifth implementation example, the user terminal 3 and the data providing device 10 also realize the functions of the modules shown in FIG. 8.
  • a user's voice is input to the user terminal 3, and a process of voice recognition and a process of specifying execution information are performed. Then, the execution information is transmitted from the user terminal 3 to the data providing device 10.
  • the fifth implementation example is different from the fourth implementation example in that the user terminal 3 also specifies execution information in addition to voice acquisition and voice recognition.
  • the user terminal 3 in the fifth implementation example includes the microphone 108, the voice recognition unit 201, the execution information specifying unit 202, the conversion rule storage unit 203, and the execution information transmission unit (not shown).
  • the voice recognition unit 201 of the user terminal 3 converts voice data input by the microphone 108 of the user terminal 3 into language data.
  • the execution information specifying unit 202 refers to the conversion rule storage unit 203 and specifies execution information based on the converted language data.
  • the execution information transmitting unit transmits the specified execution information to the data providing device 10.
  • the data providing device 10 includes an execution information receiving unit (not shown) and an execution information storage unit 204.
  • the execution information receiving unit receives the execution information sent from the user terminal 3.
  • the execution information storage unit 204 of the data providing device 10 stores the received execution information.
• The execution information acquisition unit 163 may use the location of the microphone 108 that input the voice, that is, the location of the user terminal 3, as the location information in the execution information.
  • the third to fifth implementation examples are common in that the processing is started with the voice data acquired by the user terminal 3 as a starting point.
  • FIG. 10 is a diagram illustrating an example of a module configuration of the data providing apparatus 10 relating to the specification of the location of the user terminal 3.
• FIG. 10 illustrates an example of the module configuration of the user terminal 3 and the data providing device 10 corresponding to the third to fifth implementation examples, and in particular the function of specifying the location of the user terminal 3 (a movable microphone 108).
  • the user terminal 3 includes a position measuring unit 221 and a terminal position transmitting unit 222.
  • the position measuring unit 221 measures the current position of the user terminal 3 based on, for example, beacon signals transmitted from a plurality of beacon transmitters installed at predetermined indoor locations.
  • the user terminal 3 includes a beacon receiver, receives a beacon signal transmitted by a beacon transmitter installed at a predetermined position, and specifies the ID of the beacon transmitter.
• The position measuring unit 221 specifies the current position of the user terminal 3 by analyzing the relationship between the received signal strength of each beacon signal and the distance between the user terminal 3 and the beacon transmitter identified by the ID.
  • the beacon transmitter may be included in the attached device 4 or may be provided separately from the attached device 4.
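• One common way to realize such beacon-based positioning is to convert received signal strength into an approximate distance with a path-loss model and combine the registered beacon positions, for example by a weighted centroid. The following is a minimal sketch under those assumptions; the beacon coordinates, transmit power, and path-loss exponent are illustrative values, not parameters taken from this embodiment.

```python
# Minimal sketch of beacon-based position estimation using a log-distance path-loss
# model and a weighted centroid; the beacon coordinates, transmit power, and path-loss
# exponent are illustrative assumptions, not values from this embodiment.
BEACONS = {                      # beacon transmitter ID -> registered (x, y) position in metres
    "beacon-1": (0.0, 0.0),
    "beacon-2": (5.0, 0.0),
    "beacon-3": (0.0, 4.0),
}
TX_POWER_DBM = -59.0             # assumed received signal strength at a distance of 1 m
PATH_LOSS_EXPONENT = 2.0         # assumed indoor path-loss exponent

def rssi_to_distance(rssi_dbm: float) -> float:
    """Estimate the distance in metres from the received signal strength."""
    return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

def estimate_position(readings: dict) -> tuple:
    """Weighted centroid of beacon positions; closer beacons (stronger signals) weigh more."""
    weights = {bid: 1.0 / rssi_to_distance(rssi) for bid, rssi in readings.items()}
    total = sum(weights.values())
    x = sum(BEACONS[bid][0] * w for bid, w in weights.items()) / total
    y = sum(BEACONS[bid][1] * w for bid, w in weights.items()) / total
    return (x, y)

print(estimate_position({"beacon-1": -55.0, "beacon-2": -70.0, "beacon-3": -75.0}))
```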
  • the terminal position transmitting unit 222 transmits the current position of the user terminal 3 to the data providing device 10.
  • the terminal position transmission unit 222 transmits the current position of the user terminal 3 before or after transmission of audio data in the audio data transmission unit, for example.
  • the terminal position transmitting unit 222 transmits, for example, the current position of the user terminal 3 before or after the transmission of the language data in the language data transmitting unit.
  • the terminal position transmission unit 222 transmits the current position of the user terminal 3 before or after transmission of the execution information in the execution information transmission unit, for example.
  • the data providing device 10 includes a terminal position receiving unit 223, a location specifying unit 224, a floor plan data storage unit 225, and an execution information storage unit 204.
  • the location specifying unit 224 and the floor plan data storage unit 225 may be included in the execution information acquisition unit 163.
  • the floor plan data storage unit 225 stores a range of a place for each indoor place.
  • the place referred to here is an indoor area such as a children's room or a bedroom.
  • the terminal position receiving unit 223 receives the current position of the user terminal 3 from the user terminal 3. After the execution information is specified by the execution information specifying unit 202, the location specifying unit 224 determines whether or not the execution information includes the location information.
• When the location information is included in the execution information, the location specifying unit 224 ends the processing as it is. If the location information is not included in the execution information, the location specifying unit 224 obtains the current position of the user terminal 3 from the terminal position receiving unit 223. Then, the location specifying unit 224 refers to the floor plan data storage unit 225 and writes the place that includes the current position of the user terminal 3 as the location information in the execution information stored in the execution information storage unit 204. Since the location information in the execution information indicates the search location, this means that the search is performed at the place that includes the current position of the user terminal 3. This concludes the description of FIG. 10.
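• A minimal sketch of this room lookup is shown below, assuming the floor plan data storage unit 225 holds one axis-aligned rectangle per place; the room names and coordinates are hypothetical.

```python
# Minimal sketch of the room lookup by the location specifying unit 224, assuming the
# floor plan data storage unit 225 holds one axis-aligned rectangle per place; the
# room names and coordinates are hypothetical.
FLOOR_PLAN = {                                  # place -> (x_min, y_min, x_max, y_max) in metres
    "bedroom": (0.0, 0.0, 4.0, 3.0),
    "children's room": (4.0, 0.0, 8.0, 3.0),
    "living room": (0.0, 3.0, 8.0, 8.0),
}

def place_containing(position: tuple) -> str:
    """Return the indoor place whose range contains the given position, if any."""
    x, y = position
    for place, (x0, y0, x1, y1) in FLOOR_PLAN.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return place
    return "unknown"

def fill_location_from_position(execution_info: dict, current_position: tuple) -> dict:
    """If no location information is present, use the place containing the current position."""
    if "search_location" not in execution_info:
        execution_info["search_location"] = place_containing(current_position)
    return execution_info

print(fill_location_from_position({}, (1.5, 2.0)))   # -> {'search_location': 'bedroom'}
```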
• In the sixth implementation example, the functions of the modules shown in FIG. 8 are realized by the robot 2 and the data providing device 10.
  • the voice of the user is input to the microphone 108 of the robot 2, and the data providing device 10 performs the process of voice recognition and the process of specifying execution information.
  • voice data is transmitted from the robot 2 to the data providing device 10.
  • the robot 2 has a microphone 108 and a voice data transmission unit (not shown).
  • the audio data transmission unit transmits the audio data input by the microphone 108 of the robot 2 to the data providing device 10.
  • the data providing device 10 is the same as in the second implementation example.
• In the seventh implementation example, the robot 2 and the data providing device 10 also realize the functions of the modules shown in FIG. 8.
  • the robot 2 inputs a user's voice and performs voice recognition processing.
  • the data providing device 10 performs a process of specifying execution information.
  • language data is transmitted from the robot 2 to the data providing device 10.
  • Robot 2 has microphone 108, voice recognition unit 201, and language data transmission unit (not shown).
  • the voice recognition unit 201 of the robot 2 converts voice data input by the microphone 108 of the robot 2 into language data.
  • the language data transmitting unit transmits the converted language data to the data providing device 10.
  • the data providing device 10 is the same as in the case of the fourth implementation example.
• In the eighth implementation example, the functions of the modules shown in FIG. 8 are realized by the robot 2 and the data providing device 10.
  • the robot 2 inputs a user's voice, and performs voice recognition processing and execution information identification processing. Then, the execution information is transmitted from the robot 2 to the data providing device 10.
  • the robot 2 includes the microphone 108, the voice recognition unit 201, the execution information specifying unit 202, the conversion rule storage unit 203, and the execution information transmission unit (not shown).
  • the voice recognition unit 201 of the robot 2 converts voice data input by the microphone 108 of the robot 2 into language data.
  • the execution information specifying unit 202 refers to the conversion rule storage unit 203 and specifies execution information based on the converted language data.
  • the execution information transmitting unit transmits the specified execution information to the data providing device 10.
  • the data providing device 10 is the same as in the fifth implementation example.
• The execution information acquisition unit 163 may set the location of the microphone 108 that input the voice, that is, the place where the robot 2 is located, as the place information in the execution information.
  • the sixth to eighth implementation examples are common in that the processing is started with the voice data acquired by the robot 2 as a starting point.
  • FIG. 11 is a diagram illustrating an example of a module configuration of the data providing apparatus 10 for specifying the location of the robot 2.
  • FIG. 11 shows an example of a module configuration relating to the robot 2 and the data providing device 10 corresponding to the sixth to eighth implementation examples, and particularly to the location specifying function of the robot 2 (movable microphone 108).
  • the robot 2 has a movement control unit 23 and a robot position transmission unit 231.
  • the robot position transmitting unit 231 acquires the current position of the robot 2 from the movement control unit 23, and transmits the current position of the robot 2 to the data providing device 10.
  • the movement control unit 23 may measure the current position based on a radio wave received from a wireless communication device installed at a predetermined position, or may measure the current position based on a captured image.
  • the movement control unit 23 measures the current position of the robot 2 based on, for example, beacon signals transmitted from a plurality of beacon transmitters installed at predetermined positions indoors.
  • the robot 2 includes a beacon receiver, receives a beacon signal transmitted by a beacon transmitter installed at a predetermined position, and specifies the ID of the beacon transmitter.
• The movement control unit 23 specifies the current position of the robot 2 by analyzing the relationship between the received signal strength of each beacon signal and the distance between the robot 2 and the beacon transmitter identified by the ID.
  • the movement control unit 23 may specify the current position by using a SLAM (Simultaneous Localization and Mapping) technique that simultaneously estimates its own position and creates an environment map.
  • the robot position transmitting unit 231 transmits, for example, the current position of the robot 2 before or after transmitting the voice data in the voice data transmitting unit.
  • the robot position transmitting unit 231 transmits the current position of the robot 2 before or after the transmission of the language data in the language data transmitting unit, for example.
  • the robot position transmission unit 231 transmits the current position of the robot 2 before or after transmission of the execution information in the execution information transmission unit, for example.
  • the data providing apparatus 10 includes a robot position receiving unit 232 in addition to the place specifying unit 224 and the floor plan data storage unit 225 described above.
  • the robot position receiving unit 232 receives the current position of the robot 2 from the robot 2.
• The location specifying unit 224 determines whether or not the execution information includes the location information. When the location information is included in the execution information, the location specifying unit 224 ends the processing. When the location information is not included in the execution information, the location specifying unit 224 obtains the current position of the robot 2 from the robot position receiving unit 232.
  • the location specifying unit 224 refers to the floor plan data storage unit 225 and writes a location including the current position of the robot 2 as location information in the execution information stored in the execution information storage unit 204. Since the location information in the execution information indicates the search location, it means that the search is performed at a location including the current position of the robot 2.
• When the execution information obtained in each of the first to eighth implementation examples does not include time information specifying when the notification operation should be executed, the notification operation execution unit 26 may execute the notification operation immediately after the execution information is acquired. Alternatively, the notification operation execution unit 26 may execute the notification operation when a predetermined time (for example, about one minute) has elapsed from the time when the execution information was acquired. The predetermined time is a length that makes it appear that the robot 2, as a character, understands the instruction from the user and takes time to react to it. This concludes the description of acquiring execution information by voice input.
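• A minimal sketch of this time handling is given below, assuming the execution information is held as a dictionary and the "reaction" delay is about one minute; the function and field names are hypothetical.

```python
# Minimal sketch of defaulting the execution time when the execution information has
# no time information; the one-minute "reaction" delay and the field names are
# illustrative assumptions.
from datetime import datetime, timedelta

def resolve_notification_time(execution_info: dict,
                              react_delay: timedelta = timedelta(minutes=1)) -> datetime:
    """Return when the notification operation should be executed."""
    now = datetime.now()
    if "notification_time" in execution_info:                  # e.g. "7:00"
        hour, minute = map(int, execution_info["notification_time"].split(":"))
        return now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    # No time information: execute immediately, or after a short delay that makes the
    # robot appear to understand the instruction and take time to react to it.
    return now + react_delay

print(resolve_notification_time({"notification_time": "7:00"}))
print(resolve_notification_time({}))
```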
  • the robot 2 may capture the image of the notification target person and provide the captured image from the data providing device 10 to the user terminal 3. Furthermore, the robot 2 may acquire the status information of the notification target person after executing the notification operation, and provide the status information from the data providing device 10 to the user terminal 3.
• For example, when the robot 2 performs a wake-up operation with the husband user as the notification target person, the robot 2 captures an image of the husband user with its camera following the wake-up operation.
  • the robot 2 performs the actual shooting after confirming that the user is included in the preview image obtained from the camera. Therefore, the photographed image recorded in the camera of the robot 2 by the actual photographing shows the figure of the user who is the husband after the wake-up operation.
  • the captured image may be a still image or a moving image.
  • the captured image is sent to the data providing device 10 and stored.
  • the application of the user terminal 3 sends a request for the captured image after the wake-up operation to the data providing apparatus 10.
  • the data providing device 10 sends the stored captured image to the application of the user terminal 3 in response to the request.
• The application of the user terminal 3 displays the received captured image on the display device of the user terminal 3. In this way, the wife user can check, from the captured image displayed on the display device of the user terminal 3, whether or not the husband user has woken up after the robot 2 performed the wake-up operation.
  • FIG. 12 is a diagram illustrating an example of a module configuration of the autonomous behavior robot 1 that provides the captured image and the state information after performing the notification operation.
  • FIG. 12 shows an example of a module configuration related to the robot 2 and the data providing apparatus 10, particularly, an information providing function.
  • the robot 2 has a photographing unit 21 and a photographed image transmitting unit 241.
  • the imaging unit 21 performs imaging with the notification target person as a subject, and generates a captured image after performing the notification operation.
  • the imaging unit 21 performs the actual imaging after confirming that the face of the notification target person is captured in the live view image by, for example, face recognition processing.
  • the photographed image transmitting unit 241 transmits the photographed image after performing the notification operation to the data providing apparatus 10.
  • the data providing apparatus 10 includes a captured image receiving unit 242, a captured image storage unit 243, and a captured image providing unit 244.
  • the captured image receiving unit 242 receives the captured image sent from the robot 2, and the captured image storage unit 243 stores the received captured image.
• When the application of the user terminal 3 requests the photographed image taken after the notification operation was executed, the photographed image providing unit 244 reads out the photographed image stored in the photographed image storage unit 243 and transmits it to the application. In this way, the application of the user terminal 3 can display the captured image showing the notification target person after the notification operation. Furthermore, the data providing apparatus 10 can store the captured image taken after the notification operation as a record of the reaction of the notification target person.
• For example, when the robot 2 performs a wake-up operation with the husband user as the notification target person, the robot 2 acquires, following the wake-up operation, state information indicating whether the husband user is still sleeping or has woken up.
  • the status information is sent to the data providing device 10 and stored.
  • the application of the user terminal 3 sends a request for the state information after the wake-up operation to the data providing apparatus 10.
  • the data providing device 10 sends the stored state information to the application of the user terminal 3 in response to the request.
• The application of the user terminal 3 causes the display device of the user terminal 3 to display a message indicating whether the husband user is sleeping or awake according to the received state information. If the state information indicates sleeping, for example, the message "Dad is still sleeping." is displayed. If the state information indicates waking up, for example, the message "Dad is already up." is displayed. In this way, the wife user can confirm, from the message displayed on the display device of the user terminal 3, whether or not the husband user has woken up after the robot 2 performed the wake-up operation.
  • the robot 2 has a state information acquisition unit 24 and a state information transmission unit 251. After the robot 2 performs the notification operation, the state information acquisition unit 24 acquires state information relating to the state of the notification target person. As described above, the status information indicates, for example, during sleep or waking up.
  • the state information transmitting unit 251 transmits the state information after executing the notification operation to the data providing apparatus 10.
  • the data providing device 10 includes a status information receiving unit 252, a status information storage unit 253, and a status information providing unit 254.
  • the state information receiving unit 252 receives the state information sent from the robot 2, and the state information storage unit 253 stores the received state information.
• When the application of the user terminal 3 requests the state information of the notification target person after the notification operation was executed, the state information providing unit 254 reads the state information stored in the state information storage unit 253 and transmits it to the application of the user terminal 3. In this way, the application of the user terminal 3 can display the state information on the state of the notification target person after the notification operation.
• The wife user of the user terminal 3 can check, from the state information displayed on the display device of the user terminal 3, whether or not the husband who is the notification target person has woken up after the robot 2 performed the wake-up operation.
  • the state information after executing the notification operation can be stored as a record of the reaction of the notification target person.
  • the captured image providing unit 244 and the state information providing unit 254 may transmit the captured image and the state information to the user terminal 3 together.
• In this case, the application of the user terminal 3 sends the data providing apparatus 10 a request for the captured image and the state information after the wake-up operation.
  • the data providing device 10 sends the stored captured image and status information to the application of the user terminal 3.
  • the application of the user terminal 3 causes the display device of the user terminal 3 to display a message indicating whether the husband user is sleeping or waking up based on the received state information, together with the captured image.
  • a live mode may be provided in which the current captured image and status information are automatically sent from the data providing device 10 to the application of the user terminal 3 so that the user terminal 3 can immediately view the image.
  • the live mode is set by a user operation in an application of the user terminal 3.
  • a live mode setting instruction is sent to the data providing apparatus 10.
• When the data providing apparatus 10 receives the live mode setting instruction, it transmits the captured image immediately. That is, the captured image providing unit 244 immediately transmits the captured image received from the robot 2 and stored in the captured image storage unit 243 to the application of the user terminal 3.
  • the application of the user terminal 3 causes the received captured image to be immediately displayed on the display device of the user terminal 3.
  • the state information providing unit 254 may transmit the state information received from the robot 2 and stored in the state information storage unit 253 to the application of the user terminal 3 immediately.
  • the application of the user terminal 3 may cause the display device of the user terminal 3 to immediately display the received message indicating the status information.
  • the captured image providing unit 244 may stop transmitting the captured image when the state information of the notification target switches from sleeping to waking up.
  • the status information providing unit 254 may stop transmitting the status information. In this way, unnecessary data transmission processing is omitted.
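• The live-mode forwarding with this stop condition can be sketched as follows; the receive/send helpers passed in as arguments are hypothetical stand-ins for the actual communication between the robot 2, the data providing device 10, and the application of the user terminal 3.

```python
# Minimal sketch of live-mode forwarding with the stop condition described above; the
# receive/send helpers are hypothetical stand-ins for the communication between the
# robot 2, the data providing device 10, and the application of the user terminal 3.
def run_live_mode(receive_state_from_robot, send_to_terminal_app):
    """Forward each state update to the user terminal application until the person is awake."""
    while True:
        state = receive_state_from_robot()      # e.g. "sleeping" or "awake"
        send_to_terminal_app(state)
        if state == "awake":
            break                               # stop transmitting to omit unnecessary data processing

# Example with canned data standing in for the robot's state information:
states = iter(["sleeping", "sleeping", "awake"])
run_live_mode(lambda: next(states), print)
```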
  • the imaging unit 21 may change the conditions for imaging the notification target person according to the intimacy between the notification target person and the robot.
  • a positive correlation may be provided between the intimacy between the notification target person and the robot and the shooting time. That is, the shooting time of the moving image may be set such that the higher the intimacy between the notification target person and the robot, the longer the shooting time of the moving image, and similarly, the lower the intimacy, the shorter the shooting time of the moving image.
  • a negative correlation may be provided between the intimacy between the notification target person and the robot and the shooting distance. That is, the shooting distance may be set such that the higher the intimacy between the notification target person and the robot, the shorter the shooting distance, and the lower the intimacy, the longer the shooting distance.
  • the movement control unit 23 obtains a position where the distance between the robot 2 and the notification target person is the shooting distance, and controls the movement mechanism 29 so that the robot 2 moves to that position.
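• These correlations can be captured by a simple mapping from intimacy to shooting parameters, sketched below; the intimacy scale and the time and distance ranges are illustrative assumptions, not values from this embodiment.

```python
# Minimal sketch of intimacy-dependent shooting conditions: shooting time grows with
# intimacy (positive correlation) and shooting distance shrinks with intimacy (negative
# correlation). The intimacy scale and the time/distance ranges are illustrative assumptions.
def shooting_conditions(intimacy: float) -> tuple:
    """Map an intimacy value in [0.0, 1.0] to (shooting time in seconds, shooting distance in metres)."""
    intimacy = max(0.0, min(1.0, intimacy))
    shooting_time = 5.0 + 25.0 * intimacy        # 5 s at low intimacy .. 30 s at high intimacy
    shooting_distance = 3.0 - 2.0 * intimacy     # 3 m at low intimacy .. 1 m at high intimacy
    return shooting_time, shooting_distance

print(shooting_conditions(0.2))   # low intimacy: shorter video, shot from farther away
print(shooting_conditions(0.9))   # high intimacy: longer video, shot from closer
```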
  • the predetermined familiarity condition may be a condition that the familiarity is equal to or higher than a reference value.
• Alternatively, the predetermined intimacy condition may be a condition that the intimacy is less than a reference value. The condition may also involve the intimacy between another robot 2 and the notification target person. For example, the intimacy condition may be that the intimacy A between the robot 2a and the notification target person reaches or exceeds a value obtained by multiplying the intimacy B between the other robot 2b and the notification target person by a predetermined ratio. That is, a result of comparing the intimacies of some or all of the plurality of robots 2, with weighting applied, may be used as the intimacy condition.
  • FIG. 13 is a diagram showing a first specific example of a notification operation in which two robots 2 cooperate.
  • the robot 2a that has found the notification target notifies the robot 2b of the discovery location, and the notified robot 2b moves to the discovery location and performs a notification operation. That is, when the robot 2a performs the notification operation, the robot 2b is called in, and the called robot 2b also performs an effect to perform the notification operation together with the robot 2a.
• For example, when the robot 2a discovers the notification target person sleeping in the bedroom and tries to wake him up, the robot 2a notifies the robot 2b that the notification target person has been found in the bedroom. Thereafter, the robot 2a starts a wake-up operation for the notification target person.
  • the robot 2b receives the notification that the notification target person has been found in the bedroom, the robot 2b starts moving to the bedroom.
• When the robot 2b arrives, it starts a wake-up operation for the notification target person together with the robot 2a. The number of robots 2 performing the wake-up operation therefore increases from one to two, and the wake-up effect on the notification target person is strengthened partway through. Even a notification target person who is deeply asleep and does not wake up easily can thus be woken up more easily.
• For this cooperation, the robot 2a has a discovery location notification transmission unit (not shown) for transmitting to the robot 2b a notification of the location where the notification target person was found, that is, a discovery location notification, and the robot 2b has a discovery location notification receiving unit (not shown) for receiving the discovery location notification from the robot 2a.
• In step S31, the execution information is acquired as described in step S21 of FIG. 5 and in the section "acquisition of execution information by voice input".
• In step S32, as described in step S22 of FIG. 5, the search unit 25 of the robot 2a calculates the movement route to the search location.
• In step S33, as described in step S23 of FIG. 5, the search unit 25 of the robot 2a starts searching for the notification target person, and the movement control unit 23 of the robot 2a controls the moving mechanism 29 to start moving to the search location.
• In step S34, as described in step S24 of FIG. 5, the movement control unit 23 of the robot 2a determines that the robot 2a has reached the search location. Further, in step S35, as described in step S25 of FIG. 5, the search unit 25 of the robot 2a finds the notification target person. Then, the discovery location notification transmission unit of the robot 2a transmits a discovery location notification to the robot 2b (step S36).
• In step S37, the notification operation execution unit 26 of the robot 2a performs the notification operation as exemplified in steps S27 to S29 of FIG. 5.
• On the robot 2b side, when the discovery location notification is received, the search unit 25 of the robot 2b calculates a movement route to the discovery location (step S41). Then, the movement control unit 23 of the robot 2b controls the moving mechanism 29 to start moving to the discovery location (step S42). When the movement control unit 23 of the robot 2b determines that the robot 2b has reached the discovery location (step S43), the notification operation execution unit 26 of the robot 2b performs the notification operation (step S44).
  • the robot 2b may perform the search simultaneously with the robot 2a. Then, when the robot 2b first finds the notification target person, the robot 2b transmits a discovery location notification to the robot 2a, and the robot 2a that has received the discovery location notification moves to the discovery location, contrary to the example of FIG. Then, the notification operation may be performed.
  • the robot 2b may further include a discovery location notification transmission unit, and the robot 2a may further include a discovery location notification reception unit.
  • the discovery location notification may be transmitted via the data providing device 10.
• That is, the discovery location notification transmitting unit of the robot 2a may transmit a discovery location notification to the data providing device 10, a discovery location notification transferring unit (not shown) of the data providing device 10 may receive the discovery location notification and transfer it to the robot 2b, and the discovery location notification receiving unit of the robot 2b may receive the transferred discovery location notification.
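• The flow of this first cooperative example, with a simple relay standing in for the data providing device 10, can be sketched as follows; the class and method names are hypothetical, and the movement and notification are reduced to print statements.

```python
# Minimal sketch of the first cooperative example; the Robot class, its methods, and the
# relay function are hypothetical, with the relay standing in for the discovery location
# notification transfer via the data providing device 10.
class Robot:
    def __init__(self, name: str):
        self.name = name

    def notify_operation(self, location: str):
        print(f"{self.name}: performing the notification operation at the {location}")

    def on_discovery_location(self, location: str):
        # Receiving a discovery location notification: move there and join the operation.
        print(f"{self.name}: moving to the {location}")
        self.notify_operation(location)

def relay_discovery_location(receiver: Robot, location: str):
    """Stands in for forwarding the discovery location notification to the other robot."""
    receiver.on_discovery_location(location)

robot_2a, robot_2b = Robot("robot 2a"), Robot("robot 2b")
# Robot 2a finds the notification target person in the bedroom (step S36) and keeps notifying (step S37),
# while robot 2b is called in and joins partway through (steps S41 to S44).
relay_discovery_location(robot_2b, "bedroom")
robot_2a.notify_operation("bedroom")
```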
  • FIG. 14 is a diagram showing a second specific example of the notification operation in which two robots 2 cooperate.
  • the robots 2a and 2b perform the notification operation in synchronization.
  • approaching the notification target person to a distance equal to or smaller than a predetermined reference is expressed as “approaching the notification target person”.
  • the robot 2a that has approached the notification target first waits until the robot 2b approaches the notification target.
  • the robots 2a and 2b perform the notification operation at the same time.
  • the robot 2a finds a notification target person sleeping in the bedroom, it approaches the notification target person. Then, the robot 2a notifies the robot 2b that it has approached the notification target person. The robot 2a waits without performing an alarming operation until the robot 2b approaches the notification target person. On the other hand, the robot 2b also finds the notification target person sleeping in the bedroom and approaches the notification target person. Then, when both the robots 2a and 2b approach the notification target, the robots 2a and 2b simultaneously start an alarming operation on the notification target. Therefore, the alarming effect is strong, and the notification target person can be easily caused to wake up. In addition, it is possible to produce a situation in which the robots 2a and 2b are aligned and play mischief.
• The robot 2a has an approach notification transmitting unit (not shown) that sends the other robot 2b a notification that it has approached the notification target person (hereinafter referred to as an "approach notification"), and an approach notification receiving unit (not shown) that receives an approach notification from the other robot 2b. Similarly, the robot 2b also has an approach notification transmitting unit and an approach notification receiving unit.
• The processing of the robot 2a shown in steps S51 to S54 of FIG. 14 is the same as the processing of the robot 2a shown in steps S31 to S34 of FIG. 13.
• When the search unit 25 of the robot 2a finds the notification target person and determines that the robot 2a has approached the notification target person (step S55), the approach notification transmitting unit of the robot 2a transmits an approach notification to the robot 2b (step S56). Then, the robot 2a waits until it receives an approach notification from the robot 2b.
• The processing of the robot 2b shown in steps S61 to S63 of FIG. 14 is the same as the processing of the robot 2a shown in steps S31 to S33 of FIG. 13. It is assumed that the robot 2b receives the approach notification from the robot 2a before arriving at the search location. After the approach notification receiving unit of the robot 2b receives the approach notification from the robot 2a, the movement control unit 23 of the robot 2b determines that the robot 2b has reached the search location (step S64). Further, when the search unit 25 of the robot 2b finds the notification target person and determines that the robot 2b has approached the notification target person (step S65), the approach notification transmitting unit of the robot 2b transmits an approach notification to the robot 2a (step S66).
  • the notification operation execution unit 26 of the robot 2b determines that the predetermined cooperation condition is satisfied, and performs the notification operation.
  • the predetermined cooperation condition is that the approach notification is received from the other robot 2a and that the own robot 2b approaches the notification target.
  • the notification operation execution unit 26 of the robot 2a determines that the predetermined cooperation condition is satisfied, and performs the notification operation.
  • the predetermined cooperation condition is that an approach notification is received from the other robot 2b, and that the own robot 2a approaches the notification target person.
  • the approach notification may be transmitted via the data providing device 10.
• That is, the approach notification transmitting unit of the robot 2a may transmit an approach notification to the data providing device 10, an approach notification transferring unit (not shown) of the data providing device 10 may receive the approach notification and transfer it to the robot 2b, and the approach notification receiving unit of the robot 2b may receive the transferred approach notification.
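• A minimal event-driven sketch of this cooperation condition is shown below; the class and method names are hypothetical, and the direct method call stands in for the approach notification exchanged between the robots (or relayed through the data providing device 10).

```python
# Minimal sketch of the cooperation condition in the second specific example: a robot starts
# the notification operation only after it has approached the notification target person AND
# has received an approach notification from the other robot. Class and method names are hypothetical.
class CooperatingRobot:
    def __init__(self, name: str):
        self.name = name
        self.approached = False
        self.peer_approach_received = False

    def on_own_approach(self, peer: "CooperatingRobot"):
        self.approached = True
        peer.on_peer_approach()          # send an approach notification to the other robot
        self.try_start()

    def on_peer_approach(self):
        self.peer_approach_received = True
        self.try_start()

    def try_start(self):
        if self.approached and self.peer_approach_received:   # predetermined cooperation condition
            print(f"{self.name}: starting the notification operation")

robot_2a, robot_2b = CooperatingRobot("robot 2a"), CooperatingRobot("robot 2b")
robot_2a.on_own_approach(robot_2b)   # robot 2a approaches first and waits
robot_2b.on_own_approach(robot_2a)   # once both have approached, both start together
```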
  • FIG. 15 is a diagram showing a third specific example of the notification operation in which two robots 2 cooperate.
  • the timing of the notification operation by the robot 2a is shifted from the timing of the notification operation by the robot 2b. Therefore, the timings of the notification operations by the two robots 2a and 2b do not overlap. For example, since the voice output of the robot 2a and the voice output of the robot 2b are not performed at the same time, it is not noisy and the content of the voice can be easily heard.
• The third specific example relates to control at the stage where both the robots 2a and 2b perform the notification operation. That is, it relates to control after the timing at which the notification operation shown in step S44 of FIG. 13 is started in the first specific example, or after the timing at which the notification operations shown in steps S57 and S67 of FIG. 14 are started in the second specific example.
• The robot 2a has an operation notification transmitting unit (not shown) that sends the other robot 2b a notification that it has performed the notification operation (hereinafter referred to as an "operation notification"), and an operation notification receiving unit (not shown) that receives an operation notification from the other robot 2b.
  • the robot 2b has an operation notification transmitting unit and an operation notification receiving unit.
• In this example, the robot 2a performs the notification operation first, and thereafter the robots 2a and 2b take turns, each performing the notification operation twice.
  • the notification operation execution unit 26 of the robot 2a performs a notification operation (step S71).
  • the operation notification transmission unit of the robot 2a transmits an operation notification to the robot 2b (Step S72). Then, the robot 2a waits until an operation notification is received from the robot 2b.
• When the operation notification receiving unit of the robot 2b receives the operation notification from the robot 2a, the robot 2b waits for a predetermined time, and then the notification operation execution unit 26 of the robot 2b performs the notification operation (step S82).
• The predetermined waiting time is an interval (for example, 1 second to 5 seconds) that gives the impression that the robots 2a and 2b are repeating the notification while watching the notification target person's reaction, rather than pressing the notification on the target person all at once.
  • the operation notification transmitting unit of the robot 2b transmits an operation notification to the robot 2a (Step S83). Then, the robot 2b waits until receiving the operation notification from the robot 2a.
• When the operation notification receiving unit of the robot 2a receives the operation notification from the robot 2b (step S73), the robot 2a waits for a predetermined time, and then the notification operation execution unit 26 of the robot 2a performs the notification operation (step S74).
  • the operation notification transmitting unit of the robot 2a transmits an operation notification to the robot 2b (Step S75). Then, the robot 2a ends the entire process.
• Both of the robots 2a and 2b may end the process of the third specific example at the timing when the state information of the notification target person is determined to be awake. In that case, for example, the second notification operation by the robot 2b is not performed. That is, an unnecessary wake-up operation is not performed on a notification target person who has already woken up.
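• The turn-taking with this early stop can be sketched as follows; the notify callables and the is_awake check are hypothetical stand-ins for the operation notifications and the state information described above.

```python
# Minimal sketch of the turn-taking in the third specific example with the early stop; the
# notify callables and the is_awake check are hypothetical stand-ins for the operation
# notifications and the state information described above.
import time

def alternating_notification(notify_fns, rounds=2, interval_s=3.0, is_awake=lambda: False):
    """Each robot takes a turn up to `rounds` times, stopping early once the person is awake."""
    for _ in range(rounds):
        for notify in notify_fns:            # e.g. [robot 2a's operation, robot 2b's operation]
            if is_awake():
                return                       # the target person is already up; skip unnecessary operations
            notify()                         # perform the notification operation (and notify the peer)
            time.sleep(interval_s)           # 1 to 5 second pause so the two robots' operations do not overlap

alternating_notification([lambda: print("robot 2a: wake-up operation"),
                          lambda: print("robot 2b: wake-up operation")])
```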
  • ⁇ Notification operation by message reception> When the autonomous behavior robot 1 receives a message addressed to a user, a notification operation may be performed with the user as a notification target.
• For example, suppose the user who is a parent sends, from the user terminal 3, an e-mail with the message "You can eat the cake in the refrigerator." in the body to the user who is a child.
• In this case, the robot 2 can read out the message to the child who is in the child room.
  • the child's e-mail address and the child's search location “child room” are set in advance as the child user information.
• When the parent sets the child's e-mail address as the destination and sends the e-mail containing the message "You can eat the cake in the refrigerator." from the user terminal 3, the autonomous behavior robot 1 receives this e-mail.
• The autonomous behavior robot 1 determines from the destination e-mail address that the notification target person is the child, and the robot 2 moves to the "child room", which is the child's search location, and searches for the child.
• When the robot 2 finds the child, the robot 2 reads out the body of the e-mail: "You can eat the cake in the refrigerator."
• In this first method, the notification target person is specified based on the e-mail address, and the search location of the notification target person is specified based on the user information of the notification target person. Then, the body of the e-mail is read out as the notification operation.
• As a second example, suppose an e-mail whose body contains the location expression "@bed" is sent to the husband's e-mail address; the autonomous behavior robot 1 receives this e-mail.
• The autonomous behavior robot 1 determines from the destination e-mail address that the notification target person is the husband, and specifies the search location "bedroom" corresponding to the location expression "@bed" in the body of the e-mail. Further, the autonomous behavior robot 1 specifies the notification operation "wake-up operation" for the husband based on the user information of the husband. Then, the robot 2 moves to the "bedroom", which is the search location, and searches for the husband. When the robot 2 finds the husband, the robot 2 performs the notification operation "wake-up operation" for the husband.
• In this second method, the notification target person is specified based on the e-mail address, the search location is specified from the body of the e-mail, and the notification operation for the notification target person is specified based on the user information of the notification target person.
• As a third example, suppose an e-mail whose body is "@bed Please call." is sent to the husband's e-mail address; the autonomous behavior robot 1 receives this e-mail.
• The autonomous behavior robot 1 determines from the destination e-mail address that the notification target person is the husband, and specifies the search location "bedroom" corresponding to the location expression "@bed" in the body of the e-mail. Then, the robot 2 moves to the "bedroom", which is the search location, and searches for the husband. When the robot 2 finds the husband, it reads out the text "Please call.", that is, the body of the e-mail excluding the location expression "@bed".
• In this third method, the notification target person is specified based on the e-mail address, and the search location is specified from the body of the e-mail. Then, the body of the e-mail, excluding the location expression, is read out as the notification operation.
• In a fourth method, the robot 2 can perform a predetermined notification operation "wake-up operation" for the husband at a predetermined search location "bedroom".
  • the text of the e-mail is arbitrary.
  • the body of the email may be empty.
  • an e-mail address of the husband, a search location "bedroom" of the husband, and a notification operation "wake-up operation” for the husband are set in advance as the user information of the husband.
• When an e-mail is sent to the husband's e-mail address, the autonomous behavior robot 1 determines from the destination e-mail address that the notification target person is the husband. Further, the autonomous behavior robot 1 specifies the husband's search location "bedroom" and the notification operation "wake-up operation" for the husband based on the user information of the husband. Then, the robot 2 moves to the "bedroom", which is the search location, and searches for the husband. When the robot 2 finds the husband, the robot 2 performs the notification operation "wake-up operation" for the husband.
• In this fourth method, the notification target person is specified based on the e-mail address, and the search location of the notification target person and the notification operation for the notification target person are specified based on the user information of the notification target person.
  • the message transmission method is not limited.
  • the message may be transmitted by a method other than e-mail.
  • the message may be transmitted by a message exchange application.
  • the user information is set by, for example, an application of the user terminal 3.
  • the set user information is transmitted to the data providing device 10 by a user information transmitting unit (not shown) in the application of the user terminal 3.
  • the data providing device 10 includes a user information receiving unit (not shown), a user information storage unit (not shown), a message receiving unit (not shown), and a notification target specifying unit (not shown).
  • the user information received by the user information receiving unit is stored in the user information storage unit.
  • the user information storage unit stores user information such as user identification information for message communication, a search location, and a type of notification operation in association with the user.
  • the user identification information for message communication is, for example, an e-mail address or a message exchange application ID.
• The user information storage unit may also store, as user information, information about the user such as the user's name, information indicating the user's physical characteristics, personal belongings, clothes, and intimacy with the robot 2. Information indicating the physical characteristics of the user includes, for example, information for recognizing the face of the notification target person, information for recognizing the fingerprint of the notification target person, and information for recognizing the physique of the notification target person.
  • the robot 2 may include a user information receiving unit, a user information storage unit, a message receiving unit, and a notification target person specifying unit. In that case, the user information transmitting unit in the application of the user terminal 3 may transmit the user information to the robot 2.
  • FIG. 16 is a flowchart showing a notification operation by receiving a message.
• When a message is received, the notification target person specifying unit refers to the user information storage unit and specifies, as the notification target person, the user corresponding to the user identification information set as the destination of the message (step S72).
  • the search unit 25 refers to the user information storage unit and specifies a search location corresponding to the user who is the notification target (step S73).
  • the search unit 25 may specify the search location based on the location expression included in the received message. For example, when the location expression “ ⁇ bed” is included in the message, the search unit 25 may specify “bedroom” as the search location.
• In step S74, as described in step S22 of FIG. 5, the search unit 25 calculates the movement route to the search location. Then, in step S75, as described in step S23 of FIG. 5, the search unit 25 starts searching for the notification target person, and the movement control unit 23 controls the moving mechanism 29 to start moving to the search location.
  • in step S76, as described in step S24 of FIG. 5, the movement control unit 23 determines that the robot 2 has reached the search location. In step S77, as described in step S25 of FIG. 5, the search unit 25 finds the notification target person, and in step S78 the notification operation execution unit 26 performs the notification operation.
  • the notification operation in this example is, for example, reading out a received message.
  • the type of the notification operation may be indicated by the received message.
  • the notification operation execution unit 26 may specify the type of the notification operation corresponding to the user who is the notification target person by referring to the user information storage unit.
  • in a first method, in step S73 the search unit 25 refers to the user information storage unit and specifies the search location corresponding to the user who is the notification target person, and in step S78 the notification operation execution unit 26 reads out the received message.
  • in a second method, in step S73 the search unit 25 specifies the search location based on a location expression included in the received message, and in step S78 the notification operation execution unit 26 refers to the user information storage unit and specifies the type of notification operation corresponding to the user who is the notification target person.
  • in a third method, in step S73 the search unit 25 specifies the search location based on a location expression included in the received message, and in step S78 the notification operation execution unit 26 reads out the received message.
  • in a fourth method, in step S73 the search unit 25 refers to the user information storage unit and specifies the search location corresponding to the user who is the notification target person, and in step S78 the notification operation execution unit 26 refers to the user information storage unit and specifies the type of notification operation corresponding to the user who is the notification target person.
  • the first to fourth methods may be arbitrarily combined.
  • the first method and the second method may be combined.
  • the first method and the third method may be combined.
  • the first method and the fourth method may be combined.
  • the second method and the third method may be combined.
  • the second method and the fourth method may be combined.
  • the third method and the fourth method may be combined.
  • the first method, the second method, and the third method may be combined.
  • the first method, the second method, and the fourth method may be combined.
  • the first method, the third method, and the fourth method may be combined.
  • the second method, the third method, and the fourth method may be combined.
  • the first method, the second method, the third method, and the fourth method may be combined.
  • in step S73, when the received message includes a location expression, the search unit 25 may specify the search location from that location expression; when the received message does not include a location expression, the search unit 25 may specify the search location corresponding to the user who is the notification target person by referring to the user information storage unit.
  • in step S78, when the content of the received message is empty, the notification operation execution unit 26 may refer to the user information storage unit and specify the type of notification operation corresponding to the user who is the notification target person; when the content of the received message is not empty, the notification operation execution unit 26 may read out the received message (a minimal sketch of this fallback logic is shown after this list).
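The fallback logic of steps S73 and S78 described in the last two items can be sketched as follows. This is a minimal Python illustration only; the `Message` and `UserInfo` records, their field names, and the location-expression table are assumptions made for the example and are not part of the described embodiment.

```python
from dataclasses import dataclass
from typing import Optional, Dict

@dataclass
class UserInfo:
    # Hypothetical per-user record, mirroring the user information storage unit.
    name: str
    message_id: str          # e-mail address or messaging-app ID
    search_location: str     # e.g. "bedroom"
    notification_type: str   # e.g. "alarm operation"

@dataclass
class Message:
    destination: str         # user identification information for message communication
    body: str                # may be empty
    location_expression: Optional[str] = None  # e.g. "bed", if the message contains one

# A toy mapping from location expressions to rooms (illustrative only).
LOCATION_EXPRESSIONS = {"bed": "bedroom", "sofa": "living room"}

def specify_target(msg: Message, users: Dict[str, UserInfo]) -> UserInfo:
    # Step S72: the destination identifies the notification target person.
    return users[msg.destination]

def specify_search_location(msg: Message, target: UserInfo) -> str:
    # Step S73: prefer a location expression in the message, otherwise
    # fall back to the location stored for the target person.
    if msg.location_expression in LOCATION_EXPRESSIONS:
        return LOCATION_EXPRESSIONS[msg.location_expression]
    return target.search_location

def specify_notification(msg: Message, target: UserInfo) -> str:
    # Step S78: read the message aloud if it has content, otherwise
    # perform the notification operation registered for the target person.
    if msg.body.strip():
        return f"read aloud: {msg.body}"
    return target.notification_type

if __name__ == "__main__":
    users = {"husband@example.com": UserInfo("husband", "husband@example.com",
                                             "bedroom", "alarm operation")}
    msg = Message(destination="husband@example.com", body="", location_expression="bed")
    target = specify_target(msg, users)
    print(specify_search_location(msg, target))  # -> bedroom
    print(specify_notification(msg, target))     # -> alarm operation
```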

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Electric Vacuum Cleaner (AREA)

Abstract

A robot comprising: an execution information acquisition unit that acquires execution information for executing a notification operation that notifies a person to be notified of information; a search unit that searches for the person to be notified on the basis of the execution information acquired by the execution information acquisition unit; and a notification operation execution unit that executes the notification operation, on the basis of the execution information, for the person to be notified found by the search unit.

Description

Robot and its control method and control program
 The present invention relates to a robot, a control method thereof, and a control program.
 Conventionally, there is a robot that takes an image with a camera while moving autonomously in a house, recognizes the indoor space from the captured image, sets a movement route based on the recognized space, and moves indoors. The robot movement route is set by the user creating, in advance, a map that defines the route on which the robot moves. The robot can move along a route determined based on the created map (for example, see Patent Literature 1).
 There is also a robot that can output an alarm at a predetermined time (for example, see Patent Literature 2).
JP 2016-103277 A; JP 2017-119320 A
 However, the conventional robot may not be able to notify the target person of information when a specific target person to be notified of information such as an alarm is not present near the robot.
 The present invention has been made in view of the above circumstances, and one object of the present invention is, in one embodiment, to provide a robot capable of notifying a specific notification target person of information, as well as a control method and a control program therefor.
 (1) In order to solve the above problem, the robot according to the embodiment includes: an execution information acquisition unit that acquires execution information for executing a notification operation of information to be notified to a notification target person; a search unit that searches for the notification target person based on the execution information acquired by the execution information acquisition unit; and a notification operation execution unit that executes the notification operation, based on the execution information, for the notification target person found by the search unit.
 (2) In the robot of the embodiment, the execution information acquisition unit acquires, as execution information, location information relating to a location designated by the user.
 (3) In the robot of the embodiment, the execution information acquisition unit acquires, from the user terminal, location information on a location designated by the user operating a map displayed on the user terminal operated by the user.
 (4) In the robot of the embodiment, the search unit searches for the notification target person further based on a captured image obtained by photographing the surrounding space.
 (5) In the robot of the embodiment, the search unit searches for the notification target person by recognizing a person included in the captured image.
 (6) The robot of the embodiment further includes a movement control unit that controls a moving mechanism, wherein the search unit calculates a movement route of the moving mechanism based on the execution information, and the movement control unit controls the moving mechanism based on the movement route calculated by the search unit.
 (7) In the robot of the embodiment, the search unit calculates the movement route further based on restriction information for restricting movement by the moving mechanism.
 (8) The robot of the embodiment further includes a marker recognition unit that recognizes a predetermined marker included in a captured image obtained by photographing the surrounding space, and the movement control unit controls the moving mechanism based on the marker recognized by the marker recognition unit.
 (9) The robot of the embodiment further includes a state information acquisition unit that acquires state information relating to the state of the notification target person found by the search unit, and the notification operation execution unit changes the notification operation according to the state acquired by the state information acquisition unit.
 (10) In the robot of the embodiment, the state information acquisition unit acquires whether the notification target person is sleeping or awake, and, as the notification operation, the notification operation execution unit executes an awakening operation to wake up the notification target person when the state is sleeping, and executes a greeting operation toward the notification target person when the state is awake.
 (11) In the robot of the embodiment, the execution information acquisition unit acquires execution information associated with the notification target person, and the notification operation execution unit executes the notification operation associated with the notification target person who has been found.
 (12) In the robot of the embodiment, the execution information acquisition unit acquires, as execution information, intimacy information indicating the level of intimacy between the notification target person and the robot, and the notification operation execution unit executes the notification operation in cooperation with another robot when the intimacy indicated by the intimacy information acquired by the execution information acquisition unit satisfies a predetermined intimacy condition.
 (13) In the robot of the embodiment, the execution information acquisition unit acquires execution information associated with a plurality of notification target persons, and the notification operation execution unit executes, in parallel, the notification operations respectively associated with the plurality of notification target persons.
 (14) In the robot of the embodiment, the execution information acquisition unit acquires, as execution information, time information relating to the time at which the notification operation is to be executed, and the notification operation execution unit executes the notification operation based on the time information.
 (15) The robot of the embodiment further includes a voice recognition unit that recognizes voice input to a microphone and converts the voice into language data, and the execution information acquisition unit specifies the execution information from the converted language data.
 (16) In the robot of the embodiment, when the acquired execution information does not include time information relating to the time at which the notification operation is to be executed, the notification operation execution unit executes the notification operation when a predetermined time has elapsed from the time the execution information was acquired.
 (17) In the robot of the embodiment, when the execution information does not include location information, the execution information acquisition unit uses the location of the microphone to which the voice was input as the location information.
 (18) The robot of the embodiment further includes a photographing unit that photographs the notification target person after the notification operation has been executed, and a transmission unit that transmits image data of the photographed notification target person to the user terminal.
 (19) In the robot of the embodiment, the photographing unit changes the conditions under which the notification target person is photographed according to the intimacy between the notification target person and the robot.
 (20) The robot of the embodiment further includes a state information acquisition unit that acquires state information relating to the state of the notification target person after the notification operation has been executed, and a transmission unit that transmits the acquired state information to the user terminal.
 (21) In order to solve the above problem, the robot includes: a message receiving unit that receives a message addressed to a user of the robot; a notification target person specifying unit that specifies the notification target person from the destination of the received message; a search unit that searches for the notification target person at a place designated by the message or at a place specified in correspondence with the notification target person; and a notification unit that, for the notification target person found by the search unit, reads out the message or performs a notification operation instructed by the message.
 (22) In order to solve the above problem, the robot control method of the embodiment includes: an execution information acquisition step of acquiring, in the robot, execution information for executing a notification operation of information to be notified to a notification target person; a search step of searching for the notification target person based on the execution information acquired in the execution information acquisition step; and a notification operation execution step of executing the notification operation, based on the execution information, for the notification target person found in the search step.
 (23) In order to solve the above problem, the robot control program of the embodiment causes a robot to execute: an execution information acquisition process of acquiring execution information for executing a notification operation of information to be notified to a notification target person; a search process of searching for the notification target person based on the execution information acquired in the execution information acquisition process; and a notification operation execution process of executing the notification operation, based on the execution information, for the notification target person found in the search process.
 (24) In order to solve the above problem, the robot control method includes: a message receiving step of receiving, in the robot, a message addressed to a user of the robot; a notification target person specifying step of specifying the notification target person from the destination of the received message; a search step of searching for the notification target person at a place designated by the message or at a place specified in correspondence with the notification target person; and a notification step of, for the notification target person found in the search step, reading out the message or performing a notification operation instructed by the message.
 (25) In order to solve the above problem, the robot control program causes a robot to execute: a message receiving process of receiving a message addressed to a user of the robot; a notification target person specifying process of specifying the notification target person from the destination of the received message; a search process of searching for the notification target person at a place designated by the message or at a place specified in correspondence with the notification target person; and a notification process of, for the notification target person found in the search process, reading out the message or performing a notification operation instructed by the message.
 According to one embodiment, the robot, its control method, and its control program acquire execution information for executing a notification operation of information to be notified to a notification target person, search for the notification target person based on the acquired execution information, and execute the notification operation, based on the execution information, for the notification target person who has been found, thereby making it possible to notify a specific notification target person of information.
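For orientation only, the overall flow summarized above (acquire execution information, search for the notification target person, then execute the notification operation) might be organized as in the following Python sketch. The class, method, and field names are hypothetical placeholders and the data is hard-coded; this is not the implementation of the embodiment.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExecutionInfo:
    # Hypothetical container for the "execution information" of the summary:
    # who to notify, where to look, what to do, and (optionally) when.
    target_person: str
    location: Optional[str]          # place information, if designated
    notification: str                # e.g. "alarm operation"
    time: Optional[str] = None       # time information, if any

class NotificationRobot:
    def acquire_execution_info(self) -> ExecutionInfo:
        # Execution information acquisition step (stubbed with fixed data here).
        return ExecutionInfo("husband", "bedroom", "alarm operation", "07:00")

    def search(self, info: ExecutionInfo) -> bool:
        # Search step: move to the designated location and look for the person.
        print(f"moving to {info.location} and searching for {info.target_person}")
        return True  # assume the person was found

    def notify(self, info: ExecutionInfo) -> None:
        # Notification operation execution step.
        print(f"executing '{info.notification}' for {info.target_person}")

    def run(self) -> None:
        info = self.acquire_execution_info()
        if self.search(info):
            self.notify(info)

NotificationRobot().run()
```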
FIG. 1 is a block diagram illustrating an example of the software configuration of the autonomous behavior robot according to the embodiment.
FIG. 2 is a block diagram illustrating an example of the hardware configuration of the autonomous behavior robot according to the embodiment.
FIG. 3 is a flowchart illustrating a first example of the operation of the autonomous behavior robot control program according to the embodiment.
FIG. 4 is a flowchart illustrating a second example of the operation of the autonomous behavior robot control program according to the embodiment.
FIG. 5 is a flowchart for when the autonomous behavior robot control program according to the embodiment executes a wake-up operation as the notification operation.
FIG. 6 is a diagram illustrating an example of execution information according to the embodiment.
FIG. 7 is a diagram illustrating an example of a method for setting execution information according to the embodiment.
FIG. 8 is a diagram illustrating an example module configuration of the autonomous behavior robot relating to acquisition of execution information by voice input.
FIG. 9 is a diagram illustrating an example module configuration of the data providing device relating to specifying the attachment location of an attachable device.
FIG. 10 is a diagram illustrating an example module configuration of the data providing device relating to specifying the location of the user terminal.
FIG. 11 is a diagram illustrating an example module configuration of the data providing device relating to specifying the location of the robot.
FIG. 12 is a diagram illustrating an example module configuration of the autonomous behavior robot that provides a captured image and state information after executing the notification operation.
FIG. 13 is a diagram illustrating a first specific example of a notification operation in which two robots cooperate.
FIG. 14 is a diagram illustrating a second specific example of a notification operation in which two robots cooperate.
FIG. 15 is a diagram illustrating a third specific example of a notification operation in which two robots cooperate.
FIG. 16 is a flowchart showing the notification operation triggered by receiving a message.
 Hereinafter, an autonomous behavior robot according to an embodiment of the present invention, together with its control method and control program, will be described in detail with reference to the drawings.
 First, the software configuration of the autonomous behavior robot 1 will be described with reference to FIG. 1. FIG. 1 is a block diagram illustrating an example of the software configuration of the autonomous behavior robot 1 according to the embodiment.
 In FIG. 1, the autonomous behavior robot 1 has a data providing device 10 and a robot 2. The data providing device 10 and the robot 2 are connected by communication and function as the autonomous behavior robot 1. The robot 2 is a mobile robot having the functional units of a photographing unit 21, a marker recognition unit 22, a movement control unit 23, a state information acquisition unit 24, a search unit 25, a notification operation execution unit 26, a notification unit 27, and a moving mechanism 29. The data providing device 10 has the functional units of a first communication control unit 11, a point cloud data generation unit 12, a spatial data generation unit 13, a visualization data generation unit 14, an imaging target recognition unit 15, and a second communication control unit 16. The first communication control unit 11 has the functional units of a captured image acquisition unit 111, a spatial data providing unit 112, and an instruction unit 113. The second communication control unit 16 has the functional units of a visualization data providing unit 161, a designation acquisition unit 162, and an execution information acquisition unit 163. The above functional units of the data providing device 10 of the autonomous behavior robot 1 in the present embodiment are described as functional modules realized by a data providing program (software) that controls the data providing device 10. The marker recognition unit 22, the movement control unit 23, the state information acquisition unit 24, the search unit 25, and the notification operation execution unit 26 of the robot 2 are likewise described as functional modules realized by a program that controls the robot 2 in the autonomous behavior robot 1.
 Functions of the autonomous behavior robot 1 can be added, deleted, or changed by adding, deleting, or changing functional modules. The basic functions of the autonomous behavior robot 1 are described as "basic functions", and the additional functions of the autonomous behavior robot 1 are described as "additional functions". In the following description, the first communication control unit 11, the point cloud data generation unit 12, the spatial data generation unit 13, the visualization data generation unit 14, the imaging target recognition unit 15, and the second communication control unit 16 (the visualization data providing unit 161 and the designation acquisition unit 162) of the data providing device 10, as well as the photographing unit 21, the marker recognition unit 22, and the movement control unit 23 of the robot 2, are described as "basic functions". The execution information acquisition unit 163 of the data providing device 10 and the state information acquisition unit 24, the search unit 25, and the notification operation execution unit 26 of the robot 2 are described as "additional functions".
 [Basic functions]
 The data providing device 10 is a device that can execute a part of the functions of the autonomous behavior robot 1; it is, for example, an edge server that is installed at a location physically close to the robot 2, communicates with the robot 2, and distributes the processing load of the robot 2. In the present embodiment, a case is described in which the autonomous behavior robot 1 is configured from the data providing device 10 and the robot 2, but the functions of the data providing device 10 may be included in the functions of the robot 2. The robot 2 is a robot that can move based on spatial data, and is one mode of a robot whose movement range is determined based on spatial data. The data providing device 10 may be configured in one housing or in a plurality of housings.
 The first communication control unit 11 controls the communication function with the robot 2. Any communication method with the robot 2 may be used; for example, a wireless LAN (Local Area Network), Bluetooth (registered trademark), short-range wireless communication such as infrared communication, or wired communication can be used. The captured image acquisition unit 111, the spatial data providing unit 112, and the instruction unit 113 of the first communication control unit 11 communicate with the robot 2 using the communication function controlled by the first communication control unit 11.
 The captured image acquisition unit 111 acquires a captured image photographed by the photographing unit 21 of the robot 2. The photographing unit 21 is provided in the robot 2 and can change its photographing range as the robot 2 moves. Here, the photographing unit 21, the marker recognition unit 22, the movement control unit 23, the state information acquisition unit 24, the search unit 25, the notification operation execution unit 26, the notification unit 27, and the moving mechanism 29 of the robot 2 will be described.
 The photographing unit 21 photographs the space around the robot 2 and generates a captured image including spatial elements. A spatial element is an element that exists in the space around the robot 2 and makes up that space, for example a wall, a step, a door, or furniture, home appliances, luggage, houseplants, and the like placed in the room. The photographing unit 21 can be composed of one or more cameras. For example, when the photographing unit 21 is a stereo camera composed of two cameras, the photographing unit 21 can photograph a spatial element three-dimensionally from different photographing angles. When the photographing unit 21 is composed of a plurality of cameras, the captured image may be a plurality of image data photographed by the respective cameras or a single image data combining the plurality of image data. The photographing unit 21 is, for example, a video camera using an imaging element such as a CCD (Charge-Coupled Device) sensor or a CMOS (Complementary Metal Oxide Semiconductor) sensor. By photographing a spatial element with two cameras (a stereo camera), the shape of the spatial element can be measured. The photographing unit 21 may also be a camera using ToF (Time of Flight) technology. A ToF camera can measure the shape of a spatial element by irradiating the spatial element with modulated infrared light and measuring the distance to the spatial element. The photographing unit 21 may also be a camera using structured light. Structured light projects light in a stripe or lattice pattern onto a spatial element. By photographing the spatial element from an angle different from that of the structured light, the photographing unit 21 can measure the shape of the spatial element from the distortion of the projected pattern. The photographing unit 21 may be any one of these cameras or a combination of two or more of them.
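For reference, the stereo depth measurement mentioned above follows the usual disparity relation depth = focal length × baseline / disparity. The sketch below only illustrates that relation with made-up numbers; it is not the image processing actually performed by the photographing unit 21.

```python
def stereo_depth(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (in meters) of a point seen by a calibrated stereo pair.

    focal_length_px: focal length expressed in pixels
    baseline_m:      distance between the two camera centers
    disparity_px:    horizontal shift of the point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Example: f = 700 px, baseline = 6 cm, disparity = 14 px -> 3.0 m
print(stereo_depth(700.0, 0.06, 14.0))
```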
 The photographing unit 21 is attached to the robot 2 and moves together with the robot 2. However, the photographing unit 21 may be installed separately from the robot 2.
 The captured image photographed by the photographing unit 21 is provided to the captured image acquisition unit 111 using a communication method corresponding to the first communication control unit 11. The captured image is temporarily stored in the storage unit of the robot 2, and the captured image acquisition unit 111 acquires the temporarily stored captured image in real time or at a predetermined communication interval.
 The marker recognition unit 22 recognizes a predetermined marker included in the image captured by the photographing unit 21. A marker is a spatial element that indicates a restriction on the movement of the robot 2. Restricting movement means restricting an operation associated with the movement of the robot 2, for example, restricting the moving speed of the robot 2, prohibiting the robot 2 from entering, or prohibiting a predetermined operation of the robot during movement (for example, generation of sound from the robot 2). A marker is a shape, pattern, or color of an article, or a combination of these, that can be recognized from the captured image. When the user places a marker at a position where the movement of the robot 2 is to be restricted, the marker is photographed together with furniture and the like when the robot 2 photographs the space with the photographing unit 21. The marker may be a planar article or a three-dimensional article. The marker is, for example, a sticker or sheet of paper on which a two-dimensional code or a specific combination of colors or shapes is printed. The marker may also be an ornament or a rug of a specific color or shape. By using printed matter or everyday objects as markers in this way, the user does not need to secure a power supply or an installation space for the markers. In addition, the movement of the robot can be restricted according to the user's intention without spoiling the atmosphere of the room. Furthermore, since the user can also see the markers, the user can intuitively grasp the movement restriction range and easily change it. Markers are installed by the user, for example, by being attached to a wall or furniture or placed on the floor. By recognizing the marker image included in the captured image, the marker recognition unit 22 can recognize that the movement of the robot 2 is restricted.
 The marker recognition unit 22 stores the visual features of markers in advance. For example, the marker recognition unit 22 stores in advance two-dimensional codes and three-dimensional objects to be recognized as markers. The marker recognition unit 22 may recognize an object registered in advance by the user as a marker. For example, when the user registers a flowerpot photographed with the camera of the user terminal 3 as a marker, a flowerpot placed in a corridor or the like can be recognized as a marker. This allows the user to install, as a marker, an object that does not look out of place where it is installed. The marker recognition unit 22 may also recognize spatial elements other than objects as markers. For example, the marker recognition unit 22 may recognize a user's gesture, such as the user crossing their arms in front of their body, as a marker. In that case, the marker recognition unit 22 recognizes the position where the user made the gesture as the installation position of the marker.
 The marker recognition unit 22 recognizes the position where a marker is attached or installed (hereinafter, the "installation position"). The installation position is a position, within the space represented by the spatial data, where the marker is installed. The installation position can be recognized, for example, from the distance between the current position of the robot 2 and the photographed marker, based on the spatial data recognized by the robot 2. For example, when the size of the marker is known in advance, the marker recognition unit 22 calculates the distance between the robot 2 and the marker from the size of the marker image included in the captured image, and can recognize the installation position of the marker based on the current position of the robot 2 and the photographing direction (for example, a bearing from an azimuth meter, not shown). The installation position may also be recognized from the position of the marker relative to a spatial element whose position in the space is already known. For example, when the position of a door is already known, the marker recognition unit 22 may recognize the installation position from the relative position of the marker and the door. When the captured image is taken with a depth camera, the installation position can be recognized based on the photographing depth of the marker captured by the depth camera.
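The distance estimation from a marker of known size described above can be illustrated with the standard pinhole-camera relation. The following sketch assumes a known marker width, a focal length expressed in pixels, and a flat 2-D robot pose; all of these are assumptions for illustration, not the processing of the marker recognition unit 22.

```python
import math

def marker_distance(marker_width_m: float, focal_length_px: float,
                    marker_width_px: float) -> float:
    # Pinhole model: apparent size shrinks in proportion to distance.
    return marker_width_m * focal_length_px / marker_width_px

def marker_position(robot_xy, heading_rad: float, distance_m: float):
    # Installation position in the map frame, given the robot pose and the
    # shooting direction (heading). A flat 2-D approximation.
    x, y = robot_xy
    return (x + distance_m * math.cos(heading_rad),
            y + distance_m * math.sin(heading_rad))

d = marker_distance(0.10, 700.0, 35.0)                 # 10 cm marker seen as 35 px -> 2.0 m
print(d, marker_position((1.0, 2.0), math.pi / 2, d))  # marker roughly 2 m "north" of the robot
```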
 The movement control unit 23 controls the moving mechanism 29. The movement control unit 23 can control the moving mechanism 29 based on the movement route (described later) calculated by the search unit 25. By controlling the moving mechanism 29, the movement control unit 23 controls the moving direction and moving speed of the robot 2. The movement control unit 23 can recognize the current position of the robot 2 based on the spatial data recognized by the robot 2. The movement control unit 23 can move the robot 2 by controlling the moving mechanism 29 from the current position on the movement route. The movement control unit 23 may, for example, appropriately correct the current position using spatial elements such as walls and corridors while moving along the movement route.
 The movement control unit 23 may also control the moving mechanism 29 based on the marker recognized by the marker recognition unit 22. For example, the movement control unit 23 restricts movement by the moving mechanism 29 based on the installation position of the marker recognized by the marker recognition unit 22. The installation position of a marker includes a point, a line, a surface, or a space set based on the installation positions of one or more markers.
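One way to picture the restriction applied on the basis of marker installation positions is as a check of a planned path against restricted regions derived from the markers. The sketch below assumes, purely for illustration, that each marker defines a circular no-go region of fixed radius.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def blocked_by_markers(path: List[Point], markers: List[Point],
                       radius_m: float = 0.5) -> bool:
    """Return True if any waypoint of the path falls inside a restricted
    circle of the given radius around a recognized marker."""
    for (px, py) in path:
        for (mx, my) in markers:
            if (px - mx) ** 2 + (py - my) ** 2 <= radius_m ** 2:
                return True
    return False

path = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(blocked_by_markers(path, markers=[(2.0, 0.2)]))  # True: last waypoint is too close
```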
 The spatial data providing unit 112 provides the robot 2 with the spatial data generated by the spatial data generation unit 13. The spatial data is data representing the spatial elements that the robot recognizes in the space where the robot 2 exists. The robot 2 can move within the range defined by the spatial data. That is, the spatial data functions as a map for determining the movable range of the robot 2. The robot 2 is provided with the spatial data by the spatial data providing unit 112. For example, the spatial data can include position data of spatial elements over which the robot 2 cannot move, such as walls, furniture, appliances, and steps. Based on the provided spatial data, the robot 2 can determine whether or not a place is one where it can move. The robot 2 may also be able to recognize whether the spatial data contains an ungenerated range. Whether an ungenerated range is contained can be determined, for example, by whether part of the spatial data contains a space with no spatial elements.
 The instruction unit 113 instructs the robot 2 to perform photographing based on the spatial data generated by the spatial data generation unit 13. Because the spatial data generation unit 13 creates spatial data based on the captured images acquired by the captured image acquisition unit 111, when creating spatial data of a room, for example, the spatial data may contain ungenerated portions for parts that have not been photographed. Also, if a captured image is unclear, the created spatial data may contain noise and therefore inaccurate portions. When the spatial data contains an ungenerated portion, the instruction unit 113 may issue a photographing instruction for that ungenerated portion. When the spatial data contains an inaccurate portion, the instruction unit 113 may issue a photographing instruction for that inaccurate portion. The instruction unit 113 may spontaneously instruct photographing based on the spatial data. The instruction unit 113 may also instruct photographing based on an explicit instruction from a user who has checked the visualization data (described later) generated based on the spatial data. By designating a region included in the visualization data and instructing the robot 2 to photograph it, the user can have the robot recognize the space and generate spatial data.
 The instruction unit 113 may also instruct photographing of a marker installed in a region. Photographing of a region for which creation of spatial data has been instructed may include photographing conditions such as the coordinate position of the robot 2 (the photographing unit 21), the photographing direction of the photographing unit 21, and the resolution. When the spatial data whose creation has been instructed relates to an ungenerated region, the spatial data generation unit 13 adds the newly created spatial data to the existing spatial data; when the spatial data whose creation has been instructed relates to re-creation, the spatial data generation unit 13 generates spatial data that updates the existing spatial data. When a marker is included in the captured image, spatial data including the recognized marker may be generated.
 The point cloud data generation unit 12 generates three-dimensional point cloud data of spatial elements based on the captured images acquired by the captured image acquisition unit 111. The point cloud data generation unit 12 generates the point cloud data by converting the spatial elements included in a captured image into a set of three-dimensional points in a predetermined space. As described above, spatial elements are walls, steps, doors, and furniture, home appliances, luggage, houseplants, and the like placed in the room. Since the point cloud data generation unit 12 generates the point cloud data based on captured images of spatial elements, the point cloud data represents the surface shape of the photographed spatial elements. A captured image is generated by the photographing unit 21 of the robot 2 photographing at a predetermined photographing position and at a predetermined photographing angle. Therefore, when the robot 2 photographs a spatial element such as furniture from the front, point cloud data cannot be generated for shapes that were not photographed, such as the back of the furniture, and even if there is a space behind the furniture into which the robot 2 could move, the robot 2 cannot recognize it. On the other hand, when the robot 2 moves and photographs the furniture from a photographing position at its side, point cloud data can be generated for the shape of the back of the spatial element, so the space can be grasped correctly.
 The spatial data generation unit 13 generates spatial data that defines the movable range of the robot 2 based on the point cloud data of the spatial elements generated by the point cloud data generation unit 12. Since the spatial data is generated based on point cloud data in the space, the spatial elements included in the spatial data also have three-dimensional coordinate information. The coordinate information may include information on the position, length (including height), area, or volume of points. The robot 2 can determine the range in which it can move based on the position information of the spatial elements included in the generated spatial data. For example, when the robot 2 has a moving mechanism 29 that moves horizontally on the floor, the robot 2 can determine that movement is impossible when a step from the floor, which is a spatial element in the spatial data, is at or above a predetermined height (for example, 1 cm or more). On the other hand, when a spatial element such as a table top or a bed is at a certain height above the floor in the spatial data, the robot 2 determines the range whose height from the floor is at or above a predetermined height (for example, 60 cm or more) to be a movable range, taking into account the clearance relative to its own height. The robot 2 also determines, in the spatial data, a range in which the gap between a wall and furniture, which are spatial elements, is at or above a predetermined width (for example, 40 cm or more) to be a movable range, taking into account the clearance relative to its own width.
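The clearance rules given as examples above (a step of 1 cm or more blocks movement, a surface needs roughly 60 cm of headroom checked against the robot's own height, and a wall-to-furniture gap needs roughly 40 cm checked against the robot's own width) reduce to simple threshold checks. The sketch below uses those example values; the per-cell data structure is an assumption made for the illustration.

```python
from dataclasses import dataclass

@dataclass
class CellInfo:
    # Hypothetical per-cell summary extracted from the spatial data.
    step_height_m: float         # height of a step relative to the floor
    overhead_clearance_m: float  # free height above the floor at this cell
    lateral_gap_m: float         # narrowest gap between obstacles at this cell

@dataclass
class RobotBody:
    height_m: float
    width_m: float

def is_movable(cell: CellInfo, robot: RobotBody) -> bool:
    if cell.step_height_m >= 0.01:                             # 1 cm step or more: impassable
        return False
    if cell.overhead_clearance_m < max(0.60, robot.height_m):  # needs headroom
        return False
    if cell.lateral_gap_m < max(0.40, robot.width_m):          # needs side clearance
        return False
    return True

print(is_movable(CellInfo(0.0, 0.8, 0.5), RobotBody(0.5, 0.3)))   # True
print(is_movable(CellInfo(0.02, 0.8, 0.5), RobotBody(0.5, 0.3)))  # False: 2 cm step
```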
 The spatial data generation unit 13 may set attribute information for a predetermined area in the space. Attribute information is information that defines movement conditions of the robot 2 for a predetermined area. A movement condition is, for example, a condition that defines the clearance from spatial elements within which the robot 2 can move. For example, when the normal movement condition under which the robot 2 can move is a clearance of 30 cm or more, attribute information can be set that exceptionally allows a clearance of 5 cm or more for a predetermined area. The movement conditions set in the attribute information may also include information that restricts the movement of the robot, such as a restriction on moving speed or a prohibition of entry. For example, attribute information that lowers the moving speed of the robot 2 may be set for an area with a small clearance or an area where people are present. The movement conditions set in the attribute information may also be determined by the floor material of the area. For example, the attribute information may set a change in the operation of the moving mechanism 29 (such as travel speed or travel means) depending on whether the floor is cushion flooring, wooden flooring, tatami, or carpet. The attribute information may also allow movement conditions to be set for a charging spot to which the robot 2 can move to charge, or for a step or the edge of a carpet where movement is restricted because the posture of the robot 2 becomes unstable. An area for which attribute information is set may be made recognizable to the user, for example by changing how it is displayed in the visualization data described later.
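The area attribute information described here amounts to a per-area override of the default movement conditions. A minimal sketch of such a record follows; the field names and example values are illustrative assumptions, not the format used by the spatial data generation unit 13.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AreaAttributes:
    name: str
    min_clearance_m: float = 0.30          # default clearance requirement
    max_speed_mps: Optional[float] = None  # None means the normal speed limit applies
    entry_allowed: bool = True
    floor_type: str = "flooring"           # e.g. "carpet", "tatami", "cushion floor"

# Example overrides: a tight gap with a relaxed clearance, a no-go room, a carpeted area.
areas = [
    AreaAttributes("hallway shelf gap", min_clearance_m=0.05, max_speed_mps=0.1),
    AreaAttributes("study", entry_allowed=False),
    AreaAttributes("living room rug", floor_type="carpet", max_speed_mps=0.2),
]

for a in areas:
    print(a)
```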
 The spatial data generation unit 13, for example, applies a Hough transform to the point cloud data generated by the point cloud data generation unit 12, extracts figures such as straight lines and curves common to the point cloud data, and generates spatial data from the contours of the spatial elements expressed by the extracted figures. The Hough transform is a coordinate transformation method that, when the point cloud data are treated as feature points, extracts the figure through which the largest number of feature points pass. Because point cloud data expresses the shapes of spatial elements such as furniture placed in a room as a cloud of points, it can be difficult for the user to determine what spatial element the point cloud data represents (for example, to recognize a table, a chair, or a wall). By Hough transforming the point cloud data, the spatial data generation unit 13 can express the contours of furniture and the like, making it easier for the user to identify the spatial elements. The spatial data generation unit 13 may also generate spatial data by converting the point cloud data generated by the point cloud data generation unit 12 into the basic shapes of spatial elements (for example, tables, chairs, and walls) recognized by image recognition. When a spatial element such as a table is recognized as a table by image recognition, the shape of the table can be accurately predicted from partial point cloud data of the spatial element (for example, the point cloud data obtained when the table is viewed from the front). By combining point cloud data and image recognition, the spatial data generation unit 13 can generate spatial data that accurately captures the spatial elements.
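As a reminder of how a Hough transform picks out a line shared by many points, here is a toy two-dimensional version with a coarse accumulator. It only illustrates the voting idea; the spatial data generation unit 13 works on three-dimensional point clouds and is not implemented this way.

```python
import math
from collections import Counter
from typing import List, Tuple

def hough_lines(points: List[Tuple[float, float]],
                theta_steps: int = 180, rho_res: float = 0.1):
    """Vote in (theta, rho) space; the strongest cell is the line that
    passes through (or near) the largest number of points."""
    acc = Counter()
    for x, y in points:
        for i in range(theta_steps):
            theta = math.pi * i / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(i, round(rho / rho_res))] += 1
    (i, rho_bin), votes = acc.most_common(1)[0]
    return math.pi * i / theta_steps, rho_bin * rho_res, votes

# Points roughly on the vertical line x = 1 (a "wall" seen from above), plus two outliers.
pts = [(1.0, y / 10.0) for y in range(10)] + [(0.3, 0.7), (2.2, 1.5)]
theta, rho, votes = hough_lines(pts)
print(round(theta, 2), round(rho, 2), votes)  # -> 0.0 1.0 10 (the line x = 1)
```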
 The spatial data generation unit 13 generates spatial data based on the point cloud data included within a predetermined range from the positions to which the robot 2 has moved. The predetermined range from the positions to which the robot 2 has moved includes the positions to which the robot 2 has actually moved, and may be, for example, a range within a distance of 30 cm from those positions. Since the point cloud data is generated based on images captured by the photographing unit 21 of the robot 2, a captured image may include spatial elements that are far from the robot 2. When a spatial element is far from the photographing unit 21, there may be portions that were not photographed, or ranges where the robot 2 cannot actually move because of obstacles that were not photographed. When spatial elements far from the photographing unit 21, such as a corridor, are included in a captured image, the spatial elements extracted from the feature points may be distorted. In addition, when the photographing distance is large, the spatial elements in the captured image become small, so the accuracy of the point cloud data may be low. The spatial data generation unit 13 may generate spatial data that does not include low-accuracy or distorted spatial elements by ignoring feature points that are far away. By deleting point cloud data outside the predetermined range from the positions to which the robot 2 has moved and then generating the spatial data, the spatial data generation unit 13 prevents enclaves where no data actually exists from appearing, does not include ranges where the robot 2 cannot move, and can generate spatial data with high accuracy. This also prevents enclave-like drawing in the visualization data generated from the spatial data, improving visibility.
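The pruning described above, keeping only point cloud data within a fixed distance of positions the robot has actually visited, can be pictured as follows. The 30 cm radius is the example value from the text; the flat 2-D data layout is an assumption for the illustration.

```python
from typing import List, Tuple

Point2D = Tuple[float, float]

def prune_far_points(points: List[Point2D], visited: List[Point2D],
                     radius_m: float = 0.30) -> List[Point2D]:
    """Keep only points that lie within radius_m of at least one
    position the robot has actually moved through."""
    r2 = radius_m ** 2
    kept = []
    for (px, py) in points:
        if any((px - vx) ** 2 + (py - vy) ** 2 <= r2 for (vx, vy) in visited):
            kept.append((px, py))
    return kept

visited = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]
cloud = [(0.1, 0.1), (1.0, 0.25), (3.0, 3.0)]   # the last point is a distant "enclave"
print(prune_far_points(cloud, visited))          # [(0.1, 0.1), (1.0, 0.25)]
```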
 When the marker recognition unit 22 recognizes a marker, the spatial data generation unit 13 sets a restriction range in the generated spatial data. By setting the restriction range in the spatial data, the restriction range can be visualized as part of the visualization data. When state information is acquired by the state information acquisition unit 24, the spatial data generation unit 13 sets the state information in the spatial data. By setting the state information in the spatial data, the state information can be made part of the visualization data.
 The visualization data generation unit 14 generates, based on the spatial data generated by the spatial data generation unit 13, visualization data in which the spatial elements contained in the space are visualized so that a person can identify them intuitively.
 In general, a robot has various sensors such as cameras and microphones and recognizes its surroundings by comprehensively judging the information obtained from those sensors. For the robot to move, it must recognize the various objects existing in the space and decide a movement route within the spatial data; however, the movement route may be inappropriate because an object could not be recognized correctly. Due to such misrecognition, the robot may, for example, judge that an obstacle is present and that it can move only within a narrow range, even though a person would consider the space sufficiently wide. When such a discrepancy in recognition arises between the person and the robot, the robot behaves contrary to the person's expectations and the person feels stress. To reduce this discrepancy, the autonomous behavior robot 1 in the present embodiment visualizes the spatial data representing its own recognition state and provides it to the person, and can perform the recognition process again for a location pointed out by the person.
 The spatial data is data containing the spatial elements recognized by the autonomous behavior robot 1, whereas the visualization data is data that allows the user to visually confirm the spatial elements recognized by the autonomous behavior robot 1. The spatial data may contain misrecognized spatial elements. Visualizing the spatial data makes it easier for a person to check the recognition state of the spatial elements in the autonomous behavior robot 1 (for example, whether misrecognition has occurred).
 The visualization data is data that can be displayed on a display device. The visualization data is, so to speak, a floor plan: the area enclosed by spatial elements recognized as walls contains spatial elements recognized as tables, chairs, sofas, and so on. The visualization data generation unit 14 generates the shapes of furniture and the like formed by the figures extracted by the Hough transform as visualization data expressed, for example, in RGB data. The spatial data generation unit 13 generates visualization data in which the drawing method of a plane is changed based on the three-dimensional orientation of the plane of a spatial element. The three-dimensional orientation of the plane of a spatial element is, for example, the normal direction of the plane formed by the figures obtained by Hough transforming the point cloud data generated by the point cloud data generation unit 12. The visualization data generation unit 14 generates visualization data in which the drawing method of the plane is changed according to the normal direction. The drawing method refers to, for example, color attributes such as hue, lightness, or saturation given to the plane, a pattern given to the plane, or a texture. For example, when the normal of the plane is vertical (the plane is horizontal), the visualization data generation unit 14 raises the lightness of the plane and draws it in a bright color. Conversely, when the normal of the plane is horizontal (the plane is vertical), the visualization data generation unit 14 lowers the lightness of the plane and draws it in a dark color. Changing the drawing method of planes makes it possible to express the shapes of furniture and the like three-dimensionally, so that the user can check those shapes easily. The visualization data may also include coordinate information in the visualization data (referred to as "visualization coordinate information") associated with the coordinate information of each spatial element contained in the spatial data. Because the visualization coordinate information is associated with the coordinate information, a point in the visualization coordinate information corresponds to a point in the actual space, and a surface in the visualization coordinate information corresponds to a surface in the actual space. Therefore, when the user specifies the position of a point in the visualization data, the corresponding position of that point in the actual room can be identified. A conversion function for transforming coordinate systems may be prepared so that the coordinate system of the visualization data and the coordinate system of the spatial data can be converted into each other. Of course, the coordinate system of the visualization data and the coordinate system of the actual space may also be made mutually convertible.
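 The mapping from plane orientation to lightness could, for example, look like the following sketch; the specific lightness values and function name are assumptions.

import numpy as np

def plane_lightness(normal, bright=0.9, dark=0.4):
    # Horizontal surfaces (normal pointing up, e.g. table tops and floors) are
    # drawn bright; vertical surfaces (walls, furniture sides) are drawn dark.
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    verticality = abs(n[2])  # 1.0 for a vertical normal, 0.0 for a horizontal normal
    return dark + (bright - dark) * verticality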
 The visualization data generation unit 14 generates the visualization data as three-dimensional (3D) data. The visualization data generation unit 14 may also generate the visualization data as two-dimensional (2D) data. Generating the visualization data in 3D makes it easier for the user to check the shapes of furniture and the like. The visualization data generation unit 14 may generate the visualization data in 3D when the spatial data generation unit 13 has generated sufficient data to do so. The visualization data generation unit 14 may also generate the 3D visualization data from a 3D viewpoint position (viewpoint height, viewpoint elevation/depression angle, and so on) specified by the user. Making the viewpoint position selectable makes it easier for the user to check the shapes of furniture and the like. In addition, for the walls and ceiling of a room, the visualization data generation unit 14 may generate visualization data in which only the far wall is colored and the near wall and ceiling are made transparent (not colored). Making the near wall transparent makes it easier for the user to check the shapes of furniture and the like placed beyond it (inside the room).
 The visualization data generation unit 14 generates visualization data to which color attributes corresponding to the captured image acquired by the captured image acquisition unit 111 are given. For example, when the captured image contains wood-grain furniture and a wood-grain color (for example, brown) is detected, the visualization data generation unit 14 generates visualization data in which a color close to the detected color is given to the extracted figure of the furniture. Giving color attributes corresponding to the captured image makes it easier for the user to confirm the type of the furniture or the like.
 The visualization data generation unit 14 generates visualization data in which the drawing method is changed between fixed objects, which do not move, and moving objects, which do. Fixed objects are, for example, room walls, steps, and fixed furniture. Moving objects are, for example, chairs, trash cans, and furniture with casters. Moving objects may also include temporary objects placed on the floor for a short time, such as luggage and bags. The drawing method refers to, for example, color attributes such as hue, lightness, or saturation given to a plane, a pattern given to a plane, or a texture.
 The classification of an object as fixed, moving, or temporary can be identified from the period during which it exists at a location. For example, the spatial data generation unit 13 identifies whether a spatial element is a fixed, moving, or temporary object based on changes over time in the point cloud data generated by the point cloud data generation unit 12, and generates the spatial data accordingly. From the difference between the spatial data generated at a first time and the spatial data generated at a second time, the spatial data generation unit 13 judges, for example, that a spatial element is a fixed object when the spatial element has not changed. The spatial data generation unit 13 may judge from the difference in the spatial data that a spatial element is a moving object when its position has changed, and that a spatial element is a temporary object when it has disappeared or newly appeared. The visualization data generation unit 14 changes the drawing method based on the classification identified by the spatial data generation unit 13. Changing the drawing method means, for example, color coding, adding hatching, or adding a predetermined mark. For example, fixed objects may be displayed in black, moving objects in blue, and temporary objects in yellow. The spatial data generation unit 13 generates the spatial data while identifying the classification of fixed, moving, and temporary objects, and the visualization data generation unit 14 may generate visualization data in which the drawing method is changed based on that classification. Visualization data in which the drawing method of spatial elements recognized by image recognition is changed may also be generated.
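 A minimal sketch of the classification by temporal difference described above, assuming each spatial element has already been matched between the two observation times; the labels follow the description above.

def classify_element(present_t1, present_t2, position_changed):
    # fixed: present at both times at the same position
    # moving: present at both times but at a different position
    # temporary: appeared or disappeared between the two observations
    if present_t1 and present_t2:
        return "moving" if position_changed else "fixed"
    return "temporary"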
 The visualization data generation unit 14 can generate visualization data for a space divided into a plurality of areas. For example, the visualization data generation unit 14 generates visualization data treating each space partitioned by walls, such as a living room, a bedroom, a dining room, or a corridor, as one room. Generating the visualization data for each room makes it possible, for example, to generate the spatial data or the visualization data separately for each room, which simplifies their generation. It also becomes possible to create spatial data and the like only for the areas in which the robot 2 may move. The visualization data providing unit 161 provides visualization data from which the user can select an area. The visualization data providing unit 161 may, for example, enlarge the visualization data of the area selected by the user, or provide detailed visualization data of that area.
 The imaging target recognition unit 15 performs image recognition of spatial elements based on the captured image acquired by the captured image acquisition unit 111. Recognition of a spatial element can be performed, for example, by using an image recognition engine that judges what the spatial element is based on image recognition results accumulated through machine learning. Image recognition of a spatial element can be based on, for example, the shape, color, or pattern of the spatial element. The imaging target recognition unit 15 may also perform image recognition of spatial elements by using an image recognition service provided, for example, by a cloud server (not shown). The visualization data generation unit 14 generates visualization data in which the drawing method is changed according to the spatial element recognized by the imaging target recognition unit 15. For example, when the recognized spatial element is a sofa, the visualization data generation unit 14 generates visualization data in which a texture with the feel of cloth is given to the spatial element. When the recognized spatial element is a wall, the visualization data generation unit 14 may generate visualization data to which a wallpaper color attribute (for example, white) is given. By performing such visualization processing, the user can intuitively grasp the recognition state of the space by the robot 2.
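 The change of drawing method per recognised class could be a simple lookup, as in the following sketch; the class names and texture identifiers are illustrative assumptions.

TEXTURE_BY_CLASS = {
    "sofa": "fabric",           # give recognised sofas a cloth-like texture
    "wall": "wallpaper_white",  # give recognised walls a wallpaper colour
    "table": "wood",
}

def drawing_texture(element_class):
    # Fall back to a neutral texture for classes without a dedicated entry.
    return TEXTURE_BY_CLASS.get(element_class, "plain_gray")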
 The second communication control unit 16 controls communication with the user terminal 3 owned by the user. The user terminal 3 is, for example, a smartphone, a tablet PC, a notebook PC, or a desktop PC. Any communication method with the user terminal 3 may be used; for example, wireless LAN, short-range wireless communication such as Bluetooth (registered trademark) or infrared communication, or wired communication. The visualization data providing unit 161, the designation acquisition unit 162, and the execution information acquisition unit 163 included in the second communication control unit 16 each communicate with the user terminal 3 using the communication function controlled by the second communication control unit 16.
 The visualization data providing unit 161 provides the visualization data generated by the visualization data generation unit 14 to the user terminal 3. The visualization data providing unit 161 is, for example, a Web server and provides the visualization data as a Web page to the browser of the user terminal 3. The visualization data providing unit 161 may provide the visualization data to a plurality of user terminals 3. By viewing the visualization data displayed on the user terminal 3, the user can confirm the range in which the robot 2 can move as a 2D or 3D display. In the visualization data, the shapes of furniture and the like are drawn by a predetermined drawing method. By operating the user terminal 3, the user can, for example, switch between 2D and 3D display, zoom the visualization data in or out, or move the viewpoint in the 3D display.
 The user can view the visualization data displayed on the user terminal 3 and check the generation state of the spatial data and the attribute information of each area. The user can designate, within the visualization data, a region for which spatial data has not been generated and instruct the creation of spatial data. Furthermore, if there is a region where the spatial data appears inaccurate, for example because the shape of a spatial element such as furniture looks unnatural, the user can designate that region and instruct the regeneration of the spatial data. As described above, because the visualization coordinate information of the visualization data is associated with the coordinate information of the spatial data, the region of the visualization data designated for regeneration by the user can be uniquely identified as a region of the spatial data. The visualization data is regenerated by the visualization data generation unit 14 based on the regenerated spatial data and is provided from the visualization data providing unit 161. Note that the generation state of the spatial data may not change, for example when a spatial element is still misrecognized in the regenerated visualization data. In that case, the user may instruct the generation of the spatial data while changing the operation parameters of the robot 2. The operation parameters are, for example, the imaging conditions of the imaging unit 21 of the robot 2 (such as exposure amount or shutter speed), the sensitivity of sensors (not shown), and the clearance conditions for permitting the movement of the robot 2. The operation parameters may be included in the spatial data, for example, as attribute information of an area.
 The visualization data generation unit 14 generates, for example, visualization data that includes the display of a button for instructing the creation (including re-creation) of spatial data. When the user operates the displayed button, the user terminal 3 can transmit an instruction to create spatial data to the autonomous behavior robot 1. The instruction to create spatial data transmitted from the user terminal 3 is acquired by the designation acquisition unit 162.
 The designation acquisition unit 162 acquires an instruction to create spatial data for the region designated by the user based on the visualization data provided by the visualization data providing unit 161. The designation acquisition unit 162 may also acquire an instruction to set (including change) the attribute information of an area. In addition, the designation acquisition unit 162 acquires the position of the region and the direction from which the robot should approach that region, that is, the direction in which it should capture images. The creation instruction can be acquired, for example, through operation of the Web page provided by the visualization data providing unit 161. This allows the user to grasp how the robot 2 recognizes the space and, depending on the recognition state, instruct the robot 2 to redo the recognition process.
 [Additional functions]
 The state information acquisition unit 24 acquires state information relating to the state of the notification target person searched for by the search unit 25. The state of the notification target person is the state of the person's activity, for example sleeping, being awake, sitting, standing, walking, or working such as cooking or cleaning. The health condition or mental state of the notification target person may also be acquired as the state of the notification target person. The state information acquisition unit 24 determines the state of the notification target person based on, for example, an image of the person captured by the imaging unit 21. The state information acquisition unit 24 may also determine the state of the notification target person based on the person's voice collected by a microphone (not shown), the person's heat distribution measured by a radiation thermometer, the illuminance of the room measured by an illuminometer, the person's movement detected by a proximity sensor, the heart rate acquired from a heart rate monitor worn by the person, and so on. For example, the state information acquisition unit 24 may determine the sleep state of the notification target person from these pieces of information. The sleep state is, for example, a state such as REM sleep, non-REM sleep, or the depth of sleep, which can be measured from a person's heart rate, body movement, and the like.
 For example, when the state information acquisition unit 24 determines from the captured image of the notification target person taken by the imaging unit 21, the person's heat distribution, or the person's movement detected by the proximity sensor that the movement is small over a predetermined time and range of motion, it determines that the notification target person is sleeping. The state information acquisition unit 24 may determine that the notification target person is awake when the person's amount of movement over a predetermined time and range of motion (for example, the integrated or average amount of movement per unit time) is larger than a predetermined amount of movement. The predetermined amount of movement may be, for example, a value determined by experiment. The state information acquisition unit 24 may determine that the notification target person is sitting from the person's posture or from the positional relationship between the person and a spatial element such as a chair, and may determine that the person is standing from the person's posture. When the position of the notification target person in the space is changing, the state information acquisition unit 24 may determine that the person is walking. The state information acquisition unit 24 may also determine the content of the person's work, and that the person is working, by comparing the person's movements in the captured image with previously learned movement patterns.
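 A minimal sketch of the sleeping/awake judgment from the amount of movement over a predetermined window; the threshold value and function name are assumptions to be determined, for example, by experiment.

def estimate_activity_state(displacements_m, awake_threshold=0.05):
    # displacements_m: movement observed per sample (metres) over the window.
    mean_motion = sum(displacements_m) / len(displacements_m)
    return "awake" if mean_motion > awake_threshold else "sleeping"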
 The state information acquisition unit 24 may also determine the state of the notification target person based on the lighting state of the illumination or the open/closed state of a door. For example, when the state information acquisition unit 24 determines from the room illuminance measured by the illuminometer that the lights have been turned off, it may determine that the notification target person is sleeping.
 The search unit 25 searches for the notification target person based on the execution information acquired by the execution information acquisition unit 163. The notification operation execution unit 26 executes a notification operation, based on the execution information, for the notification target person found by the search unit 25.
 The execution information is information for executing a notification operation that conveys information to be reported to the notification target person, and is information set in the autonomous behavior robot 1 by the user. The execution information includes, for example, information on the notification target person for identifying the person for whom the notification operation is to be executed, information on the place where the person is to be searched for, information on the notification operation, and information on the time at which the notification operation is to be executed. The execution information is, for example, set in advance by the user and provided to the execution information acquisition unit 163.
 <Information on the notification target person>
 The information on the notification target person is information for identifying the notification target person, for example the person's name, ID (identification), information indicating physical characteristics, belongings, clothing, or intimacy with the robot 2 (described later). The information indicating the physical characteristics of the notification target person is, for example, information for recognizing the person's face (face recognition), information for recognizing the person's fingerprint (fingerprint recognition), or information for recognizing the person's build (shape recognition). The information on the person's belongings or clothing is, for example, information on a wireless tag owned by the person or information indicating the characteristics of the person's clothing. The search unit 25 can identify the notification target person by judging, based on this information, whether a person is the notification target person. For example, when a plurality of people are present in one room, the search unit 25 can identify the notification target person based on the information on the notification target person. The information on the notification target person may, for example, be stored in advance in the autonomous behavior robot 1 together with an ID identifying the person, and only the ID may be specified in the execution information.
 The execution information may include information on one or more notification target persons. By including information on a plurality of notification target persons in the execution information, the robot 2 can execute notification operations for the plurality of notification targets in sequence. When a plurality of notification target persons are included in the information on the notification target persons, the notification operation set in advance for each person is executed. An execution order (priority) of the notification operations for the plurality of notification target persons may also be set in the execution information.
 <Information on the search location>
 The information on the location where the notification target person is to be searched for (the search location) is information on a place where the notification target person is expected to be (location information), for example a destination to which the robot 2 moves in order to search for the person. The location information is, for example, position information indicating a point, line, or range in the space, or information on a room whose position information in the space has been registered in advance. The search unit 25 calculates a movement route for the movement mechanism 29 based on the location information included in the execution information. The movement route can be calculated from the current position of the robot 2 and the search location. For example, the search unit 25 can store in advance the range within which the movement mechanism 29 can move, and calculate, within that movable range, the movement route with the shortest distance from the current position to the search location. The search unit 25 may include a movement speed along the route when calculating the movement route. For example, the search unit 25 can calculate the movement route so that the robot moves at different speeds in a corridor and inside a room. The movement control unit 23 controls the movement mechanism 29 based on the movement route calculated by the search unit 25 to move the robot 2.
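 As one way to realise the shortest-distance route within the movable range, the following sketch runs a breadth-first search over an occupancy grid; the grid representation is an assumption, and restricted ranges or speed limits (described next) would be handled by removing or weighting cells.

from collections import deque

def shortest_route(grid, start, goal):
    # grid[r][c] is True when the cell lies within the robot's movable range.
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] and nxt not in came_from):
                came_from[nxt] = cell
                queue.append(nxt)
    return None  # the search location is unreachable within the movable range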
 The search unit 25 may also calculate the movement route based on a movement restriction range in which movement is restricted by a marker recognized by the marker recognition unit 22. For example, when a no-entry range is set based on a marker, the search unit 25 calculates the movement route so as to avoid the no-entry range. When the movement speed is restricted based on a marker, the search unit 25 may calculate the movement route so that the movement, taking the speed limit into account, takes the shortest time.
 The location information may be specified by the user. For example, the user specifies the location information by designating a search location on a map displayed on the user terminal 3 operated by the user. The execution information acquisition unit 163 can acquire the location information specified by the user from the user terminal 3. The search unit 25 calculates a movement route for the movement mechanism 29 based on the location information specified by the user from the user terminal 3. The search unit 25 calculates, for example, the movement route from the home position, to which the robot 2 returns for charging, to the search location. When the calculated movement route contains a marker restricting movement, stairs, or the like, the search unit 25 may cause an alert to be displayed on the user terminal 3. That is, the search unit 25 may calculate the movement route when the robot 2 moves to the search location (immediately before or during the movement), or may calculate the movement route when the user sets the execution information.
 The execution information may include information on one or more search locations. By including information on a plurality of search locations in the execution information, the robot 2 can, for example, search for the notification target person at another specified search location when the person cannot be found at the first specified search location.
 <Information on the notification operation>
 The information on the notification operation is information indicating the content of the notification operation to be executed for the notification target person. The notification operation execution unit 26 executes the notification operation for the notification target person found by the search unit 25, based on the information on the notification operation. The notification operation is an operation of reporting notification information to the notification target person using the notification unit 27. That is, the notification operation execution unit 26 can execute the notification operation via the notification unit 27. The information on the notification operation includes time information specifying when the notification operation is to be executed. The search unit 25 searches for the notification target person according to the time information.
 The notification unit 27 is an output device such as a speaker, a display, or an actuator. The speaker reports information to the hearing of the notification target person by sound (including voice). The display is, for example, a screen or a light, and reports information to the sight of the notification target person by the information it displays (characters, images, light, and so on). The actuator is a movable part such as a robot hand, a vibration generator, or a compressed-air output valve, and reports information to the sense of touch of the notification target person. The notification unit 27 may also report information to the sense of smell or taste of the notification target person. The notification operation is, for example, output of sound from the speaker to the notification target person, output of display information from the display, or contact with the notification target person by the robot hand. The notification operation may be a combination of these operations. The notification operation can be specified by the user. The notification operation may also specify the purpose of the notification, such as a "wake-up operation" or a "time notification operation". The notification operation execution unit 26 may execute the notification operation according to the information on the notification target person. For example, when how easily the notification target person wakes up is stored as information on the person, the notification operation execution unit 26 executes the notification operation according to that information. How easily the person wakes up can be evaluated from the time between executing the notification operation and the person's state becoming something other than the sleep state. For example, when it is stored that the notification target person wakes up with difficulty (for example, the average time from execution of the notification operation until the person's state becomes something other than the sleep state is larger than a wake-up threshold), the notification operation execution unit 26 may execute the notification operation with a sound at or above a predetermined volume. Instead of or in addition to increasing the volume, the notification operation execution unit 26 may control the sound so that it is output for a longer time, may control the type of sound so that it becomes a first type of sound, or may control the rate of volume increase per unit time so that it is relatively large. Conversely, when it is stored that the notification target person wakes up easily (for example, the average time from execution of the notification operation until the person's state becomes something other than the sleep state is smaller than the wake-up threshold), the notification operation execution unit 26 may execute the notification operation with a sound at or below a predetermined volume so as not to startle the person. It may also control the sound so that it is output for a shorter time, control the type of sound so that it becomes a second type of sound different from the first type, or control the rate of volume increase per unit time so that it is relatively small. The notification operation execution unit 26 may also change the notification operation according to the intimacy with the notification target person (described later), or may change it based on past data.
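 A minimal sketch of how the stored wake-up tendency could select the sound parameters described above; the threshold, volumes, durations, and sound labels are illustrative assumptions.

def wake_up_sound_profile(avg_seconds_to_wake, wake_threshold_s=300):
    # A person who is slow to wake gets a louder, longer, steeply rising sound;
    # a person who wakes easily gets a quieter, shorter, gently rising sound.
    if avg_seconds_to_wake > wake_threshold_s:
        return {"volume": 0.9, "duration_s": 120, "ramp": "steep", "sound": "type_1"}
    return {"volume": 0.4, "duration_s": 30, "ramp": "gentle", "sound": "type_2"}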
 [Wake-up operation]
 The "wake-up operation" is a notification operation for waking a sleeping notification target person based on the time information. In the wake-up notification operation, the notification operation execution unit 26 wakes the notification target person by outputting a wake-up sound from the speaker of the notification unit 27, displaying the current time on the display, or touching the person with the robot arm. The notification operation execution unit 26 executes the wake-up operation when the notification target person found by the search unit 25 is sleeping. Whether the notification target person is sleeping can be determined by the state information acquisition unit 24. The notification operation execution unit 26 may repeat the wake-up operation until the notification target person gets up. Whether the notification target person has gotten up can be determined by the state information acquisition unit 24. The notification target person getting up may also be taken to mean that the person does not fall back asleep (go back to sleep after once getting up). When the state information acquisition unit 24 determines that the person has fallen back asleep, the notification operation execution unit 26 may repeat the wake-up operation.
 [Time notification operation]
 The "time notification operation" is an operation of notifying the notification target person of a time specified in advance. The time notification operation for the notification target person is, for example, an operation of notifying the person of the start time of a television broadcast. The notification operation execution unit 26 may output, when a preset time arrives, a voice prompting the notification target person to watch the television. The notification operation execution unit 26 may also turn on the television when the preset time arrives. When the notification target person is not watching the television, the notification operation execution unit 26 may record the television program. Whether the notification target person is watching the television can be determined by the state information acquisition unit 24 judging whether the person is sitting at the place where the television is watched.
 The time notification operation for the notification target person may also be an operation of notifying the person of the time to leave home. The notification operation execution unit 26 may output, when a preset time arrives, a voice prompting the notification target person to go out. When the notification target person goes out after the preset time, the notification operation execution unit 26 may also lock up the house. Whether the notification target person has gone out can be determined by the state information acquisition unit 24 judging that the person has left through the entrance.
 <Completion condition of the notification operation>
 The information on the notification operation may include a completion condition of the notification operation. The completion condition of the notification operation is a condition under which the notification operation for the notification target is regarded as completed. The completion condition may also include the operation to be performed when the completion condition is not satisfied. For example, when the completion condition is not satisfied, the notification operation execution unit 26 may continue or repeat the notification operation until the completion condition is satisfied. Including a completion condition in the information on the notification operation makes it possible to execute the notification operation according to the situation of the notification target person. The completion condition may be provided to the autonomous behavior robot 1 as part of the information on the notification operation, or may be set in the autonomous behavior robot 1 in advance. Examples of completion conditions for the case where the notification operation is a wake-up operation are given below.
 For example, the completion condition may be "it is determined that the notification target person has gotten up at the search location". When it is determined that the notification target person has gotten up at the search location, the purpose of the notification operation has been achieved, so the notification operation can be completed. Whether the notification target person has gotten up can be determined by the state information acquisition unit 24 as described above.
 The completion condition may also be "the notification target person cannot be found at the search location". When the notification target person cannot be found at the search location, the person can be regarded as already having gotten up, so the notification operation can be completed. Whether the notification target person could be found at the search location can be determined by the state information acquisition unit 24.
 The completion condition may also be "the notification target person is found at a place other than the search location". A place other than the search location is, for example, a place along the movement route of the robot 2. When the notification target person is found at a place other than the search location, the person can be regarded as already having gotten up, so the notification operation can be completed. The notification target person can be detected, for example, by the state information acquisition unit 24 performing image recognition of the person in the captured image, or voice recognition of the person speaking to the robot 2.
 The completion condition may also be "a pleasant act by the notification target person is detected". A pleasant act is a predetermined type of act by the notification target person, for example the person stroking the robot 2, thanking the robot 2, or greeting it. When a pleasant act by the notification target person is detected, the person can be regarded as already having gotten up, so the notification operation can be completed. The detection of a pleasant act by the notification target person can be performed, for example, by the state information acquisition unit 24.
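 The example completion conditions above could be combined as in the following sketch; the flags are hypothetical outputs of the state information acquisition unit 24, not part of the embodiment.

def wake_up_operation_completed(state):
    return bool(
        state.get("got_up_at_search_location")        # target got up where searched
        or state.get("not_found_at_search_location")  # target was not at the search place
        or state.get("found_elsewhere")               # target was met along the route
        or state.get("pleasant_act_detected")         # target stroked / thanked the robot
    )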
 <Responsibility level of the notification operation>
 The information on the notification operation may include a responsibility level of the notification operation. The responsibility level of the notification operation is information indicating the importance of the notification operation to the notification target person. For example, when the responsibility level is high, the notification operation execution unit 26 repeatedly executes the notification operation until the state of the notification target person becomes a predetermined state. For example, when the responsibility level of a wake-up operation is high, the notification operation execution unit 26 continues the wake-up operation until the notification target person gets up. When the responsibility level is high, the robot 2 may start searching for the notification target person early so that the wake-up operation can be executed reliably at the set time. When the responsibility level is a medium level lower than the high level, the notification operation execution unit 26 may execute the wake-up operation a preset number of times. When the responsibility level is medium, the robot 2 may start searching for the notification target person a predetermined time before the set time. When the responsibility level is a low level lower than the medium level, the notification operation execution unit 26 may execute the wake-up operation only once. When the responsibility level is low, the robot 2 may start searching for the notification target person after the execution of other notification operations (for example, wake-up operations for other notification targets) has finished.
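 A minimal sketch of a repetition policy derived from the responsibility level; the concrete numbers and field names are assumptions.

def wake_up_policy(responsibility_level):
    # high: repeat until awake and begin the search early;
    # medium: a preset number of repeats; low: a single attempt.
    if responsibility_level == "high":
        return {"repeat_until_awake": True, "max_repeats": None, "start_search_early": True}
    if responsibility_level == "medium":
        return {"repeat_until_awake": False, "max_repeats": 3, "start_search_early": False}
    return {"repeat_until_awake": False, "max_repeats": 1, "start_search_early": False}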
 <Intimacy between the notification target person and the robot>
 The intimacy between the notification target person and the robot 2 is information that quantifies the subjective feelings the notification target person has toward the robot 2. For example, the notification target person may feel a sense of familiarity toward the robot 2 because of the robot's shape (for example, a humanoid or animal-like shape), the voice it outputs, its movements, and so on. The notification target person may also feel familiarity based on past experience such as interactions with the robot 2. The intimacy is expressed, based on the feelings of the notification target person, for example as a percentage (0 to 100%) or in multiple levels (S, A, B, and C levels). The autonomous behavior robot 1 may store the intimacy and further update it according to the past actions and opinions of the notification target person. For example, the autonomous behavior robot 1 may update the intimacy upward when the notification target performs a pleasant act toward the robot 2.
 When there are a plurality of notification target persons, the intimacy may be set for each notification target person. When there are a plurality of robots 2, the intimacy may be set for each robot 2. For example, when there is one notification target person A and two robots 2a and 2b, each of the robots 2a and 2b can set its own intimacy with the notification target person A.
 The notification operation based on the execution information may be executed according to the intimacy between the notification target person and the robot 2. For example, in the notification operation for the notification target person A described above, the robot 2a with high intimacy may execute a notification operation that includes contact with the person, while the robot 2b with low intimacy executes only a notification operation that outputs sound to the person. When a notification operation is set for the robot 2b with low intimacy, the robot 2a with high intimacy may execute the notification operation jointly with the robot 2b or in its place. When the intimacy of both the robot 2a and the robot 2b is high, the notification operation may be executed so that the robots 2a and 2b act jointly or compete with each other. Executing the notification operation according to the intimacy makes it possible to execute a notification operation suited to the notification target person.
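 The selection of notification actions from intimacy could, for instance, be sketched as follows; the percentage thresholds and action names are assumptions.

def select_notification_actions(intimacy_percent):
    # Only a robot with high intimacy includes touch in its notification
    # operation; a low-intimacy robot limits itself to sound output.
    actions = ["sound"]
    if intimacy_percent >= 70:
        actions += ["display", "touch"]
    elif intimacy_percent >= 40:
        actions += ["display"]
    return actions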
The execution information acquisition unit 163 acquires execution information for performing a notification operation that notifies the notification target person of information. The execution information acquisition unit 163 can acquire the execution information from the user terminal 3. The execution information acquisition unit 163 may acquire the execution information by receiving execution information transmitted from the user terminal 3 in response to an operation by the user of the user terminal 3, or by downloading execution information stored in the storage unit of the user terminal 3. The execution information acquisition unit 163 may also acquire the execution information from a data server (not shown).
The execution information acquisition unit 163 can also acquire, from the user terminal 3, location information designated by the user operating a map displayed on the user terminal 3 on the basis of the visualization data provided by the visualization data providing unit 161.
As described above, FIG. 1 illustrates the case where the autonomous behavior robot 1 is configured with the data providing device 10 and the robot 2 as separate units, but the functions of the data providing device 10 may be included in the functions of the robot 2. For example, the robot 2 may include all of the functions of the data providing device 10. The data providing device 10 may, for example, temporarily take over functions when the processing capability of the robot 2 is insufficient.
In the present embodiment, "acquisition" may mean that the acquiring entity actively acquires the information or passively receives it. For example, the designation acquisition unit 162 may acquire an instruction to create spatial data by receiving the instruction transmitted by the user from the user terminal 3, or by reading, from a storage area (not shown), an instruction to create spatial data that the user has stored there.
The functional units of the data providing device 10, namely the first communication control unit 11, the point cloud data generation unit 12, the spatial data generation unit 13, the visualization data generation unit 14, the imaging target recognition unit 15, the second communication control unit 16, the captured image acquisition unit 111, the spatial data providing unit 112, the instruction unit 113, the visualization data providing unit 161, the designation acquisition unit 162, and the execution information acquisition unit 163, are examples of the functions of the autonomous behavior robot 1 in the present embodiment and do not limit the functions that the autonomous behavior robot 1 has. For example, the autonomous behavior robot 1 does not need to have all of the functional units of the data providing device 10 and may have only some of them; it may also have functional units other than those described above. Likewise, the functional units of the robot 2, namely the marker recognition unit 22, the movement control unit 23, the state information acquisition unit 24, the search unit 25, and the notification operation execution unit 26, are examples of the functions of the autonomous behavior robot 1 in the present embodiment and do not limit those functions. For example, the autonomous behavior robot 1 does not need to have all of the functional units of the robot 2 and may have only some of them.
As described above, the functional units of the autonomous behavior robot 1 have been described as being realized by software. However, at least one of the above functions of the autonomous behavior robot 1 may be realized by hardware.
Any one of the above functions of the autonomous behavior robot 1 may be divided into a plurality of functions, and any two or more of the above functions may be integrated into one function. That is, FIG. 1 represents the functions of the autonomous behavior robot 1 as functional blocks and does not indicate, for example, that each function is implemented as a separate program file.
The autonomous behavior robot 1 may be a device realized in a single housing, or a system realized by a plurality of devices connected via a network or the like. For example, the autonomous behavior robot 1 may realize some or all of its functions with a virtual device such as a cloud service provided by a cloud computing system. That is, the autonomous behavior robot 1 may realize at least one of the above functions in another device. The autonomous behavior robot 1 may be a general-purpose computer such as a tablet PC, or a dedicated device with limited functions.
The autonomous behavior robot 1 may also realize some or all of its functions in the robot 2 or the user terminal 3.
Next, the hardware configuration of the autonomous behavior robot 1 (the control unit of the robot 2) will be described with reference to FIG. 2. FIG. 2 is a block diagram illustrating an example of the hardware configuration of the autonomous behavior robot 1 according to the embodiment.
The autonomous behavior robot 1 has a CPU (Central Processing Unit) 101, a RAM (Random Access Memory) 102, a ROM (Read Only Memory) 103, a touch panel 104, a communication I/F (Interface) 105, a sensor 106, and a clock 107. The autonomous behavior robot 1 is a device that executes the autonomous behavior robot control program described with reference to FIG. 1.
The CPU 101 controls the autonomous behavior robot 1 by executing the autonomous behavior robot control program stored in the RAM 102 or the ROM 103. The autonomous behavior robot control program is acquired, for example, from a recording medium on which the program is recorded or from a program distribution server via a network, installed in the ROM 103, and read and executed by the CPU 101.
The touch panel 104 has an operation input function and a display function (operation display function). The touch panel 104 enables the user of the autonomous behavior robot 1 to perform operation input with a fingertip, a touch pen, or the like. Although the present embodiment describes the case where the autonomous behavior robot 1 uses the touch panel 104 having the operation display function, the autonomous behavior robot 1 may instead have a separate display device with a display function and a separate operation input device with an operation input function. In that case, the display screen of the touch panel 104 corresponds to the display screen of the display device, and the operation of the touch panel 104 corresponds to the operation of the operation input device. The touch panel 104 may also be realized in various forms such as a head-mounted, glasses-type, or wristwatch-type display.
The communication I/F 105 is an interface for communication. The communication I/F 105 performs, for example, wireless LAN communication, wired LAN communication, or short-range wireless communication such as infrared communication. Although FIG. 2 shows only the communication I/F 105 as the communication interface, the autonomous behavior robot 1 may have a separate communication interface for each of a plurality of communication methods. The communication I/F 105 may also communicate with a control unit (not shown) that controls the imaging unit 21 or with a control unit that controls the moving mechanism 29.
The sensor 106 is hardware such as the camera of the imaging unit 21, a TOF camera or thermo camera, a microphone, a thermometer, an illuminometer, or a proximity sensor. The data acquired by this hardware is stored in the RAM 102 and processed by the CPU 101.
The clock 107 is an internal clock for acquiring time information. The time information acquired by the clock 107 is used, for example, to check the time at which a notification operation is to be performed. The microphone 108 collects surrounding sounds, for example, the voice of the notification target person.
The speaker 109a, the display 109b, and the actuator 109c are concrete hardware examples of the notification unit 27 described with reference to FIG. 1. The speaker 109a outputs sound, the display 109b outputs display data, and the actuator 109c is a movable part. The notification unit 27 may have hardware other than the speaker 109a, the display 109b, and the actuator 109c.
Next, the operation of the robot control program for providing visualization data will be described with reference to FIG. 3. FIG. 3 is a flowchart illustrating a first example of the operation of the robot control program according to the embodiment. In the following description of the flowcharts, the operations are described as being performed by the autonomous behavior robot 1, but each operation is executed by the corresponding function of the autonomous behavior robot 1 described above.
In FIG. 3, the autonomous behavior robot 1 determines whether a captured image has been acquired (step S11). Whether a captured image has been acquired can be determined by whether the captured image acquisition unit 111 has acquired a captured image from the robot 2. This determination is made in units of processing of captured images. For example, when the captured image is a moving image, the moving image is transmitted continuously from the robot 2, so whether a captured image has been acquired can be determined by whether the number of frames or the amount of data of the acquired moving image has reached a predetermined value. The captured image may be acquired with the mobile robot taking the initiative in transmitting the captured image, or with the captured image acquisition unit 111 taking the initiative in retrieving the captured image from the mobile robot. If it is determined that a captured image has not been acquired (step S11: NO), the autonomous behavior robot 1 repeats the process of step S11 and waits for a captured image to be acquired.
On the other hand, if it is determined that a captured image has been acquired (step S11: YES), the autonomous behavior robot 1 generates point cloud data (step S12). The point cloud data can be generated by the point cloud data generation unit 12, for example, by detecting points with large changes in luminance in the captured image as feature points and giving three-dimensional coordinates to the detected feature points. The feature points may be detected, for example, by differentiating the captured image to detect changes in gradation and extracting the portions where the change in gradation is large. Coordinates may be assigned to a feature point by detecting the same feature point in images captured from different angles. The determination in step S11 of whether a captured image has been acquired can therefore be made based on whether captured images taken from a plurality of directions have been acquired.
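A minimal sketch of the feature-point detection described above, assuming a simple gradient threshold on a luminance image; the threshold value and the use of NumPy are illustrative choices, and the coordinate assignment by matching the same feature point across viewpoints is omitted.

```python
# Minimal sketch (assumed, not the patented implementation): detecting feature
# points as pixels whose luminance gradient magnitude exceeds a threshold.
import numpy as np


def detect_feature_points(image: np.ndarray, threshold: float = 30.0):
    """image: 2-D array of luminance values. Returns (row, col) feature points."""
    gy, gx = np.gradient(image.astype(float))       # differentiate the image
    magnitude = np.hypot(gx, gy)                     # size of the gradation change
    rows, cols = np.nonzero(magnitude > threshold)   # keep large-change pixels only
    return list(zip(rows.tolist(), cols.tolist()))


# toy example: a bright square on a dark background yields feature points
# along the edges of the square
img = np.zeros((8, 8))
img[2:6, 2:6] = 255
print(detect_feature_points(img)[:5])
```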
After executing the process of step S12, the autonomous behavior robot 1 generates spatial data and recognizes markers (step S13). The spatial data can be generated by the spatial data generation unit 13, for example, by applying a Hough transform to the point cloud data. The details of step S13 are described with reference to FIG. 4.
After executing the process of step S13, the autonomous behavior robot 1 provides the generated spatial data to the robot 2 (step S14). The spatial data may be provided to the robot 2 each time it is generated, as shown in FIG. 3, or asynchronously with the processing shown in steps S11 to S18. The robot 2 provided with the spatial data can grasp its movable range based on the spatial data.
After executing the process of step S14, the autonomous behavior robot 1 determines whether to recognize spatial elements (step S15). Whether to recognize spatial elements can be determined, for example, by a setting made in the imaging target recognition unit 15. Even when it is determined that spatial elements are to be recognized, if the recognition fails, it may be determined that spatial elements are not recognized.
If it is determined that spatial elements are to be recognized (step S15: YES), the autonomous behavior robot 1 generates first visualization data (step S16). The first visualization data can be generated by the visualization data generation unit 14. The first visualization data is visualization data generated after the imaging target recognition unit 15 has recognized spatial elements. For example, when the imaging target recognition unit 15 determines that a spatial element is a table, the visualization data generation unit 14 can generate visualization data that treats the top of the table as flat even if the top has not been captured and has no point cloud data. Similarly, when a spatial element is determined to be a wall, the visualization data generation unit 14 can generate visualization data that treats the uncaptured portions as flat surfaces.
If it is determined that spatial elements are not to be recognized (step S15: NO), the autonomous behavior robot 1 generates second visualization data (step S17). The second visualization data can be generated by the visualization data generation unit 14. The second visualization data is visualization data generated without the imaging target recognition unit 15 recognizing spatial elements, that is, generated only from the point cloud data and the spatial data obtained from the captured images. By not performing the recognition of spatial elements, the autonomous behavior robot 1 can reduce its processing load.
After executing the process of step S16 or step S17, the autonomous behavior robot 1 provides the visualization data (step S18). The visualization data is provided by the visualization data providing unit 161 supplying the visualization data generated by the visualization data generation unit 14 to the user terminal 3. The autonomous behavior robot 1 may generate and provide the visualization data in response to a request from the user terminal 3, for example. After executing the process of step S18, the autonomous behavior robot 1 ends the operation shown in the flowchart.
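The following sketch mirrors the overall flow of FIG. 3 (captured images, point cloud, spatial data, then first or second visualization data). All classes and function bodies are illustrative stand-ins so that the example runs on its own; none of them are taken from the specification.

```python
# Minimal sketch (assumed control flow, mirroring FIG. 3):
# captured images -> point cloud -> spatial data -> visualization data.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Visualization:
    kind: str            # "first" (with recognized elements) or "second"
    planes: List[str]


def generate_point_cloud(images: List[str]) -> List[Tuple[float, float, float]]:
    # stand-in for feature-point detection and coordinate assignment (step S12)
    return [(float(i), 0.0, 0.0) for i, _ in enumerate(images)]


def generate_spatial_data(points) -> List[str]:
    # stand-in for the Hough transform that extracts planes (step S13)
    return ["floor"] if points else []


def build_visualization(space: List[str], elements: List[str]) -> Visualization:
    # steps S15-S17: with recognized elements -> first data, otherwise second data
    if elements:
        return Visualization("first", space + [f"{e} (completed surface)" for e in elements])
    return Visualization("second", space)


images = ["frame-001", "frame-002"]
space = generate_spatial_data(generate_point_cloud(images))
print(build_visualization(space, elements=["table"]))   # first visualization data
print(build_visualization(space, elements=[]))          # second visualization data
```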
Next, the operation of the robot control program for generating spatial data will be described with reference to FIG. 4. FIG. 4 is a flowchart illustrating a second example of the operation of the robot control program according to the embodiment.
In FIG. 4, the autonomous behavior robot 1 generates spatial data (step S131). The spatial data can be generated by the spatial data generation unit 13, for example, by applying a Hough transform to the point cloud data. After executing step S131, the autonomous behavior robot 1 determines whether a marker has been recognized (step S132). Whether a marker has been recognized can be determined by whether the marker recognition unit 22 has recognized a marker image in the image captured by the imaging unit 21. The robot 2 can notify the data providing device 10 of the marker recognition result.
If it is determined that a marker has been recognized (step S132: YES), the autonomous behavior robot 1 sets, in the spatial data generated in step S131, a restricted range in which movement is restricted (step S133).
After executing the process of step S133, or if it is determined that a marker has not been recognized (step S132: NO), the autonomous behavior robot 1 ends the operation of generating the provided data in step S13 shown in the flowchart.
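A minimal sketch of steps S131 to S133, assuming the restricted range is a square region around the recognized marker; the data shapes and the radius are illustrative assumptions.

```python
# Minimal sketch (assumed, not the patented implementation): attaching a
# movement-restricted range to the spatial data when a marker is recognized.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class SpatialData:
    planes: List[str]
    restricted_ranges: List[Tuple[float, float, float, float]] = field(default_factory=list)


def apply_marker(space: SpatialData,
                 marker_position: Optional[Tuple[float, float]],
                 radius: float = 1.0) -> SpatialData:
    # steps S132-S133: if a marker was recognized, restrict a square area around it
    if marker_position is not None:
        x, y = marker_position
        space.restricted_ranges.append((x - radius, y - radius, x + radius, y + radius))
    return space


space = SpatialData(planes=["floor", "wall"])
print(apply_marker(space, marker_position=(2.0, 3.0)).restricted_ranges)
```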
Next, the notification operation of the robot control program will be described with reference to FIG. 5. FIG. 5 is a flowchart of the autonomous behavior robot control program according to the embodiment when it performs a wake-up operation as the notification operation.
In FIG. 5, the autonomous behavior robot 1 determines whether execution information has been acquired (step S21). Whether execution information has been acquired can be determined by whether the execution information acquisition unit 163 has acquired execution information from the user terminal 3. If it is determined that execution information has not been acquired (step S21: NO), the autonomous behavior robot 1 repeats the process of step S21 and waits for execution information to be acquired.
On the other hand, if it is determined that execution information has been acquired (step S21: YES), the autonomous behavior robot 1 calculates a movement route (step S22). The movement route can be calculated by the search unit 25 based on the location information included in the execution information.
After executing the process of step S22, the autonomous behavior robot 1 starts searching for the notification target person and starts moving along the calculated movement route (step S23). The search for the notification target person can be executed by the search unit 25, and the movement can be executed by the movement control unit 23 controlling the moving mechanism 29. The start of the search in step S23 is based on the time information, and the autonomous behavior robot 1 starts moving before the designated execution time. For example, when the execution time is set to 6:00 a.m., the autonomous behavior robot 1 starts moving the robot 2 before 6:00 a.m., taking into account the travel time along the movement route, so that the notification operation can be performed at the execution time. The time at which to start moving may be determined automatically by the autonomous behavior robot 1 based on the time information, or may be set manually by the user.
After executing the process of step S23, the autonomous behavior robot 1 determines whether the robot 2 has reached the location designated as the search location (step S24). Whether the robot has reached the search location can be determined, for example, by the movement control unit 23 comparing the search location with the current position (coordinate position) of the robot 2. If it is determined that the robot has not reached the search location (step S24: NO), the autonomous behavior robot 1 repeats the process of step S24 and waits for the robot 2 to reach the search location.
On the other hand, if it is determined that the robot has reached the search location (step S24: YES), the autonomous behavior robot 1 determines whether the notification target person has been found (step S25). Whether the notification target person has been found can be determined, for example, by the search unit 25 based on the information about the notification target person included in the execution information. Alternatively, if a person is present at the search location, that person may be regarded as the notification target person. If it is determined that the notification target person has not been found (step S25: NO), the autonomous behavior robot 1 returns to the process of step S23 and starts moving to the next search location. That is, the autonomous behavior robot 1 can search the plurality of designated search locations in order until it finds the notification target person. If the notification target person cannot be found at any of the designated search locations, the search operation in the illustrated flowchart may be ended, and the fact that the notification target person could not be found may be recorded or notified to the user terminal 3.
On the other hand, if it is determined that the notification target person has been found (step S25: YES), the autonomous behavior robot 1 acquires state information (step S26). The state information can be acquired by the state information acquisition unit 24. In the illustrated flowchart, the state information acquired is whether the notification target person is sleeping or awake.
After executing the process of step S26, the autonomous behavior robot 1 determines whether the notification target person is sleeping (step S27). Whether the notification target person is sleeping can be determined by the state information acquisition unit 24. If it is determined that the notification target person is sleeping (step S27: YES), the autonomous behavior robot 1 performs a wake-up operation as the notification operation (step S28). The wake-up operation can be performed by the notification operation execution unit 26. The notification operation execution unit 26 performs the wake-up operation when the current time reaches the execution time designated in advance. That is, when the autonomous behavior robot 1 arrives at the search location before the execution time, it waits until the execution time; when it arrives at the search location after the execution time, it performs the wake-up operation immediately. The wake-up operation may also be performed only when the state information acquisition unit 24 determines that the notification target person is in a predetermined sleep state.
After executing the process of step S28, the autonomous behavior robot 1 executes the process of step S27 again to determine whether the notification target person is sleeping. The process of step S27 may be re-executed after a certain time has elapsed, for example, 5, 10, or 15 minutes.
On the other hand, if it is determined that the notification target person is not sleeping (is awake) (step S27: NO), the autonomous behavior robot 1 performs a greeting operation (step S29) and ends the operation shown in the flowchart. The greeting operation is, for example, the voice output of a fixed phrase such as "Good morning" or of the current time. Whether to perform the greeting operation may be set in advance in the execution information.
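The wake-up flow of FIG. 5, including the early departure of step S23, could be organized roughly as in the following sketch. The simulated target person object and the fixed travel time are assumptions made only so that the example runs on its own.

```python
# Minimal sketch (assumed, not the patented implementation) of the wake-up flow
# in FIG. 5. All class and field names are illustrative stand-ins.
import datetime as dt
from dataclasses import dataclass


@dataclass
class TargetPerson:
    person_id: str
    search_location: str
    sleeping: bool = True


def departure_time(execution_time: dt.datetime, travel_minutes: float) -> dt.datetime:
    # step S23: start moving early enough to arrive by the execution time
    return execution_time - dt.timedelta(minutes=travel_minutes)


def run_wakeup(person: TargetPerson, execution_time: dt.datetime) -> list:
    log = [f"depart at {departure_time(execution_time, travel_minutes=10):%H:%M}"]
    log.append(f"move to {person.search_location}")         # steps S23-S24
    if person.sleeping:                                      # steps S26-S27
        log.append("wake-up operation")                      # step S28
        person.sleeping = False                              # assume the person wakes up
    log.append("greeting operation")                         # step S29
    return log


print(run_wakeup(TargetPerson("A", "bedroom"), dt.datetime(2019, 7, 24, 6, 0)))
```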
The processing in each step of the operation of the robot control program (robot control method) described in the present embodiment is not limited to the described order of execution.
Next, the execution information will be described with reference to FIG. 6. FIG. 6 is a diagram illustrating an example of the execution information according to the embodiment.
In FIG. 6, the execution information 1000 has the data items "notification target person" and "notification operation". The execution information 1000 is set on the user terminal 3 and can be acquired by the execution information acquisition unit 163.
"Notification target person" is information for identifying the notification target person for whom the notification operation is to be performed. In the figure, the three IDs "A", "B", and "C" are designated as notification target persons, but the number of notification target persons may be any number equal to or greater than one. Physical features and the like for identifying each notification target person may be registered in the autonomous behavior robot 1 in advance.
"Notification operation" is information about the notification operations to be performed for the notification target person. A plurality of notification operations can be set for one notification target person. The figure shows that two notification operations, "notification operation 1" and "notification operation 2", are set for each notification target person. A priority may be set for each notification operation of each notification target person.
Each "notification operation" has the data items "search location", "notification operation", and "time". "Search location" is information indicating the location at which to search for the notification target person. The figure shows a case where rooms such as "children's room", "bedroom", "Western-style room", and "living room" are designated as search locations. The information about each room is provided from the visualization data providing unit 161 to the user terminal 3 and can be shared between the autonomous behavior robot 1 and the user terminal 3. The user can designate a search location by selecting a room on the map displayed on the user terminal 3.
"Notification operation" is information indicating the notification operation to be performed for the notification target person. Notification operations such as "wake-up operation" and "broadcast time notification" are set here. A responsibility level can be set for each notification operation. The responsibility level is information indicating the importance of the notification operation that the autonomous behavior robot 1 performs for the notification target person. The autonomous behavior robot 1 can change the notification operation according to the set responsibility level. For example, according to the responsibility level, the autonomous behavior robot 1 may change the execution order (priority) of the notification operations, the loudness of the voice output in the notification operation, the content of the voice, the number of times the notification operation is performed, or the condition for ending the notification operation. For example, when performing a plurality of notification operations, the autonomous behavior robot 1 performs notification operations with a higher responsibility level first. By making the voice output louder or increasing the number of executions for a notification operation with a high responsibility level, the autonomous behavior robot 1 can make the notification target person more likely to notice the notification operation. The figure illustrates a case where three responsibility levels are set: "high", "medium", and "low". The autonomous behavior robot 1 may perform "high"-level notification operations in preference to "medium"-level ones, and "medium"-level notification operations in preference to "low"-level ones.
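One way to realize the responsibility-level behavior described above is a simple table from level to execution parameters, as in the following sketch; the concrete volumes, repetition counts, and priority values are illustrative assumptions.

```python
# Minimal sketch (assumed, not the patented implementation): mapping the
# responsibility level of a notification operation to execution parameters.
from dataclasses import dataclass


@dataclass(frozen=True)
class NotificationParams:
    priority: int       # smaller value = executed earlier
    volume: float       # 0.0 - 1.0
    repetitions: int


RESPONSIBILITY_PARAMS = {
    "high":   NotificationParams(priority=0, volume=1.0, repetitions=3),
    "medium": NotificationParams(priority=1, volume=0.7, repetitions=2),
    "low":    NotificationParams(priority=2, volume=0.4, repetitions=1),
}


def order_by_responsibility(operations):
    # operations: list of (name, responsibility level) pairs
    return sorted(operations, key=lambda op: RESPONSIBILITY_PARAMS[op[1]].priority)


print(order_by_responsibility([("broadcast time notification", "low"),
                               ("wake-up operation", "high")]))
```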
The "wake-up operation" can be set to "with greeting" or "without greeting". FIG. 5 described the operation for the "with greeting" case. "Broadcast time notification" is an operation that notifies of the start of a television broadcast or the like. "Outing time notification" is an operation that notifies the notification target person of the time to leave home.
"Time" is the time information at which the notification operation is to be performed. In "time", a time can be set for a notification operation that is performed only once, or a time for a notification operation that is repeated every day.
Although the figure shows the two notification operations for each of the three notification target persons as one piece of execution information, the execution information acquisition unit 163 may, for example, acquire each piece of execution information individually and merge them, scheduling the notification operations in consideration of their priorities across the plural pieces of execution information, whether they can be executed given conflicting execution times, and so on.
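The execution information of FIG. 6 and the merging described above could be represented roughly as follows; the field names and the tie-breaking rule (time first, then responsibility level) are assumptions made for illustration.

```python
# Minimal sketch (assumed, not the patented implementation): execution
# information entries and merging several entries into one ordered schedule.
from dataclasses import dataclass
import datetime as dt


@dataclass(frozen=True)
class NotificationEntry:
    target_id: str          # notification target person, e.g. "A"
    search_location: str    # e.g. "bedroom"
    operation: str          # e.g. "wake-up operation (with greeting)"
    time: dt.time
    responsibility: str     # "high", "medium", or "low"


LEVEL_ORDER = {"high": 0, "medium": 1, "low": 2}


def merge_and_schedule(*entry_lists):
    # merge individually acquired execution information and order it by
    # execution time, breaking ties with the responsibility level
    merged = [e for entries in entry_lists for e in entries]
    return sorted(merged, key=lambda e: (e.time, LEVEL_ORDER[e.responsibility]))


schedule = merge_and_schedule(
    [NotificationEntry("A", "bedroom", "wake-up operation (with greeting)", dt.time(6, 0), "high")],
    [NotificationEntry("B", "children's room", "broadcast time notification", dt.time(6, 0), "low")],
)
print([f"{e.time:%H:%M} {e.target_id} {e.operation}" for e in schedule])
```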
Next, a method of setting the execution information will be described with reference to FIG. 7. FIG. 7 is a diagram illustrating an example of the method of setting the execution information according to the embodiment.
In FIG. 7, an execution information setting screen 30 is displayed on the display screen of the user terminal 3. The execution information setting screen 30 includes a notification target person setting section 311, a time setting section 312, a notification operation setting section 313, and a search location setting section 32. The execution information setting screen 30 is displayed, for example, by an application program (app) on the user terminal 3.
The notification target person setting section 311 is a pull-down menu for selecting the notification target person. The figure shows that notification target person A is selected. The time setting section 312 is a pull-down menu for selecting the time information at which the notification operation is performed. The notification operation setting section 313 is a pull-down menu for selecting the notification operation. The figure shows that the wake-up operation is selected as the notification operation.
The search location setting section 32 displays, for example, a plan view of the layout of the rooms in the home based on the visualization data provided by the visualization data providing unit 161, and allows the search location at which to look for the notification target person to be set from the plan view. The figure shows that the search location setting section 32 displays a home position 321 to which the robot 2 returns for charging, a children's room 322, a bedroom 323, and a Western-style room 324. The user sets the search location by touching at least one of the children's room 322, the bedroom 323, and the Western-style room 324 in the search location setting section 32.
For example, when the user sets the children's room 322 as the search location, the search location setting section 32 displays a movement route 325. The movement route 325 is, for example, calculated by the search unit 25 and provided to the user terminal 3. The figure shows the movement route 325 from the home position 321 to the children's room 322 displayed as a broken line. When movement along the route is restricted, the search location setting section 32 may indicate that there is a problem with the movement. For example, when there is a step at the entrance of the children's room 322 and the robot 2 cannot pass, this may be indicated on the movement route 325 with an X mark.
A program for realizing the functions constituting the device described in the present embodiment may be recorded on a computer-readable recording medium, and the program recorded on the recording medium may be read into and executed by a computer system to perform the various processes described above in the present embodiment. The "computer system" referred to here may include an OS and hardware such as peripheral devices. The "computer system" also includes a homepage providing environment (or display environment) if a WWW system is used. The "computer-readable recording medium" refers to a writable nonvolatile memory such as a flexible disk, a magneto-optical disk, a ROM, or a flash memory, a portable medium such as a CD-ROM, or a storage device such as a hard disk built into the computer system.
The "computer-readable recording medium" further includes a medium that holds the program for a certain period of time, such as a volatile memory (for example, DRAM (Dynamic Random Access Memory)) inside a computer system serving as a server or a client when the program is transmitted via a network such as the Internet or a communication line such as a telephone line. The program may be transmitted from a computer system storing the program in a storage device or the like to another computer system via a transmission medium or by transmission waves in the transmission medium. Here, the "transmission medium" that transmits the program refers to a medium having a function of transmitting information, such as a network (communication network) such as the Internet or a communication line (communication channel) such as a telephone line. The program may realize only some of the functions described above. Furthermore, it may be a so-called difference file (difference program) that realizes the functions described above in combination with a program already recorded in the computer system.
The embodiment of the present invention has been described above with reference to the drawings, but the specific configuration is not limited to this embodiment, and various modifications (variations) are included without departing from the spirit of the present invention.
As one such modification, for example, the location where the user terminal 3 is present may be set as the search location, or the search operation may be performed with that location as the search location. The location of the user terminal 3 can be determined, for example, from the positional relationship between a wireless LAN base station (not shown) and the user terminal 3, or from the radio field intensity of the short-range wireless communication between the robot 2 and the user terminal 3. By searching for the notification target person based on the location of the user terminal 3, the user can set the search location simply by carrying the user terminal to that location, which simplifies the setting of the notification operation.
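A minimal sketch of this modification, assuming the terminal's room is estimated by comparing the received signal strength (RSSI) measured at known reference points; the reference rooms and values are illustrative.

```python
# Minimal sketch (assumed, not the patented implementation): estimating which
# room the user terminal is in from short-range wireless signal strength.

def estimate_search_location(rssi_by_room: dict) -> str:
    # pick the room whose reference point reports the strongest signal
    # (RSSI is in dBm, so the largest, i.e. least negative, value wins)
    return max(rssi_by_room, key=rssi_by_room.get)


measurements = {"bedroom": -48, "children's room": -71, "living room": -63}
print(estimate_search_location(measurements))  # bedroom
```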
The app on the user terminal 3 may also call the robot 2 at a predetermined time at which a notification operation is to be performed and have the notification operation execution unit 26 perform the notification operation. For example, when the user sets a timer in the alarm function of the user terminal 3, the app on the user terminal 3 may call the robot 2 at the set time of the timer and have it perform the notification operation. That is, the app on the user terminal 3 can link the general alarm function of the user terminal 3 with the robot 2. For example, the alarm function includes a "snooze" function in which, even after the alarm is stopped, the alarm sounds again after a predetermined time has elapsed, up to a predetermined number of repetitions. The app on the user terminal 3 may call the robot 2 and have it perform the notification operation when the alarm sounds due to the snooze function, or when the snooze repetitions have run out.
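A rough sketch of this snooze linkage, assuming the terminal app exposes a callback that calls the robot either on each snooze ring or once the repetitions run out; the callback interface is an assumption and not an actual terminal API.

```python
# Minimal sketch (assumed, not the patented implementation): a terminal-side
# alarm with snooze that calls the robot on each ring or when snooze is exhausted.

def run_alarm(snooze_repetitions: int, call_robot, call_on: str = "snooze_exhausted"):
    """call_on: 'each_snooze' or 'snooze_exhausted'."""
    for ring in range(snooze_repetitions):
        print(f"alarm ring {ring + 1} (snooze)")
        if call_on == "each_snooze":
            call_robot(reason=f"snooze ring {ring + 1}")
    if call_on == "snooze_exhausted":
        call_robot(reason="snooze repetitions exhausted")


run_alarm(3, call_robot=lambda reason: print(f"call robot 2: {reason}"))
```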
The execution information may also be set by placing markers. For example, by placing a marker indicating a search location or a notification target person at the entrance of a room, the room where the marker is placed may be set as the search location, or the notification target person may be set. For example, a person who wants the notification operation may place a marker indicating that the notification operation is desired on the doorknob of the room, so that the robot 2 recognizes the marker and the person receives the notification operation by the robot 2. Conversely, a person who does not want the notification operation may avoid the notification operation by the robot 2 by placing a marker indicating that the notification operation is not desired (for example, a "Don't Disturb" sign) on the doorknob of the room.
The execution information may also be set through a notification operation that has been performed. For example, the execution information may be set by the notification target person telling the robot 2 that performed the notification operation something like "wake me up at the same time tomorrow." Setting the execution information through a performed notification operation makes the execution information easier to set.
When the robot 2 detects, via a sensor or the like, that the notification target person has performed an unpleasant act, the robot 2 may perform a motion indicating reluctance to perform the next notification operation. An unpleasant act is an act different from a pleasant act, for example, hitting or scolding the robot 2. When the robot 2 detects, via a sensor or the like, that the notification target person has apologized or otherwise made amends to the robot 2 that is performing the reluctant motion, the robot 2 may recover its mood and perform a motion indicating that it accepts the notification operation. Reacting to unpleasant acts in this way makes it possible to deepen communication with the robot 2.
It is also conceivable that the robot 2 cannot move into the room because the door of the room where the notification target person is located is closed. In this case, the robot 2 may stop in front of the door and perform the notification operation when the execution time comes. Not only when the door is closed, but also when the search location cannot be reached because of an obstacle, a no-entry area, or the like, the notification operation execution unit 26 may stay where it is and perform the notification operation when the execution time comes. The notification operation execution unit 26 may perform the notification operation at a volume louder than the volume it would use if it had reached the search location. That is, taking into account that the robot is away from the location designated as the search location, the notification operation execution unit 26 performs the notification action at a volume louder than usual so that the notification target person will notice. As the notification action, the robot 2 may not only emit sound but also generate sound by lightly bumping its body against the door or a wall. The robot 2 may also perform the notification operation at a position where it will not collide with the door or a person when the door opens. To this end, the notification operation execution unit 26 recognizes the opening and closing range of the door, moves the robot 2 outside that range, and performs the notification operation while the robot is outside the opening and closing range of the door. This prevents the robot 2 from coming into contact with the door when the notification target person notices the notification and opens the door. In addition, only when moving for a notification operation, the search unit 25 may break the no-entry rule designated by a marker, pass through the no-entry range, and move to the search location where the notification target person is.
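The louder-than-usual notification when the search location cannot be reached could be expressed as a simple volume adjustment, as sketched below; the gain factor and the volume scale are illustrative assumptions.

```python
# Minimal sketch (assumed, not the patented implementation): raising the
# notification volume when the robot is blocked short of the search location
# (closed door, obstacle, no-entry area).

def notification_volume(base_volume: float, reached_search_location: bool,
                        blocked_gain: float = 1.5, max_volume: float = 1.0) -> float:
    # louder than usual when stopped short of the search location, capped at max
    if reached_search_location:
        return base_volume
    return min(max_volume, base_volume * blocked_gain)


print(notification_volume(0.6, reached_search_location=True))   # 0.6
print(notification_volume(0.6, reached_search_location=False))  # 0.9
```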
<Acquisition of execution information by voice input>
The autonomous behavior robot 1 may acquire the execution information by voice input. An outline of the process of acquiring execution information by voice input is given below.
 図8は、音声入力による実行情報取得に係る自律行動型ロボット1のモジュール構成例を示す図である。自律行動型ロボット1は、マイク108、音声認識部201、実行情報取得部163及び実行情報記憶部204を有する。自律行動型ロボット1は、上述したようにロボット2とデータ提供装置10の2つの装置によるシステムとして構成される。図8は、「音声入力による実行情報の取得」に関わる機能を抽出して示すものである。また、マイク108、音声認識部201、実行情報記憶部204および実行情報取得部163それぞれの機能がロボット2,データ提供装置10、あるいは、他の装置のいずれによって実現されるかは自律行動型ロボット1の仕様に基づいて任意に設計されればよい。 FIG. 8 is a diagram showing an example of a module configuration of the autonomous behavior robot 1 for acquiring execution information by voice input. The autonomous behavior robot 1 includes a microphone 108, a voice recognition unit 201, an execution information acquisition unit 163, and an execution information storage unit 204. The autonomous behavior robot 1 is configured as a system including the robot 2 and the data providing device 10 as described above. FIG. 8 shows extracted functions related to “acquisition of execution information by voice input”. It is determined whether the functions of the microphone 108, the voice recognition unit 201, the execution information storage unit 204, and the execution information acquisition unit 163 are realized by the robot 2, the data providing device 10, or another device. It may be arbitrarily designed based on the first specification.
 マイク108は、利用者の音声を入力する。マイク108は、ロボット2とデータ提供装置10のいずれに設けられていてもよい。また、マイク108は、自律行動型ロボット1の外に設けられていてもよい。例えばロボット2が行動する屋内に設置されているマイクが用いられてもよい。あるいは、利用者端末3のマイクが用いられてもよい。実行情報を音声で入力する場合、利用者はマイクに向かって実行情報を特定するための句や文などの言語表現を音声で発する。この例では、妻である利用者が夫である利用者を起床させるために、「7時に寝床のお父さんを起こしなさい。」という命令文を音声で発したものと想定する。音声認識部201は、マイク108で入力した音声データを認識して、音声データを言語データに変換する。 The microphone 108 inputs the voice of the user. The microphone 108 may be provided in either the robot 2 or the data providing device 10. Moreover, the microphone 108 may be provided outside the autonomous behavior robot 1. For example, a microphone installed indoors where the robot 2 acts may be used. Alternatively, the microphone of the user terminal 3 may be used. When the execution information is input by voice, the user utters a linguistic expression such as a phrase or a sentence for specifying the execution information toward the microphone. In this example, it is assumed that the user who is a wife utters a command "Wake up the dad at bed time at 7 o'clock" in order to wake up the user who is a husband. The voice recognition unit 201 recognizes voice data input by the microphone 108 and converts the voice data into language data.
 実行情報取得部163は、実行情報特定部202及び変換ルール記憶部203を有する。実行情報特定部202は、変換ルール記憶部203に記憶されている変換ルールを参照して、言語データに基づく実行情報を特定する。変換ルールは、実行情報のパラメータと言語表現を対応付けている。変換ルール記憶部203は、具体的には報知動作の種類に対応する動作表現、報知対象者IDに対応する人物表現、及び子供部屋や寝室などの屋内の場所に対応する場所表現を記憶している。例えば報知動作「目覚まし/挨拶あり」に「起こしなさい」という動作表現が対応付けられ、報知対象者ID「A」に「お父さん」という人物表現が対応付けられ、場所「寝室」に「寝床」という場所表現が対応付けられている。変換ルールは、時刻に対応する時間表現を対応付けてもよい。例えば時刻「12時」に「お昼」という時間表現が対応付けられてもよい。 The execution information acquisition unit 163 has an execution information identification unit 202 and a conversion rule storage unit 203. The execution information specifying unit 202 specifies the execution information based on the language data by referring to the conversion rules stored in the conversion rule storage unit 203. The conversion rule associates a parameter of execution information with a language expression. The conversion rule storage unit 203 specifically stores an operation expression corresponding to the type of notification operation, a person expression corresponding to the notification target person ID, and a location expression corresponding to an indoor place such as a child room or a bedroom. I have. For example, the motion expression “wake up” is associated with the notification operation “alarm / greeting”, the person expression “dad” is associated with the notification target ID “A”, and the place “bedroom” is “bed”. A location expression is associated. The conversion rule may associate a time expression corresponding to the time. For example, a time expression “lunch” may be associated with time “12:00”.
 When the language data contains an action expression corresponding to a type of notification operation, the execution information specifying unit 202 specifies the type of notification operation corresponding to that action expression. When the language data contains a person expression corresponding to a notification target person ID, the execution information specifying unit 202 specifies the notification target person ID corresponding to that person expression. Furthermore, when the language data contains a place expression corresponding to an indoor place, the execution information specifying unit 202 specifies the indoor place corresponding to that place expression. The specified indoor place corresponds to the search place. When the language data contains an explicit time expression such as "7 o'clock", the execution information specifying unit 202 specifies that time; when it contains a time expression corresponding to a time, the unit specifies the time corresponding to that expression. The specified time corresponds to the notification time. The execution information specifying unit 202 thus specifies execution information including the notification target person ID, the search place, the type of notification operation, and the notification time. In this example, execution information including the notification target person ID "A", the search place "bedroom", the notification operation type "alarm / with greeting", and the notification time "7:00" is specified. The execution information storage unit 204 stores the specified execution information.
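 As a minimal sketch of the conversion-rule lookup described above (the rule tables, the function name, and the simple substring matching are illustrative assumptions, not part of the disclosed design), the extraction of execution information from language data could look like this:

```python
import re

# Illustrative conversion rules: linguistic expression -> execution-information parameter.
CONVERSION_RULES = {
    "action": {"wake up": "alarm/with_greeting", "please wake": "alarm/no_greeting"},
    "person": {"dad": "A", "mom": "B"},
    "place": {"bed": "bedroom", "kids' room": "children_room"},
    "time": {"lunch": "12:00"},
}

def specify_execution_info(language_data: str) -> dict:
    """Build execution information (person ID, search place, operation, time)."""
    info = {"person_id": None, "search_place": None,
            "operation": None, "notify_time": None}
    text = language_data.lower()
    for expr, op in CONVERSION_RULES["action"].items():
        if expr in text:
            info["operation"] = op
    for expr, pid in CONVERSION_RULES["person"].items():
        if expr in text:
            info["person_id"] = pid
    for expr, place in CONVERSION_RULES["place"].items():
        if expr in text:
            info["search_place"] = place
    for expr, t in CONVERSION_RULES["time"].items():
        if expr in text:
            info["notify_time"] = t
    # Explicit clock times such as "7 o'clock".
    m = re.search(r"(\d{1,2}) o'clock", text)
    if m:
        info["notify_time"] = f"{int(m.group(1)):02d}:00"
    return info

# Example: "Wake up dad, who is in bed, at 7 o'clock." yields person_id "A",
# search_place "bedroom", operation "alarm/with_greeting", notify_time "07:00".
```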
 Hereinafter, eight implementation examples in which the autonomous behavior robot 1 acquires execution information by voice input are described. In the first implementation example, the functions of all the modules shown in FIG. 8 are realized in the data providing device 10. That is, the user's voice is input through the microphone 108 provided in the data providing device 10, and the data providing device 10 performs the voice recognition process and the execution information specifying process.
 The data providing device 10 has the microphone 108, the voice recognition unit 201, the execution information specifying unit 202, the conversion rule storage unit 203, and the execution information storage unit 204. The voice recognition unit 201 of the data providing device 10 receives voice data from the microphone 108 of the data providing device 10 and converts it into language data. The execution information specifying unit 202 of the data providing device 10 refers to the conversion rule storage unit 203 of the data providing device 10 and specifies execution information based on the language data. The execution information storage unit 204 of the data providing device 10 stores the specified execution information.
 When the execution information specified in the first implementation example does not include location information, the execution information specifying unit 202 may use the location of the microphone 108 that input the voice, that is, the place where the data providing device 10 is installed, as the location information in the execution information.
 In the second implementation example, the functions of the modules shown in FIG. 8 are realized by the attached device 4, which is attached indoors where the robot 2 acts, and the data providing device 10. That is, the user's voice is input through the microphone 108 included in the attached device 4, and the data providing device 10 performs the voice recognition process and the execution information specifying process. For this purpose, voice data is transmitted from the attached device 4 to the data providing device 10. One or more attached devices 4 are identified by IDs, and the data providing device 10 registers the installation location (indoor position coordinates) of each attached device 4 in advance.
 A plurality of attached devices 4 may be attached at different places. The attached device 4 can exchange data with the data providing device 10 by wired communication, wireless communication, or both. The wireless communication method may be, for example, short-range wireless communication such as wireless LAN, Bluetooth (registered trademark), or infrared communication. The wired communication method may be, for example, a wired LAN.
 The attached device 4 has a microphone 108 and a voice data transmission unit (not shown). The voice data transmission unit transmits the voice data input through the microphone 108 of the attached device 4 to the data providing device 10. The data providing device 10 has a voice data receiving unit (not shown), the voice recognition unit 201, the execution information specifying unit 202, the conversion rule storage unit 203, and the execution information storage unit 204. The voice data receiving unit receives the voice data sent from the attached device 4. The voice recognition unit 201 of the data providing device 10 converts the voice data received by the voice data receiving unit into language data. The execution information specifying unit 202 of the data providing device 10 refers to the conversion rule storage unit 203 and specifies execution information based on the language data. The execution information storage unit 204 of the data providing device 10 stores the specified execution information.
 When the execution information specified in the second implementation example does not include location information, the execution information acquisition unit 163 may use the location of the microphone 108 that input the voice, that is, the place where the attached device 4 is attached, as the location information in the execution information. For example, when the voice "Please wake me up in 30 minutes." is input through a microphone 108 attached in the bedroom, the bedroom may be used as the location information in the execution information. This example sentence assumes that there is only one user, so the notification target person ID can be omitted. In addition, the execution information specifying unit 202 specifies the notification time by adding 30 minutes to the current time based on the time expression "in 30 minutes". Further, the execution information specifying unit 202 specifies the notification operation type "alarm / no greeting" from the action expression "please wake me up".
 FIG. 9 is a diagram showing an example module configuration of the data providing device 10 relating to specifying the attachment location of the attached device 4. In other words, FIG. 9 shows a module configuration example of the data providing device 10 corresponding to the second implementation example, in particular of the function for specifying the location of the attached device 4 (microphone 108) that picked up the voice. The execution information acquisition unit 163 of the data providing device 10 has a location specifying unit 211 and an attachment location storage unit 212. The attachment location storage unit 212 stores the attachment location of each attached device 4 attached at a predetermined indoor place; that is, it stores each attachment location in association with the ID of the corresponding attached device 4. In the second implementation example, after the execution information specifying unit 202 specifies the execution information, the location specifying unit 211 determines whether the execution information includes location information. If the execution information includes location information, the location specifying unit 211 ends the process as it is. If the execution information does not include location information, the location specifying unit 211 obtains from the voice data receiving unit the ID of the attached device 4 that transmitted the voice data. The location specifying unit 211 then refers to the attachment location storage unit 212, specifies the attachment location corresponding to the ID of the transmitting attached device 4, and writes the specified attachment location into the execution information storage unit 204 as the search place. That is, the attachment location of the attached device 4 is stored as the location information in the execution information.
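 As a minimal sketch (names and table contents are assumptions, not taken from the disclosure), filling in a missing search place from the ID of the sending device could look like this:

```python
# Illustrative mapping from attached-device ID to its registered attachment location.
ATTACHMENT_LOCATIONS = {
    "device-01": "bedroom",
    "device-02": "living_room",
    "device-03": "children_room",
}

def fill_search_place(execution_info: dict, sender_device_id: str) -> dict:
    """If the execution information lacks a place, use the sender device's location."""
    if execution_info.get("search_place") is None:
        execution_info["search_place"] = ATTACHMENT_LOCATIONS.get(sender_device_id)
    return execution_info
```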
 In the third implementation example, the functions of the modules shown in FIG. 8 are realized by the user terminal 3 and the data providing device 10. The user terminal 3 here may be any computer terminal such as a smartphone or a laptop PC. The third implementation example uses the microphone 108 built into the user terminal 3. The user's voice is input through the microphone 108 of the user terminal 3, and the data providing device 10 performs the voice recognition process and the execution information specifying process. For this purpose, voice data is transmitted from the user terminal 3 to the data providing device 10.
 The user terminal 3 in the third implementation example has the microphone 108 and a voice data transmission unit (not shown). The voice data transmission unit transmits the voice data input through the microphone 108 of the user terminal 3 to the data providing device 10. The module configuration of the data providing device 10 is the same as in the second implementation example. That is, the third implementation example differs from the second implementation example in that it uses the user terminal 3, a movable general-purpose product, instead of the attached device 4, a fixed and dedicated product.
 The fourth implementation example also realizes the functions of the modules shown in FIG. 8 with the user terminal 3 and the data providing device 10. The user terminal 3 inputs the user's voice and also performs the voice recognition process. The data providing device 10 performs the execution information specifying process. For this purpose, language data is transmitted from the user terminal 3 to the data providing device 10. The fourth implementation example differs from the third implementation example in that the user terminal 3 performs not only voice acquisition but also voice recognition.
 The user terminal 3 in the fourth implementation example has the microphone 108, the voice recognition unit 201, and a language data transmission unit (not shown). The voice recognition unit 201 of the user terminal 3 converts the voice data input through the microphone 108 of the user terminal 3 into language data. The language data transmission unit transmits the converted language data to the data providing device 10. The data providing device 10 has a language data receiving unit (not shown), the execution information specifying unit 202, the conversion rule storage unit 203, and the execution information storage unit 204. The language data receiving unit receives the language data sent from the user terminal 3. The execution information specifying unit 202 of the data providing device 10 refers to the conversion rule storage unit 203 of the data providing device 10 and specifies execution information based on the received language data. The execution information storage unit 204 of the data providing device 10 stores the specified execution information.
 The fifth implementation example also realizes the functions of the modules shown in FIG. 8 with the user terminal 3 and the data providing device 10. The user terminal 3 inputs the user's voice and also performs the voice recognition process and the execution information specifying process. The execution information is then transmitted from the user terminal 3 to the data providing device 10. The fifth implementation example differs from the fourth implementation example in that the user terminal 3 also specifies the execution information, in addition to voice acquisition and voice recognition.
 The user terminal 3 in the fifth implementation example has the microphone 108, the voice recognition unit 201, the execution information specifying unit 202, the conversion rule storage unit 203, and an execution information transmission unit (not shown). The voice recognition unit 201 of the user terminal 3 converts the voice data input through the microphone 108 of the user terminal 3 into language data. The execution information specifying unit 202 refers to the conversion rule storage unit 203 and specifies execution information based on the converted language data. The execution information transmission unit transmits the specified execution information to the data providing device 10. The data providing device 10 has an execution information receiving unit (not shown) and the execution information storage unit 204. The execution information receiving unit receives the execution information sent from the user terminal 3. The execution information storage unit 204 of the data providing device 10 stores the received execution information.
 In the third through fifth implementation examples, when the execution information acquired by voice input does not include location information, the execution information acquisition unit 163 may use the location of the microphone 108 that input the voice, that is, the location of the user terminal 3, as the location information in the execution information. As described above, the third through fifth implementation examples have in common that processing starts from voice data acquired at the user terminal 3.
 FIG. 10 is a diagram showing an example module configuration of the data providing device 10 relating to specifying the location of the user terminal 3. In other words, FIG. 10 shows a module configuration example of the user terminal 3 and the data providing device 10 corresponding to the third through fifth implementation examples, in particular of the function for specifying the location of the user terminal 3 (a movable microphone 108). The user terminal 3 has a position measuring unit 221 and a terminal position transmission unit 222. The position measuring unit 221 measures the current position of the user terminal 3 based on, for example, beacon signals transmitted from a plurality of beacon transmitters installed at predetermined indoor positions. With this method, the user terminal 3 includes a beacon receiver, receives the beacon signal transmitted by a beacon transmitter installed at a predetermined position, and identifies the ID of the beacon transmitter. The position measuring unit 221 specifies the current position of the user terminal 3 by analyzing the relationship between the received signal strength of the beacon signal and the distance between the user terminal 3 and the beacon transmitter identified by the ID. The beacon transmitter may be included in the attached device 4 or may be provided separately from the attached device 4.
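 A minimal sketch of such a beacon-based estimate follows; the path-loss constants, beacon coordinates, and the weighted-centroid method are assumptions for illustration, not the procedure claimed in the disclosure.

```python
# Illustrative indoor position estimate from beacon signal strengths.
BEACON_POSITIONS = {"beacon-1": (0.0, 0.0), "beacon-2": (5.0, 0.0), "beacon-3": (0.0, 4.0)}

def rssi_to_distance(rssi_dbm: float, tx_power_dbm: float = -59.0, n: float = 2.0) -> float:
    """Log-distance path-loss model: approximate distance in metres from an RSSI."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * n))

def estimate_position(rssi_by_beacon: dict) -> tuple:
    """Weighted centroid of beacon positions, weighted by inverse estimated distance."""
    weights, wx, wy = 0.0, 0.0, 0.0
    for beacon_id, rssi in rssi_by_beacon.items():
        x, y = BEACON_POSITIONS[beacon_id]
        w = 1.0 / max(rssi_to_distance(rssi), 0.1)
        weights += w
        wx += w * x
        wy += w * y
    return (wx / weights, wy / weights)

# Example: estimate_position({"beacon-1": -55, "beacon-2": -70, "beacon-3": -72})
```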
 The terminal position transmission unit 222 transmits the current position of the user terminal 3 to the data providing device 10. In the third implementation example, the terminal position transmission unit 222 transmits the current position of the user terminal 3, for example, before or after the voice data transmission unit transmits the voice data. In the fourth implementation example, it transmits the current position of the user terminal 3, for example, before or after the language data transmission unit transmits the language data. In the fifth implementation example, it transmits the current position of the user terminal 3, for example, before or after the execution information transmission unit transmits the execution information.
 The data providing device 10 has a terminal position receiving unit 223, a location specifying unit 224, a floor plan data storage unit 225, and the execution information storage unit 204. The location specifying unit 224 and the floor plan data storage unit 225 may be included in the execution information acquisition unit 163. The floor plan data storage unit 225 stores the extent of each indoor place; a place here means an indoor area such as a children's room or a bedroom. The terminal position receiving unit 223 receives the current position of the user terminal 3 from the user terminal 3. After the execution information specifying unit 202 specifies the execution information, the location specifying unit 224 determines whether the execution information includes location information. If the execution information includes location information, the location specifying unit 224 ends the process. If the execution information does not include location information, the location specifying unit 224 obtains the current position of the user terminal 3 from the terminal position receiving unit 223. The location specifying unit 224 then refers to the floor plan data storage unit 225 and writes the place containing the current position of the user terminal 3 as the location information in the execution information stored in the execution information storage unit 204. Since the location information in the execution information indicates the search place, this means that the search is performed at the place containing the current position of the user terminal 3. This concludes the description of FIG. 10.
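 The floor-plan lookup described above could be sketched as a simple point-in-rectangle test; the room names, coordinates, and rectangular extents are illustrative assumptions.

```python
from typing import Optional, Tuple

# Illustrative floor plan: each place is stored as a rectangular extent.
FLOOR_PLAN = {
    "bedroom":       {"x_min": 0.0, "x_max": 4.0, "y_min": 0.0, "y_max": 3.0},
    "living_room":   {"x_min": 4.0, "x_max": 9.0, "y_min": 0.0, "y_max": 5.0},
    "children_room": {"x_min": 0.0, "x_max": 4.0, "y_min": 3.0, "y_max": 6.0},
}

def place_containing(position: Tuple[float, float]) -> Optional[str]:
    """Return the name of the place whose extent contains the given (x, y), if any."""
    x, y = position
    for name, r in FLOOR_PLAN.items():
        if r["x_min"] <= x < r["x_max"] and r["y_min"] <= y < r["y_max"]:
            return name
    return None
```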
 In the sixth implementation example, the functions of the modules shown in FIG. 8 are realized by the robot 2 and the data providing device 10. The user's voice is input through the microphone 108 of the robot 2, and the data providing device 10 performs the voice recognition process and the execution information specifying process. For this purpose, voice data is transmitted from the robot 2 to the data providing device 10.
 The robot 2 has the microphone 108 and a voice data transmission unit (not shown). The voice data transmission unit transmits the voice data input through the microphone 108 of the robot 2 to the data providing device 10. The data providing device 10 is configured as in the second implementation example.
 The seventh implementation example also realizes the functions of the modules shown in FIG. 8 with the robot 2 and the data providing device 10. The robot 2 inputs the user's voice and also performs the voice recognition process. The data providing device 10 performs the execution information specifying process. For this purpose, language data is transmitted from the robot 2 to the data providing device 10.
 The robot 2 has the microphone 108, the voice recognition unit 201, and a language data transmission unit (not shown). The voice recognition unit 201 of the robot 2 converts the voice data input through the microphone 108 of the robot 2 into language data. The language data transmission unit transmits the converted language data to the data providing device 10. The data providing device 10 is configured as in the fourth implementation example.
 The eighth implementation example also realizes the functions of the modules shown in FIG. 8 with the robot 2 and the data providing device 10. The robot 2 inputs the user's voice and also performs the voice recognition process and the execution information specifying process. The execution information is then transmitted from the robot 2 to the data providing device 10.
 The robot 2 has the microphone 108, the voice recognition unit 201, the execution information specifying unit 202, the conversion rule storage unit 203, and an execution information transmission unit (not shown). The voice recognition unit 201 of the robot 2 converts the voice data input through the microphone 108 of the robot 2 into language data. The execution information specifying unit 202 refers to the conversion rule storage unit 203 and specifies execution information based on the converted language data. The execution information transmission unit transmits the specified execution information to the data providing device 10. The data providing device 10 is configured as in the fifth implementation example.
 In the sixth through eighth implementation examples, when the execution information acquired by voice input does not include location information, the execution information acquisition unit 163 may use the location of the microphone 108 that input the voice, that is, the place where the robot 2 is, as the location information in the execution information. As described above, the sixth through eighth implementation examples have in common that processing starts from voice data acquired by the robot 2.
 FIG. 11 is a diagram showing an example module configuration of the data providing device 10 relating to specifying the location of the robot 2. In other words, FIG. 11 shows a module configuration example of the robot 2 and the data providing device 10 corresponding to the sixth through eighth implementation examples, in particular of the function for specifying the location of the robot 2 (a movable microphone 108). The robot 2 has a movement control unit 23 and a robot position transmission unit 231. The robot position transmission unit 231 obtains the current position of the robot 2 from the movement control unit 23 and transmits it to the data providing device 10. The movement control unit 23 may measure the current position based on radio waves received from wireless communication devices installed at predetermined positions, or based on captured images. For example, the movement control unit 23 measures the current position of the robot 2 based on beacon signals transmitted from a plurality of beacon transmitters installed at predetermined indoor positions. With this method, the robot 2 includes a beacon receiver, receives the beacon signal transmitted by a beacon transmitter installed at a predetermined position, and identifies the ID of the beacon transmitter. The movement control unit 23 specifies the current position of the robot 2 by analyzing the relationship between the received signal strength of the beacon signal and the distance between the robot 2 and the beacon transmitter identified by the ID. Alternatively, the movement control unit 23 may specify the current position using SLAM (Simultaneous Localization and Mapping) technology, which performs self-position estimation and environment map creation at the same time.
 In the sixth implementation example, the robot position transmission unit 231 transmits the current position of the robot 2, for example, before or after the voice data transmission unit transmits the voice data. In the seventh implementation example, it transmits the current position of the robot 2, for example, before or after the language data transmission unit transmits the language data. In the eighth implementation example, it transmits the current position of the robot 2, for example, before or after the execution information transmission unit transmits the execution information.
 In addition to the location specifying unit 224 and the floor plan data storage unit 225 described above, the data providing device 10 has a robot position receiving unit 232. The robot position receiving unit 232 receives the current position of the robot 2 from the robot 2. After the execution information specifying unit 202 specifies the execution information, the location specifying unit 224 determines whether the execution information includes location information. If the execution information includes location information, the location specifying unit 224 ends the process. If the execution information does not include location information, the location specifying unit 224 obtains the current position of the robot 2 from the robot position receiving unit 232. The location specifying unit 224 then refers to the floor plan data storage unit 225 and writes the place containing the current position of the robot 2 as the location information in the execution information stored in the execution information storage unit 204. Since the location information in the execution information indicates the search place, this means that the search is performed at the place containing the current position of the robot 2.
 When the execution information acquired in any of the first through eighth implementation examples does not include time information relating to the time at which the notification operation is to be executed, the notification operation execution unit 26 may execute the notification operation immediately upon acquiring the execution information. Alternatively, when the execution information acquired by the execution information acquisition unit 163 does not include such time information, the notification operation execution unit 26 may execute the notification operation when a predetermined time (for example, one minute) has elapsed from the time the execution information was acquired. The predetermined time is a duration long enough that, assuming the character of the robot 2 understands and reacts to the user's instruction, it could be imagined as the time the character needed for that understanding and reaction. This concludes the description of acquiring execution information by voice input.
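 A minimal scheduling sketch under these assumptions (the delay constant, the time format, and the function name are illustrative; next-day rollover of the notification time is ignored):

```python
import datetime

REACTION_DELAY = datetime.timedelta(minutes=1)  # assumed "understanding and reaction" delay

def decide_notification_time(execution_info: dict,
                             now: datetime.datetime,
                             immediate: bool = False) -> datetime.datetime:
    """Choose when to run the notification operation.

    If the execution information carries a notification time, use it; otherwise
    run immediately or after a short, character-like delay.
    """
    if execution_info.get("notify_time") is not None:
        hh, mm = map(int, execution_info["notify_time"].split(":"))
        return now.replace(hour=hh, minute=mm, second=0, microsecond=0)
    return now if immediate else now + REACTION_DELAY
```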
<Provision of information on the notification target person after notification>
 Next, the provision of information on the notification target person after the notification is described. After executing the notification operation, the robot 2 may photograph the notification target person, and the data providing device 10 may provide the captured image to the user terminal 3. Furthermore, after executing the notification operation, the robot 2 may acquire state information on the notification target person, and the data providing device 10 may provide the state information to the user terminal 3.
 For example, when the robot 2 performs a wake-up operation with the husband user as the notification target person, the robot 2 photographs the husband user with its own camera following the wake-up operation. The robot 2 performs the actual shooting after confirming that the user appears in the preview image obtained from the camera. Accordingly, the captured image recorded by the camera of the robot 2 in the actual shooting shows the husband user after the wake-up operation. The captured image may be a still image or a moving image. The captured image is sent to the data providing device 10 and stored. Meanwhile, when the wife user instructs the application of the user terminal 3 to view the captured image, the application of the user terminal 3 sends a request for the post-wake-up captured image to the data providing device 10. In response to this request, the data providing device 10 sends the stored captured image to the application of the user terminal 3. The application of the user terminal 3 displays the received captured image on the display device of the user terminal 3. In this way, the wife user can confirm from the captured image displayed on the display device of the user terminal 3 whether the husband user got up after the robot 2 performed the wake-up operation.
 FIG. 12 is a diagram showing an example module configuration of the autonomous behavior robot 1 that provides the captured image and the state information after the notification operation is executed. FIG. 12 shows a module configuration example of the robot 2 and the data providing device 10, in particular of the information providing function. First, the provision of the captured image is described. The robot 2 has a photographing unit 21 and a captured image transmission unit 241. After the robot 2 executes the notification operation, the photographing unit 21 photographs the notification target person as the subject and generates a captured image taken after the notification operation. The photographing unit 21 performs the actual shooting after confirming, for example by face recognition processing, that the face of the notification target person appears in the live view image. The captured image transmission unit 241 transmits the captured image taken after the notification operation to the data providing device 10.
 The data providing device 10 has a captured image receiving unit 242, a captured image storage unit 243, and a captured image providing unit 244. After the robot 2 executes the notification operation, the captured image receiving unit 242 receives the captured image sent from the robot 2, and the captured image storage unit 243 stores the received captured image. When the application of the user terminal 3 requests the captured image taken after the notification operation, the captured image providing unit 244 reads the captured image stored in the captured image storage unit 243 and transmits it to the application of the user terminal 3. In this way, the application of the user terminal 3 can display the captured image showing the notification target person after the notification operation. The data providing device 10 can also accumulate the captured images taken after notification operations as a record of the notification target person's reactions.
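 The robot-side part of this flow could be sketched as follows; all callables are injected assumptions standing in for the photographing unit, the face recognition, and the captured image transmission unit, not the disclosed API.

```python
from typing import Callable, Optional

def photograph_after_notification(
    capture_preview: Callable[[], bytes],
    face_matches: Callable[[bytes], bool],
    capture_photo: Callable[[], bytes],
    upload: Callable[[bytes], None],
) -> Optional[bytes]:
    """Confirm the target appears in the preview, then take and upload the actual shot."""
    preview = capture_preview()          # live-view frame
    if not face_matches(preview):        # face recognition on the preview
        return None                      # target not visible; do not shoot
    image = capture_photo()              # actual still image or video clip
    upload(image)                        # send to the data providing device
    return image
```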
 Next, the provision of state information is described. For example, when the robot 2 performs a wake-up operation with the husband user as the notification target person, the robot 2 acquires, following the wake-up operation, state information indicating whether the husband user is sleeping or awake. The state information is sent to the data providing device 10 and stored. Meanwhile, when the wife user instructs the application of the user terminal 3 to view the state information, the application of the user terminal 3 sends a request for the post-wake-up state information to the data providing device 10. In response to this request, the data providing device 10 sends the stored state information to the application of the user terminal 3. Based on the received state information, the application of the user terminal 3 displays on the display device of the user terminal 3 a message indicating whether the husband user is sleeping or awake. If the state information indicates sleeping, a message such as "Dad is still sleeping." is displayed. If the state information indicates being awake, a message such as "Dad is already up." is displayed. In this way, the wife user can confirm from the message displayed on the display device of the user terminal 3 whether the husband user got up after the robot 2 performed the wake-up operation.
 The robot 2 has a state information acquisition unit 24 and a state information transmission unit 251. After the robot 2 executes the notification operation, the state information acquisition unit 24 acquires state information relating to the state of the notification target person. As described above, the state information indicates, for example, sleeping or being awake. The state information transmission unit 251 transmits the state information acquired after the notification operation to the data providing device 10.
 The data providing device 10 has a state information receiving unit 252, a state information storage unit 253, and a state information providing unit 254. After the robot 2 executes the notification operation, the state information receiving unit 252 receives the state information sent from the robot 2, and the state information storage unit 253 stores the received state information. When the application of the user terminal 3 requests the state information of the notification target person after the notification operation, the state information providing unit 254 reads the state information stored in the state information storage unit 253 and transmits it to the application of the user terminal 3. In this way, the application of the user terminal 3 can display state information on the state of the notification target person after the notification operation. For example, the user of the user terminal 3, the wife, can confirm from the state information displayed on the display device of the user terminal 3 whether the notification target person, her husband, got up after the robot 2 performed the wake-up operation. The data providing device 10 can also accumulate the state information acquired after notification operations as a record of the notification target person's reactions.
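 On the terminal side, the mapping from received state information to the displayed message could be as small as the following sketch; the state labels and the wording of the messages are assumptions.

```python
# Illustrative mapping from a state label to the message shown by the terminal application.
STATE_MESSAGES = {
    "sleeping": "Dad is still sleeping.",
    "awake": "Dad is already up.",
}

def state_message(state_info: str) -> str:
    """Return the display message for a state label received from the data providing device."""
    return STATE_MESSAGES.get(state_info, "State unknown.")
```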
 The captured image providing unit 244 and the state information providing unit 254 may transmit the captured image and the state information to the user terminal 3 together. For example, when the wife user instructs the application of the user terminal 3 to view the captured image and the state information, the application of the user terminal 3 sends a request for the post-wake-up captured image and state information to the data providing device 10. In response to this request, the data providing device 10 sends the stored captured image and state information to the application of the user terminal 3. The application of the user terminal 3 displays on the display device of the user terminal 3 the captured image together with a message, based on the received state information, indicating whether the husband user is sleeping or awake. If the husband user is sleeping, a captured image of the sleeping husband and the message "Dad is still sleeping." are displayed at the same time. If the husband user is awake, a captured image of the husband who is up and the message "Dad is already up." are displayed at the same time.
 A live mode may also be provided in which the current captured image and state information are automatically sent from the data providing device 10 to the application of the user terminal 3 and can be viewed immediately on the user terminal 3. The live mode is set by the user's operation in the application of the user terminal 3. When the live mode is set in the application of the user terminal 3, a live mode setting instruction is sent to the data providing device 10. Upon receiving the live mode setting instruction, the data providing device 10 transmits captured images immediately. That is, the captured image providing unit 244 immediately transmits a captured image received from the robot 2 and stored in the captured image storage unit 243 to the application of the user terminal 3. The application of the user terminal 3 immediately displays the received captured image on the display device of the user terminal 3. In the live mode, the application of the user terminal 3 may also display a message representing the current state information of the notification target person. For this purpose, the state information providing unit 254 may immediately transmit the state information received from the robot 2 and stored in the state information storage unit 253 to the application of the user terminal 3, and the application of the user terminal 3 may immediately display a message representing the received state information on the display device of the user terminal 3.
 In the live mode for the wake-up operation, the captured image providing unit 244 may stop transmitting captured images when the state information of the notification target person switches from sleeping to awake. At the same time, the state information providing unit 254 may stop transmitting state information. This eliminates unnecessary data transmission processing.
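 A rough sketch of such a live-mode loop with the stop condition is shown below; the callables, the polling interval, and the state labels are assumptions standing in for the providing units described above.

```python
import time
from typing import Callable

def run_live_mode(
    latest_state: Callable[[], str],
    latest_image: Callable[[], bytes],
    push_to_app: Callable[[bytes, str], None],
    poll_interval_s: float = 1.0,
) -> None:
    """Push images and state to the terminal application until the target is awake."""
    while True:
        state = latest_state()
        push_to_app(latest_image(), state)   # immediate transmission to the app
        if state == "awake":                 # sleeping -> awake: stop transmitting
            break
        time.sleep(poll_interval_s)
```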
 When photographing the notification target person after executing the notification operation, the photographing unit 21 may change the conditions for photographing the notification target person according to the intimacy between the notification target person and the robot. A positive correlation may be given between that intimacy and the shooting time; that is, the shooting time of a moving image may be set so that it becomes longer as the intimacy is higher and shorter as the intimacy is lower. A negative correlation may also be given between the intimacy and the shooting distance; that is, the shooting distance may be set so that it becomes shorter as the intimacy is higher and longer as the intimacy is lower. The movement control unit 23 determines a position at which the distance between the robot 2 and the notification target person equals the shooting distance, and controls the movement mechanism 29 so that the robot 2 moves to that position.
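 One possible way to realize these correlations is a simple linear mapping, sketched below; the numeric ranges and the normalization of intimacy to [0, 1] are assumptions, since the text only states the sign of each correlation.

```python
def shooting_conditions(intimacy: float) -> dict:
    """Derive shooting time and distance from intimacy (assumed normalized to [0.0, 1.0])."""
    intimacy = min(max(intimacy, 0.0), 1.0)
    shooting_time_s = 5.0 + 25.0 * intimacy        # positive correlation: longer when intimate
    shooting_distance_m = 2.0 - 1.5 * intimacy     # negative correlation: closer when intimate
    return {"time_s": shooting_time_s, "distance_m": shooting_distance_m}
```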
<Supplement on the intimacy between the notification target person and the robot>
 In the section "Intimacy between the notification target person and the robot" above, it was described that the notification operation is executed in cooperation with another robot 2 when the intimacy satisfies a predetermined intimacy condition. This point is supplemented below.
 The predetermined intimacy condition may be a condition that the intimacy is equal to or greater than a reference value, or a condition that the intimacy is less than a reference value. It may also be a condition that involves the relationship with the intimacy between another robot 2 and the notification target person. For example, the intimacy condition may be that the intimacy A between this robot 2a and the notification target person is equal to or greater than the intimacy B between another robot 2b and the notification target person multiplied by a predetermined ratio. That is, the intimacy condition may be the result of comparing intimacies to which weights have been applied for some or all of the plurality of robots 2.
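 The weighted comparison mentioned in the example could be expressed as the following one-line check; the default ratio value is an assumption.

```python
def intimacy_condition_met(intimacy_a: float, intimacy_b: float,
                           ratio: float = 0.8) -> bool:
    """True when robot 2a's intimacy A is at least ratio times robot 2b's intimacy B."""
    return intimacy_a >= ratio * intimacy_b
```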
<Notification operation in which two robots 2 cooperate>
 Three specific examples of a notification operation in which two robots 2 cooperate are shown below. In each specific example, the robot 2a and the robot 2b operate in the same indoor space.
 FIG. 13 is a diagram showing a first specific example of a notification operation in which two robots 2 cooperate. In the first specific example, the robot 2a that found the notification target person notifies the robot 2b of the discovery location, and the notified robot 2b moves to the discovery location and performs the notification operation. That is, the presentation is that the robot 2a calls in the robot 2b when performing the notification operation, and the called robot 2b also performs the notification operation together with the robot 2a.
 For example, when the robot 2a finds the notification target person sleeping in the bedroom and is about to perform a wake-up operation for the notification target person, it notifies the robot 2b that the notification target person has been found in the bedroom. The robot 2a then starts the wake-up operation for the notification target person. Meanwhile, upon receiving the notification that the notification target person has been found in the bedroom, the robot 2b starts moving to the bedroom. When the robot 2b arrives at the bedroom, it starts the wake-up operation for the notification target person together with the robot 2a. The number of robots 2 performing the wake-up operation therefore increases from one to two, and the wake-up effect on the notification target person is strengthened partway through. As a result, even when the notification target person is in a deep sleep and does not wake up easily, it becomes easier to wake the notification target person.
 The robot 2a has a discovery location notification transmission unit (not shown) that transmits a notification of the location where the notification target person was found, that is, a discovery location notification, to the robot 2b, and the robot 2b has a discovery location notification receiving unit (not shown) that receives the discovery location notification from the robot 2a.
 In step S31, as described for step S21 of FIG. 5 and in the section "Acquisition of execution information by voice input", the execution information acquisition unit 163 of the robot 2a acquires execution information. In step S32, as described for step S22 of FIG. 5, the search unit 25 of the robot 2a calculates a movement route to the search place. Then, in step S33, as described for step S23 of FIG. 5, the search unit 25 of the robot 2a starts searching for the notification target person, and the movement control unit 23 of the robot 2a controls the movement mechanism 29 to start moving to the search place.
 When, in step S34, the movement control unit 23 of the robot 2a determines that the robot 2a has reached the search place, as described for step S24 of FIG. 5, and further, in step S35, the search unit 25 of the robot 2a finds the notification target person, as described for step S25 of FIG. 5, the discovery location notification transmission unit of the robot 2a transmits the discovery location notification to the robot 2b (step S36). Then, in step S37, the notification operation execution unit 26 of the robot 2a performs the notification operation, as exemplified for steps S27 to S29 of FIG. 5.
 When the discovery location notification receiving unit of the robot 2b receives the discovery location notification from the robot 2a, the search unit 25 of the robot 2b calculates a movement route to the discovery location (step S41). The movement control unit 23 of the robot 2b then controls the movement mechanism 29 to start moving to the discovery location (step S42). When the movement control unit 23 of the robot 2b determines that the robot 2b has reached the discovery location (step S43), the notification operation execution unit 26 of the robot 2b performs the notification operation (step S44).
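 The two sides of this first cooperation example could be sketched as follows; the message passing, navigation, and notification routines are injected assumptions standing in for the units described above (search unit 25, movement control unit 23, notification operation execution unit 26).

```python
from typing import Callable, Tuple

Position = Tuple[float, float]

def robot_a_flow(find_target: Callable[[], Position],
                 notify_peer: Callable[[Position], None],
                 perform_notification: Callable[[], None]) -> None:
    location = find_target()          # steps S33-S35: search for and find the target
    notify_peer(location)             # step S36: send the discovery location to robot 2b
    perform_notification()            # step S37: notification operation

def robot_b_flow(receive_location: Callable[[], Position],
                 move_to: Callable[[Position], None],
                 perform_notification: Callable[[], None]) -> None:
    location = receive_location()     # receive the discovery location notification
    move_to(location)                 # steps S41-S43: plan a route and move there
    perform_notification()            # step S44: notification operation
```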
 In the example of FIG. 13, only the robot 2a searches and the robot 2b does not; however, the robot 2b may search at the same time as the robot 2a. If the robot 2b finds the notification target person first, then, conversely to the example of FIG. 13, the robot 2b may transmit the discovery location notification to the robot 2a, and the robot 2a, having received the discovery location notification, may move to the discovery location and perform the notification operation. To this end, the robot 2b may further have a discovery location notification transmission unit, and the robot 2a may further have a discovery location notification reception unit.
 The discovery location notification may be transmitted via the data providing device 10. For example, the discovery location notification transmission unit of the robot 2a may transmit the discovery location notification to the data providing device 10, and a discovery location notification transfer unit (not shown) of the data providing device 10 may receive the discovery location notification and transfer it to the robot 2b. The discovery location notification reception unit of the robot 2b may then receive the transferred discovery location notification.
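 When relayed in this way, the data providing device 10 only needs to know which robots to forward to. A minimal sketch of such a relay, assuming an illustrative registration interface that the disclosure does not define:

    class DataProvidingDevice:
        """Relays a discovery location notification to every robot except the sender."""
        def __init__(self):
            self.receivers = {}   # robot name -> callback that accepts a notification

        def register(self, name, receive_callback):
            self.receivers[name] = receive_callback

        def forward_discovery_location(self, sender, notification):
            for name, receive in self.receivers.items():
                if name != sender:
                    receive(notification)

    device = DataProvidingDevice()
    device.register("robot 2a", lambda n: print("robot 2a received", n))
    device.register("robot 2b", lambda n: print("robot 2b received", n))
    device.forward_discovery_location("robot 2a", {"person": "father", "location": "bedroom"})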
 FIG. 14 is a diagram showing a second specific example of a notification operation in which two robots 2 cooperate. In the second specific example, when both the robots 2a and 2b have come within a distance of the notification target person that is equal to or less than a predetermined reference, the robots 2a and 2b perform the notification operation in synchronization. Hereinafter, coming within a distance of the notification target person equal to or less than the predetermined reference is expressed as "approaching the notification target person."
 In the example of FIG. 14, the robot 2a, which approaches the notification target person first, waits until the robot 2b also approaches the notification target person. When the robot 2b approaches the notification target person, the robots 2a and 2b stage the scene so that they perform the notification operation with matching timing.
 For example, when the robot 2a finds the notification target person sleeping in the bedroom, it approaches the notification target person. The robot 2a then notifies the robot 2b that it has approached the notification target person. The robot 2a waits as it is, without performing the wake-up operation, until the robot 2b approaches the notification target person. Meanwhile, the robot 2b also finds the notification target person sleeping in the bedroom and approaches him or her. When the robot 2a and the robot 2b have both approached the notification target person, the two robots start the wake-up operation for the notification target person all at once. The wake-up effect is therefore strong, and the notification target person is easily woken. It is also possible to stage a scene in which the robots 2a and 2b team up to play a prank.
 The robot 2a has an approach notification transmission unit (not shown) that transmits a notification that it has approached the notification target person (hereinafter referred to as an "approach notification") to the other robot 2b, and an approach notification reception unit (not shown) that receives an approach notification from the other robot 2b. The robot 2b likewise has an approach notification transmission unit and an approach notification reception unit.
 The processing of the robot 2a shown in steps S51 to S54 of FIG. 14 is the same as the processing of the robot 2a shown in steps S31 to S34 of FIG. 13. When the search unit 25 of the robot 2a finds the notification target person and determines that the robot 2a has approached the notification target person (step S55), the approach notification transmission unit transmits an approach notification to the robot 2b (step S56). The robot 2a then waits until it receives an approach notification from the robot 2b.
 The processing of the robot 2b shown in steps S61 to S63 of FIG. 14 is the same as the processing of the robot 2a shown in steps S31 to S33 of FIG. 13. It is assumed that the robot 2b receives the approach notification from the robot 2a before arriving at the search location. Accordingly, after the approach notification reception unit of the robot 2b receives the approach notification from the robot 2a, the movement control unit 23 of the robot 2b determines that the robot 2b has reached the search location (step S64). When the search unit 25 of the robot 2b then finds the notification target person and determines that the robot 2b has approached him or her (step S65), the approach notification transmission unit of the robot 2b transmits an approach notification to the robot 2a (step S66). The notification operation execution unit 26 of the robot 2b determines that a predetermined cooperation condition is satisfied and performs the notification operation. The predetermined cooperation condition is that an approach notification has been received from the other robot 2a and the robot 2b itself has approached the notification target person.
 When the approach notification reception unit of the robot 2a receives the approach notification from the robot 2b, the notification operation execution unit 26 of the robot 2a determines that the predetermined cooperation condition is satisfied and performs the notification operation. The predetermined cooperation condition here is that an approach notification has been received from the other robot 2b and the robot 2a itself has approached the notification target person.
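 The synchronized start can be reduced to a two-flag condition per robot. Below is a minimal sketch in Python, assuming illustrative names and a direct peer reference; it shows only the cooperation condition, not movement or person detection.

    class CooperatingRobot:
        def __init__(self, name):
            self.name = name
            self.peer = None
            self.self_approached = False
            self.peer_approached = False
            self.started = False

        def on_self_approach(self):                      # steps S55 / S65
            self.self_approached = True
            self.peer.receive_approach_notification()    # steps S56 / S66
            self._maybe_start()

        def receive_approach_notification(self):
            self.peer_approached = True
            self._maybe_start()

        def _maybe_start(self):
            # cooperation condition: own approach AND an approach notification from the peer
            if self.self_approached and self.peer_approached and not self.started:
                self.started = True
                print(f"{self.name}: starting the wake-up operation in sync")

    a, b = CooperatingRobot("robot 2a"), CooperatingRobot("robot 2b")
    a.peer, b.peer = b, a
    a.on_self_approach()   # robot 2a approaches first and waits
    b.on_self_approach()   # robot 2b approaches; both robots now start together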
 The approach notification may be transmitted via the data providing device 10. For example, the approach notification transmission unit of the robot 2a may transmit the approach notification to the data providing device 10, and an approach notification transfer unit (not shown) of the data providing device 10 may receive the approach notification and transfer the received approach notification to the robot 2b. The approach notification reception unit of the robot 2b may then receive the transferred approach notification.
 FIG. 15 is a diagram showing a third specific example of a notification operation in which two robots 2 cooperate. In the third specific example, the timing of the notification operation by the robot 2a and the timing of the notification operation by the robot 2b are staggered, so the notification operations of the two robots 2a and 2b never overlap. For example, the voice output of the robot 2a and the voice output of the robot 2b are never performed at the same time, so the result is not noisy and the content of the speech is easy to hear.
 The third specific example relates to control at the stage where both the robots 2a and 2b perform the notification operation, that is, control from the timing at which the notification operation shown in step S44 of FIG. 13 starts in the first specific example, or from the timing at which the notification operations shown in steps S57 and S67 of FIG. 14 start in the second specific example.
 The robot 2a has an operation notification transmission unit (not shown) that transmits to the other robot 2b a notification that it has performed the notification operation (hereinafter referred to as an "operation notification"), and an operation notification reception unit (not shown) that receives an operation notification from the other robot 2b. The robot 2b likewise has an operation notification transmission unit and an operation notification reception unit.
 In the example of FIG. 15, the robot 2a performs the notification operation first, and the two robots then take turns, each performing the notification operation twice. First, the notification operation execution unit 26 of the robot 2a performs the notification operation (step S71). When the notification operation is finished, the operation notification transmission unit of the robot 2a transmits an operation notification to the robot 2b (step S72). The robot 2a then waits until it receives an operation notification from the robot 2b.
 When the operation notification reception unit of the robot 2b receives the operation notification from the robot 2a (step S81), the robot 2b waits for a predetermined time, and the notification operation execution unit 26 of the robot 2b then performs the notification operation (step S82). The predetermined waiting time is an interval (for example, 1 to 5 seconds) long enough to give the impression that the robots 2a and 2b are repeating the notification while watching how the notification target person reacts, rather than pressing him or her without pause. When the notification operation is finished, the operation notification transmission unit of the robot 2b transmits an operation notification to the robot 2a (step S83). The robot 2b then waits until it receives an operation notification from the robot 2a.
 Thereafter, when the operation notification reception unit of the robot 2a receives the operation notification from the robot 2b (step S73), the robot 2a waits for the predetermined time, and the notification operation execution unit 26 of the robot 2a then performs the notification operation (step S74). When the notification operation is finished, the operation notification transmission unit of the robot 2a transmits an operation notification to the robot 2b (step S75). The robot 2a then ends the entire process.
 The robot 2b repeats the same processing as in steps S81 to S83 (steps S84 to S86) and ends the entire process.
 For example, when the notification operation is a wake-up operation, both the robots 2a and 2b may end the processing of the third specific example at the timing when the state information of the notification target person is determined to indicate that he or she is awake.
 For example, if the notification target person wakes up after the first wake-up operation by the robot 2a, the first wake-up operation by the robot 2b, and the second wake-up operation by the robot 2a, the second wake-up operation by the robot 2b is not performed. In other words, no needless wake-up operation is performed on a notification target person who is already awake.
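 The turn-taking and the early stop can be sketched as a simple loop. The following Python sketch assumes a fixed one-second pause in place of the 1-to-5-second interval and an is_awake() check supplied by the caller; both are illustrative assumptions.

    import time

    def alternating_wakeup(robots, rounds_per_robot=2, is_awake=lambda: False):
        for round_no in range(1, rounds_per_robot + 1):
            for robot in robots:                      # robot 2a takes its turn first
                if is_awake():                        # stop as soon as the target is awake
                    print("target is awake; remaining wake-up operations are skipped")
                    return
                print(f"{robot}: wake-up operation (round {round_no})")
                # the robot that just finished sends its operation notification; the other
                # robot waits roughly 1 to 5 seconds before its turn (fixed at 1 s here)
                time.sleep(1.0)

    alternating_wakeup(["robot 2a", "robot 2b"])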
<Notification operation by message reception>
 When the autonomous behavior robot 1 receives a message addressed to a user, it may perform a notification operation with that user as the notification target person. Four example methods of the notification operation by message reception are described below.
 First, a specific example of the first method will be described. By sending from the parent's user terminal 3 an e-mail whose body contains the message "You may eat the cake in the refrigerator." to the user who is a child, the robot 2 can be made to read the message aloud to the child at the predetermined search location "children's room". In this example, it is assumed that the child's e-mail address and the child's search location "children's room" have been set in advance as the child's user information.
 When the parent sets the child's e-mail address as the destination and the user terminal 3 sends an e-mail whose body contains the message "You may eat the cake in the refrigerator.", the autonomous behavior robot 1 receives this e-mail.
 The autonomous behavior robot 1 determines from the destination e-mail address that the notification target person is the child, and the robot 2 moves to the child's search location, the "children's room", and searches for the child. When the robot 2 finds the child, it reads aloud the body of the e-mail, "You may eat the cake in the refrigerator."
 Thus, in the first method, the notification target person is identified from the e-mail address, and the search location of the notification target person is identified from the user information of the notification target person. The body of the e-mail is then read aloud as the notification operation.
 Next, a specific example of the second method will be described. By sending from the wife's user terminal 3 an e-mail whose body contains the location expression "@bed" to the user who is the husband, the robot 2 can be made to perform the predetermined notification operation "wake-up operation" for the husband in the bedroom. In this example, it is assumed that there is a message description rule under which a location expression designating a search location is written after "@". It is also assumed that the husband's e-mail address and the notification operation "wake-up operation" for the husband have been set in advance as the husband's user information.
 When the wife sets the husband's e-mail address as the destination and the wife's user terminal 3 sends an e-mail whose body contains the location expression "@bed", the autonomous behavior robot 1 receives this e-mail.
 The autonomous behavior robot 1 determines from the destination e-mail address that the notification target person is the husband, and identifies from the body of the e-mail the search location "bedroom" corresponding to the location expression "@bed". The autonomous behavior robot 1 further identifies the notification operation "wake-up operation" for the husband based on the husband's user information. The robot 2 then moves to the search location "bedroom" and searches for the husband. When the robot 2 finds the husband, it performs the notification operation "wake-up operation" for him.
 Thus, in the second method, the notification target person is identified from the e-mail address, the search location is identified from the body of the e-mail, and the notification operation for the notification target person is identified from the user information of the notification target person.
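 The "@" message description rule can be parsed with a few lines of string handling. The sketch below is illustrative only: the mapping from location words to rooms is an assumption, and the disclosure does not fix the rule to this exact form.

    LOCATION_WORDS = {"bed": "bedroom", "寝床": "bedroom", "kids": "children's room"}

    def parse_message_body(body):
        """Return (text_to_read, search_location); either element may be None."""
        if "@" not in body:
            return body.strip() or None, None
        text, _, place_word = body.partition("@")
        location = LOCATION_WORDS.get(place_word.strip(), place_word.strip())
        return text.strip() or None, location

    print(parse_message_body("Please call me. @bed"))   # ('Please call me.', 'bedroom')
    print(parse_message_body("@bed"))                   # (None, 'bedroom')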
 Next, a specific example of the third method will be described. By sending from the wife's user terminal 3 an e-mail whose body contains the message "Please call me. @bed" to the user who is the husband, the robot 2 can be made to read the message "Please call me." aloud to the husband in the bedroom. As in the second method, it is assumed that there is a message description rule under which a location expression designating a search location is written after "@". The robot 2 reads aloud the wording before the "@" and does not read the location expression. In this example, it is assumed that the husband's e-mail address has been set in advance as the husband's user information.
 When the wife sets the husband's e-mail address as the destination and the wife's user terminal 3 sends an e-mail whose body contains the message "Please call me. @bed", the autonomous behavior robot 1 receives this e-mail.
 The autonomous behavior robot 1 determines from the destination e-mail address that the notification target person is the husband, and identifies the search location "bedroom" corresponding to the location expression "@bed" in the body of the e-mail. The robot 2 then moves to the search location "bedroom" and searches for the husband. When the robot 2 finds the husband, it reads aloud the wording of the e-mail body excluding the location expression "@bed", namely "Please call me."
 Thus, in the third method, the notification target person is identified from the e-mail address, and the search location is identified from the body of the e-mail. The body of the e-mail is then read aloud as the notification operation.
 Finally, a specific example of the fourth method will be described. By sending an e-mail from the wife's user terminal 3 to the user who is the husband, the robot 2 can be made to perform the predetermined notification operation "wake-up operation" for the husband at the predetermined search location "bedroom". In this example, the robot 2 does not refer to the body of the e-mail when performing the notification operation, so the body of the e-mail is arbitrary and may even be empty. It is assumed that the husband's e-mail address, the husband's search location "bedroom", and the notification operation "wake-up operation" for the husband have been set in advance as the husband's user information.
 When the wife's user terminal 3 sends an e-mail with the husband's e-mail address set as the destination, the autonomous behavior robot 1 receives this e-mail.
 The autonomous behavior robot 1 determines from the destination e-mail address that the notification target person is the husband. The autonomous behavior robot 1 further identifies the husband's search location "bedroom" and the notification operation "wake-up operation" for the husband based on the husband's user information. The robot 2 then moves to the search location "bedroom" and searches for the husband. When the robot 2 finds the husband, it performs the notification operation "wake-up operation" for him.
 Thus, in the fourth method, the notification target person is identified from the e-mail address, and the search location of the notification target person and the notification operation for the notification target person are identified from the user information of the notification target person.
 The message transmission method is not limited; a message may be transmitted by a method other than e-mail, for example by a message exchange application.
 Next, the notification operation by message reception will be described in detail. In the following, the first to fourth methods are first described comprehensively, and the features of each method are then addressed.
 The user information is set by, for example, an application on the user terminal 3. The set user information is transmitted to the data providing device 10 by a user information transmission unit (not shown) of the application on the user terminal 3.
 The data providing device 10 has a user information reception unit (not shown), a user information storage unit (not shown), a message reception unit (not shown), and a notification target person identification unit (not shown). The user information received by the user information reception unit is stored in the user information storage unit. The user information storage unit stores, in association with each user, user information such as user identification information for message communication, a search location, and the type of notification operation. The user identification information for message communication is, for example, an e-mail address or a message exchange application ID. The user information storage unit may also store, as user information, information about the user such as the user's name, information indicating the user's physical characteristics, belongings, clothing, and the degree of intimacy with the robot 2. The information indicating the user's physical characteristics is, for example, information for recognizing the notification target person's face, information for recognizing the notification target person's fingerprint, or information for recognizing the notification target person's build. The robot 2 may instead have the user information reception unit, the user information storage unit, the message reception unit, and the notification target person identification unit. In that case, the user information transmission unit of the application on the user terminal 3 may transmit the user information to the robot 2.
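 One record in the user information storage unit could be organized as follows. This is a minimal sketch; the field names, the example e-mail address, and the keying by address are assumptions made for illustration.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class UserInfo:
        name: str
        message_id: str                            # e-mail address or message exchange application ID
        search_location: Optional[str] = None      # e.g. "bedroom"
        notification_action: Optional[str] = None  # e.g. "wake-up operation"
        face_features: Optional[bytes] = None      # data used to recognize the person's face
        intimacy: int = 0                          # degree of intimacy with the robot 2

    user_info_store = {
        "husband@example.com": UserInfo(
            name="husband",
            message_id="husband@example.com",
            search_location="bedroom",
            notification_action="wake-up operation",
        ),
    }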
 FIG. 16 is a flowchart showing the notification operation by message reception. When the message reception unit receives a message addressed to a user of the robot 2 (step S71), the notification target person identification unit refers to the user information storage unit and identifies, as the notification target person, the user corresponding to the user identification information for message communication set as the destination of the message (step S72).
 The search unit 25 refers to the user information storage unit and identifies the search location corresponding to the user who is the notification target person (step S73). The search unit 25 may instead identify the search location from a location expression included in the received message. For example, when the location expression "@bed" is included in the message, the search unit 25 may identify "bedroom" as the search location.
 In step S74, as described for step S22 of FIG. 5, the search unit 25 calculates a movement route to the search location. Then, in step S75, as described for step S23 of FIG. 5, the search unit 25 starts searching for the notification target person, and the movement control unit 23 controls the movement mechanism 29 to start moving to the search location.
 In step S76, as described for step S24 of FIG. 5, the movement control unit 23 determines that the robot 2 has reached the search location, and in step S77, as described for step S25 of FIG. 5, the search unit 25 finds the notification target person. In step S78, the notification operation execution unit 26 then performs the notification operation. The notification operation in this example is, for example, reading the received message aloud. The type of notification operation may be specified in the received message. The notification operation execution unit 26 may also refer to the user information storage unit and identify the type of notification operation corresponding to the user who is the notification target person.
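 As a rough end-to-end outline of the FIG. 16 flow, the sketch below reduces the user information storage unit to a dictionary and movement and person detection to print statements; every name in it is illustrative and assumed, not taken from the disclosure.

    USER_STORE = {
        "husband@example.com": {"name": "husband",
                                "search_location": "bedroom",
                                "notification_action": "wake-up operation"},
    }

    def handle_incoming_message(destination, body):
        user = USER_STORE.get(destination)                        # step S72: identify the target
        if user is None:
            return                                                # not a registered user
        location = user["search_location"]                        # step S73: resolve the search location
        print(f"calculating a route to {location}")               # step S74
        print(f"moving to {location} and searching for {user['name']}")  # steps S75 to S77
        if body.strip():                                          # step S78: notify
            print(f"reading aloud: {body}")
        else:
            print(f"performing '{user['notification_action']}'")

    handle_incoming_message("husband@example.com", "Please call me.")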
 In the case of the first method, in step S73 the search unit 25 refers to the user information storage unit and identifies the search location corresponding to the user who is the notification target person, and in step S78 the notification operation execution unit 26 reads the received message aloud.
 In the case of the second method, in step S73 the search unit 25 identifies the search location from the location expression included in the received message, and in step S78 the notification operation execution unit 26 refers to the user information storage unit and identifies the type of notification operation corresponding to the user who is the notification target person.
 In the case of the third method, in step S73 the search unit 25 identifies the search location from the location expression included in the received message, and in step S78 the notification operation execution unit 26 reads the received message aloud.
 In the case of the fourth method, in step S73 the search unit 25 refers to the user information storage unit and identifies the search location corresponding to the user who is the notification target person, and in step S78 the notification operation execution unit 26 refers to the user information storage unit and identifies the type of notification operation corresponding to the user who is the notification target person.
 The first to fourth methods may be combined arbitrarily. For example, the first and second methods may be combined; the first and third; the first and fourth; the second and third; the second and fourth; the third and fourth; the first, second, and third; the first, second, and fourth; the first, third, and fourth; the second, third, and fourth; or all of the first to fourth methods.
 In connection with the combinations of methods described above, in step S73 the search unit 25 may identify the search location from the location expression when the received message includes a location expression, and may refer to the user information storage unit and identify the search location corresponding to the user who is the notification target person when the received message does not include a location expression. Likewise, in step S78 the notification operation execution unit 26 may refer to the user information storage unit and identify the type of notification operation corresponding to the user who is the notification target person when the content of the received message is empty, and may read the received message aloud when the content of the received message is not empty.
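 These two fallback rules can be expressed as small selector functions. The sketch below is illustrative only; in particular, returning the raw location word rather than a mapped room name is an assumption.

    def resolve_search_location(body, stored_location):
        if "@" in body:                                  # the message includes a location expression
            return body.split("@", 1)[1].strip() or stored_location
        return stored_location                           # fall back to the stored user information

    def resolve_notification(body, stored_action):
        text = body.split("@", 1)[0].strip()
        if text:                                         # non-empty message: read it aloud
            return ("read_aloud", text)
        return ("perform_action", stored_action)         # empty message: use the stored action

    print(resolve_search_location("Please call me. @bed", "living room"))  # 'bed'
    print(resolve_notification("", "wake-up operation"))                   # ('perform_action', 'wake-up operation')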

Claims (25)

  1.  A robot comprising:
      an execution information acquisition unit that acquires execution information for performing a notification operation of notifying a notification target person of information to be notified;
      a search unit that searches for the notification target person based on the execution information acquired by the execution information acquisition unit; and
      a notification operation execution unit that performs the notification operation, based on the execution information, for the notification target person found by the search unit.
  2.  The robot according to claim 1, wherein the execution information acquisition unit acquires, as the execution information, location information relating to a location designated by a user.
  3.  The robot according to claim 2, wherein the execution information acquisition unit acquires, from a user terminal operated by the user, the location information designated by the user operating a map displayed on the user terminal.
  4.  The robot according to any one of claims 1 to 3, wherein the search unit searches for the notification target person further based on a captured image of the surrounding space.
  5.  The robot according to claim 4, wherein the search unit searches for the notification target person by recognizing a person included in the captured image.
  6.  The robot according to any one of claims 1 to 5, further comprising a movement control unit that controls a movement mechanism, wherein
      the search unit calculates a movement route for the movement mechanism based on the execution information, and
      the movement control unit controls the movement mechanism based on the movement route calculated by the search unit.
  7.  The robot according to claim 6, wherein the search unit calculates the movement route further based on restriction information for restricting movement by the movement mechanism.
  8.  The robot according to claim 6 or 7, further comprising a marker recognition unit that recognizes a predetermined marker included in a captured image of the surrounding space, wherein
      the movement control unit controls the movement mechanism based on the marker recognized by the marker recognition unit.
  9.  The robot according to any one of claims 1 to 8, further comprising a state information acquisition unit that acquires state information relating to a state of the notification target person found by the search unit, wherein
      the notification operation execution unit changes the notification operation according to the state acquired by the state information acquisition unit.
  10.  The robot according to claim 9, wherein
      the state information acquisition unit acquires, as the state, whether the notification target person is sleeping or awake, and
      the notification operation execution unit performs, as the notification operation, a wake-up operation that wakes the notification target person when the state is sleeping, and a greeting operation for the notification target person when the state is awake.
  11.  The robot according to any one of claims 1 to 10, wherein
      the execution information acquisition unit acquires the execution information associated with the notification target person, and
      the notification operation execution unit performs the notification operation associated with the notification target person who has been found.
  12.  The robot according to any one of claims 1 to 11, wherein
      the execution information acquisition unit acquires, as the execution information, intimacy information indicating a degree of intimacy between the notification target person and the robot, and
      the notification operation execution unit performs the notification operation in cooperation with another robot when the degree of intimacy in the intimacy information acquired by the execution information acquisition unit satisfies a predetermined intimacy condition.
  13.  The robot according to any one of claims 1 to 12, wherein
      the execution information acquisition unit acquires the execution information associated with a plurality of notification target persons, and
      the notification operation execution unit performs, in parallel, the notification operations respectively associated with the plurality of notification target persons.
  14.  The robot according to any one of claims 1 to 13, wherein
      the execution information acquisition unit acquires, as the execution information, time information relating to a time at which the notification operation is to be performed, and
      the notification operation execution unit performs the notification operation based on the time information.
  15.  The robot according to any one of claims 1 to 14, further comprising a speech recognition unit that recognizes speech input to a microphone and converts the speech into language data, wherein
      the execution information acquisition unit specifies the execution information from the converted language data.
  16.  The robot according to claim 15, wherein, when the acquired execution information does not include time information relating to a time at which the notification operation is to be performed, the notification operation execution unit performs the notification operation at the point when a predetermined time has elapsed from the point at which the execution information was acquired.
  17.  The robot according to claim 15 or 16, wherein, when the execution information does not include location information, the execution information acquisition unit uses the location of the microphone to which the speech was input as the location information.
  18.  The robot according to any one of claims 1 to 17, further comprising:
      a photographing unit that photographs the notification target person after the notification operation has been performed; and
      a transmission unit that transmits the captured image data of the notification target person to a user terminal.
  19.  The robot according to claim 18, wherein the photographing unit changes a condition under which the notification target person is photographed according to the degree of intimacy between the notification target person and the robot.
  20.  The robot according to any one of claims 1 to 19, further comprising:
      a state information acquisition unit that acquires state information relating to the state of the notification target person after the notification operation has been performed; and
      a transmission unit that transmits the acquired state information to a user terminal.
  21.  A robot comprising:
      a message reception unit that receives a message addressed to a user of the robot;
      a notification target person identification unit that identifies a notification target person from the destination of the received message;
      a search unit that searches for the notification target person at a location designated by the message or at a location identified in correspondence with the notification target person; and
      a notification unit that, for the notification target person found by the search unit, reads the message aloud or performs a notification operation instructed by the message.
  22.  A robot control method comprising, in a robot:
      an execution information acquisition step of acquiring execution information for performing a notification operation of notifying a notification target person of information to be notified;
      a search step of searching for the notification target person based on the execution information acquired in the execution information acquisition step; and
      a notification operation execution step of performing the notification operation, based on the execution information, for the notification target person found in the search step.
  23.  A robot control program for causing a robot to execute:
      an execution information acquisition process of acquiring execution information for performing a notification operation of notifying a notification target person of information to be notified;
      a search process of searching for the notification target person based on the execution information acquired in the execution information acquisition process; and
      a notification operation execution process of performing the notification operation, based on the execution information, for the notification target person found in the search process.
  24.  A robot control method comprising, in a robot:
      a message reception step of receiving a message addressed to a user of the robot;
      a notification target person identification step of identifying a notification target person from the destination of the received message;
      a search step of searching for the notification target person at a location designated by the message or at a location identified in correspondence with the notification target person; and
      a notification step of, for the notification target person found in the search step, reading the message aloud or performing a notification operation instructed by the message.
  25.  A robot control program for causing a robot to execute:
      a message reception process of receiving a message addressed to a user of the robot;
      a notification target person identification process of identifying a notification target person from the destination of the received message;
      a search process of searching for the notification target person at a location designated by the message or at a location identified in correspondence with the notification target person; and
      a notification process of, for the notification target person found in the search process, reading the message aloud or performing a notification operation instructed by the message.
PCT/JP2019/028975 2018-07-26 2019-07-24 Robot, method for controlling robot, and control program WO2020022371A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020532436A JPWO2020022371A1 (en) 2018-07-26 2019-07-24 Robots and their control methods and programs

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018140781 2018-07-26
JP2018-140781 2018-07-26

Publications (1)

Publication Number Publication Date
WO2020022371A1 true WO2020022371A1 (en) 2020-01-30

Family

ID=69182246

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/028975 WO2020022371A1 (en) 2018-07-26 2019-07-24 Robot, method for controlling robot, and control program

Country Status (2)

Country Link
JP (1) JPWO2020022371A1 (en)
WO (1) WO2020022371A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003330539A (en) * 2002-05-13 2003-11-21 Sanyo Electric Co Ltd Autonomous moving robot and autonomous moving method thereof
JP2016120591A (en) * 2005-09-30 2016-07-07 アイロボット コーポレイション Locomotion robot
JP2009266200A (en) * 2008-04-24 2009-11-12 Korea Advanced Inst Of Sci Technol Apparatus and method for forming favorability rating of robot
JP2017501473A (en) * 2013-12-19 2017-01-12 アクチエボラゲット エレクトロルックス Robot vacuum cleaner

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444761A (en) * 2020-02-21 2020-07-24 云知声智能科技股份有限公司 Method and device for improving wake-up rate
CN111444761B (en) * 2020-02-21 2023-05-30 云知声智能科技股份有限公司 Method and device for improving wake-up rate
JP7491229B2 (en) 2021-02-03 2024-05-28 トヨタ自動車株式会社 AUTONOMOUS MOBILITY SYSTEM, AUTONOMOUS MOBILITY METHOD, AND AUTONOMOUS MOBILITY PROGRAM
WO2023276187A1 (en) * 2021-06-30 2023-01-05 パナソニックIpマネジメント株式会社 Travel map creation device, travel map creation method, and program

Also Published As

Publication number Publication date
JPWO2020022371A1 (en) 2021-08-05

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 19840984; Country of ref document: EP; Kind code of ref document: A1
ENP Entry into the national phase
    Ref document number: 2020532436; Country of ref document: JP; Kind code of ref document: A
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 19840984; Country of ref document: EP; Kind code of ref document: A1