WO2019188697A1 - Autonomous action robot, data supply device, and data supply program - Google Patents

Autonomous action robot, data supply device, and data supply program Download PDF

Info

Publication number
WO2019188697A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
robot
spatial
visualization
visualization data
Application number
PCT/JP2019/011814
Other languages
French (fr)
Japanese (ja)
Inventor
要 林
博教 小川
秀哉 南地
Original Assignee
Groove X株式会社
Application filed by Groove X株式会社 filed Critical Groove X株式会社
Publication of WO2019188697A1 publication Critical patent/WO2019188697A1/en

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images

Definitions

  • the present invention relates to an autonomous behavior robot, a data providing apparatus, and a data providing program.
  • based on the photographed image, the robot recognizes elements related to the space, such as their shape and position (hereinafter referred to as "spatial elements"), for example a room, a wall, indoor furniture, a home appliance, a houseplant, or luggage.
  • the robot may erroneously recognize a spatial element. Due to such misrecognition, the robot may move differently from what a person expects.
  • the present invention has been made in view of the above circumstances, and one object thereof is, in one embodiment, to provide an autonomous behavior robot, a data providing apparatus, and a data providing program that allow a person to confirm the spatial elements recognized by the robot.
  • the autonomous behavior robot includes a moving mechanism, a photographing unit that photographs the surrounding space, a spatial data generation unit that generates spatial data in which the space is recognized based on the photographed image captured by the photographing unit, a visualization data generation unit that generates visualization data visualizing the spatial elements included in the space based on the generated spatial data, and a visualization data providing unit that provides the generated visualization data.
  • the autonomous behavior robot further includes an instruction acquisition unit that acquires designation of a region included in the provided visualization data, and the spatial data generation unit regenerates spatial data in which the space in the region related to the acquired designation is recognized.
  • the visualization data generation unit generates the visualization data so as to reflect the recognized features of a spatial element, based on the spatial element recognized from the captured image captured by the imaging unit.
  • the visualization data generation unit may generate the visualization data in which a color attribute is given to the spatial element based on color information of a captured image captured by the imaging unit.
  • the visualization data generation unit generates the visualization data in which a fixed spatial element is distinguished from a movable spatial element.
  • the visualization data generation unit generates the visualization data by distinguishing the fixed spatial element from the movable spatial element based on a temporal change of the spatial data.
  • the visualization data generation unit generates the visualization data obtained by visualizing the spatial data included in a predetermined range from a position moved by the moving mechanism.
  • the robot further includes a point cloud data generation unit that generates three-dimensional point cloud data based on a captured image captured by the imaging unit, and the spatial data generation unit generates the spatial data based on the generated point cloud data.
  • the spatial data generation unit generates the spatial data by specifying an outline of the spatial element in the point cloud data, and the visualization data generation unit generates the visualization data in which the spatial element is visualized based on the specified outline.
  • the data providing apparatus generates visualization data in which the spatial elements included in the space are visualized based on the spatial data obtained by the robot recognizing the space.
  • the data providing apparatus includes a designation acquisition unit that acquires designation of an area included in the provided visualization data, and an instruction unit that instructs the robot to recognize the space in the area related to the acquired designation.
  • the data providing program realizes a visualization data generation function that generates visualization data visualizing the spatial elements included in the space based on the spatial data obtained by the robot recognizing the space, and a visualization data provision function that provides the generated visualization data.
  • an autonomous behavior robot, a data providing apparatus, and a data providing program that allow a person to confirm a spatial element recognized by the robot can be provided.
  • a robot has various sensors, such as a camera and a microphone, and recognizes the surrounding conditions by comprehensively judging the information obtained from these sensors.
  • the moving route may not be appropriate because an object cannot be recognized correctly. Due to misrecognition, for example, even if a person thinks that there is a sufficiently large space, the robot may recognize that there is an obstacle and that it can move only within a narrow range.
  • the autonomous behavior robot of the present invention can visualize its recognition state, provide it to the person, and perform recognition processing again on a point indicated by the person.
  • FIG. 1 is a block diagram illustrating an example of a software configuration of the autonomous behavior robot 1 according to the embodiment.
  • the autonomous behavior type robot 1 includes a data providing device 10 and a robot 2.
  • the data providing device 10 and the robot 2 are connected by communication and function as an autonomous behavior type robot 1 (system for causing the robot 2 to autonomously act).
  • the robot 2 is a mobile robot having a photographing unit 21 and a moving mechanism 29.
  • the data providing apparatus 10 has functions of a first communication control unit 11, a point cloud data generation unit 12, a spatial data generation unit 13, a visualization data generation unit 14, a photographing target recognition unit 15, and a second communication control unit 16.
  • the first communication control unit 11 has functions of a captured image acquisition unit 111, a spatial data providing unit 112, and an instruction unit 113.
  • the second communication control unit 16 has the functions of the visualization data providing unit 161 and the designation acquiring unit 162.
  • Each function of the data providing device 10 of the autonomous behavior robot 1 in the present embodiment will be described as a functional module realized by a data providing program (software) that controls the data providing device 10.
  • the data providing apparatus 10 is an apparatus that can execute a part of the functions of the autonomous behavior robot 1.
  • the data providing apparatus 10 is installed in a place physically close to the robot 2, communicates with the robot 2, and is, for example, an edge server that distributes the processing load.
  • the autonomous behavior robot 1 will be described as being configured by the data providing device 10 and the robot 2, but the functions of the data providing device 10 may be included in the functions of the robot 2.
  • the robot 2 is a robot that can move based on the spatial data, and is a mode of the robot in which the movement range is determined based on the spatial data.
  • the data providing apparatus 10 may be configured with one casing or may be configured with a plurality of casings.
  • the first communication control unit 11 controls a communication function with the robot 2.
  • the communication method with the robot 2 is arbitrary, and for example, wireless LAN (Local Area Network), Bluetooth (registered trademark), near field communication such as infrared communication, wired communication, or the like can be used.
  • Each function of the captured image acquisition unit 111, the spatial data providing unit 112, and the instruction unit 113 included in the first communication control unit 11 communicates with the robot 2 using a communication function controlled by the first communication control unit 11.
  • the captured image acquisition unit 111 acquires a captured image captured by the imaging unit 21 of the robot 2.
  • the imaging unit 21 is provided in the robot 2 and can change the imaging range as the robot 2 moves.
  • the photographing unit 21 can be composed of one or a plurality of cameras.
  • the photographing unit 21 can three-dimensionally photograph a spatial element that is a photographing target from different photographing angles.
  • the imaging unit 21 is a video camera using an image sensor such as a CCD (Charge-Coupled Device) sensor or a CMOS (Complementary Metal Oxide Semiconductor) sensor.
  • the shape of the spatial element can be measured by photographing the spatial element with two cameras (stereo cameras).
  • the photographing unit 21 may be a camera using ToF (Time of Flight) technology.
  • the shape of the spatial element can be measured by irradiating the spatial element with modulated infrared light and measuring the distance to the spatial element.
  • the photographing unit 21 may be a camera using a structured light.
  • structured light is a technique of projecting light in a stripe or lattice pattern onto a spatial element.
  • the imaging unit 21 can measure the shape of the spatial element from the distortion of the projected pattern by imaging the spatial element from an angle different from that of the structured-light projection.
  • the imaging unit 21 may be any one of these cameras or a combination of two or more.
  • the photographing unit 21 is attached to the robot 2 and moves in accordance with the movement of the robot 2.
  • the photographing unit 21 may be installed separately from the robot 2.
  • the captured image captured by the capturing unit 21 is provided to the captured image acquisition unit 111 in a communication method corresponding to the first communication control unit 11.
  • the captured image is temporarily stored in the storage unit of the robot 2, and the captured image acquisition unit 111 acquires the captured image temporarily stored in real time or at a predetermined communication interval.
  • the spatial data providing unit 112 provides the spatial data generated by the spatial data generating unit 13 to the robot 2.
  • Spatial data is data representing the spatial elements that the robot has recognized in the space where the robot 2 exists.
  • Spatial data is expressed as, for example, a combination of map information indicating a room layout and spatial element information indicating the shape and arrangement of a spatial element (object) such as furniture.
  • the robot 2 can move within a range determined by the spatial data. That is, the spatial data functions as a map for determining the movable range in the robot 2.
  • the robot 2 is provided with spatial data from the spatial data providing unit 112.
  • the spatial data can include position data of spatial elements such as walls, furniture, electrical appliances, and steps that the robot 2 cannot move.
  • the robot 2 can determine whether or not the robot 2 can move based on the provided spatial data. Further, the robot 2 may be able to recognize whether or not an ungenerated range is included in the spatial data. Whether or not an ungenerated range is included can be determined, for example, based on whether or not a space having no spatial element is included in part of the spatial data.
  • the instruction unit 113 instructs the robot 2 to shoot based on the spatial data generated by the spatial data generation unit 13. Since the spatial data generation unit 13 creates spatial data based on the captured images acquired by the captured image acquisition unit 111, when creating indoor spatial data, for example, the spatial data may include an ungenerated portion for which no image has been captured. Further, if a captured image is unclear, noise may enter the created spatial data, and the spatial data may include an inaccurate portion. When the spatial data contains an ungenerated portion, the instruction unit 113 may issue an imaging instruction for the ungenerated portion. When the spatial data contains an inaccurate portion, the instruction unit 113 may instruct imaging of the inaccurate portion. The instruction unit 113 may issue such imaging instructions voluntarily based on the spatial data.
  • the instruction unit 113 may instruct photographing based on an explicit instruction from a user who has confirmed visualization data (described later) generated based on the spatial data.
  • the user can cause the robot 2 to recognize the space and generate the spatial data by designating an area included in the visualization data and instructing the robot 2 to perform photographing.
  • the point cloud data generation unit 12 generates three-dimensional point cloud data of spatial elements based on the captured image acquired by the captured image acquisition unit 111.
  • the point cloud data generation unit 12 generates point cloud data by converting a spatial element included in the captured image into a three-dimensional set of points in a predetermined space.
  • the spatial elements are room walls, steps, doors, furniture placed in the room, home appliances, luggage, houseplants, and the like. Since the point cloud data generation unit 12 generates point cloud data based on the captured image of the spatial element, the point cloud data represents the shape of the surface of the captured spatial element.
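  • As a hedged illustration of this step (not the patent's actual implementation), the following Python sketch converts a depth image and assumed camera intrinsics, such as a ToF or stereo camera of the imaging unit 21 might provide, into a three-dimensional point cloud; the intrinsic values are placeholders.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert a depth image (meters) to an N x 3 point cloud in camera coordinates.

    depth: 2D numpy array of per-pixel distances.
    fx, fy, cx, cy: camera intrinsics (focal lengths, principal point) -- placeholder values.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Example with a dummy depth map
cloud = depth_to_point_cloud(np.full((480, 640), 2.0), fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)
```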
  • the photographed image is generated by the photographing unit 21 of the robot 2 photographing at a predetermined photographing angle at a predetermined photographing position.
  • the spatial data generation unit 13 generates spatial data that determines the movable range of the robot 2 based on the point cloud data of the spatial elements generated by the point cloud data generation unit 12. Since the spatial data is generated based on the point cloud data in the space, the spatial element included in the spatial data also has three-dimensional coordinate information.
  • the coordinate information may include point position, length (including height), area, or volume information.
  • the robot 2 can determine its movable range based on the position information of the spatial elements included in the generated spatial data. For example, when the robot 2 has a moving mechanism 29 that moves horizontally on the floor surface, the robot 2 excludes from the movable range any step, a spatial element in the spatial data, whose height difference from the floor surface is a predetermined height or more (for example, 1 cm or more).
  • the robot 2 determines, in the spatial data, a range in which the space between spatial elements such as a wall and furniture is a predetermined width or more (for example, 40 cm or more) as the movable range, taking into account a clearance relative to its own width.
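  • A rough sketch of the movable-range logic described above is shown below; the 1 cm step and 40 cm clearance thresholds are the example values from the text, while the grid representation and cell size are assumptions, not the patent's data format.

```python
import numpy as np

def movable_range(height_grid, cell_size=0.01, max_step=0.01, min_clearance=0.40):
    """Estimate the movable range on a 2D height grid (meters above the floor).

    max_step: steps higher than this (e.g. 1 cm) are treated as obstacles.
    min_clearance: required free width (e.g. 40 cm) between obstacles.
    """
    free = height_grid <= max_step                     # cells the robot could stand on
    half = int(round(min_clearance / cell_size)) // 2  # half clearance window, in cells
    movable = np.zeros_like(free)
    h, w = free.shape
    for r in range(h):
        for c in range(w):
            r0, r1 = max(0, r - half), min(h, r + half + 1)
            c0, c1 = max(0, c - half), min(w, c + half + 1)
            # conservative: require the whole clearance window around the cell to be free
            movable[r, c] = free[r0:r1, c0:c1].all()
    return movable

grid = np.zeros((100, 100))
grid[40:60, 40:42] = 0.05   # a 5 cm step acts as an obstacle
print(movable_range(grid).sum(), "movable cells")
```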
  • the spatial data generation unit 13 may set attribute information for a predetermined area in the space.
  • the attribute information is information that defines the movement condition of the robot 2 for a predetermined area.
  • the movement condition is, for example, a condition that defines a clearance from a space element that the robot 2 can move.
  • attribute information in which the clearance for a predetermined area is 5 cm or more can be set.
  • information for restricting the movement of the robot may be set.
  • the movement restriction is, for example, movement speed restriction or entry prohibition.
  • attribute information that reduces the moving speed of the robot 2 may be set in an area where the clearance is small or an area where people exist.
  • the movement condition set in the attribute information may be determined by the floor material of the area.
  • the attribute information may be set to change the operation (traveling speed or traveling means) of the moving mechanism 29 when the floor is a cushion floor, flooring, tatami mat, or carpet.
  • the above conditions may be set at the charging spot where the robot 2 can move and charge, the step where the movement of the robot 2 is unstable and the movement is restricted, or the end of the carpet.
  • the area in which the attribute information is set may be understood by the user, for example, by changing the display method in the visualization data described later.
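  • A minimal sketch of how the per-area attribute information described above might be represented follows; the field names and values are illustrative assumptions rather than the patent's data format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AreaAttribute:
    """Movement conditions the spatial data generation unit may attach to an area."""
    area_id: str
    min_clearance_m: float = 0.05           # e.g. keep at least 5 cm from spatial elements
    max_speed_mps: Optional[float] = None   # reduced speed near people or in tight areas
    entry_prohibited: bool = False          # e.g. around an unstable step
    floor_material: Optional[str] = None    # "flooring", "tatami", "carpet", ...

attributes = [
    AreaAttribute("kitchen_edge", min_clearance_m=0.05, max_speed_mps=0.2),
    AreaAttribute("step_by_entrance", entry_prohibited=True),
    AreaAttribute("living_carpet", floor_material="carpet", max_speed_mps=0.3),
]
print(attributes[0])
```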
  • the spatial data generation unit 13 performs, for example, a Hough transform on the point cloud data generated by the point cloud data generation unit 12 to extract lines, curves, and other figures shared by the points, and generates spatial data in which the contours of the spatial elements are expressed by the extracted figures.
  • the Hough transform is a coordinate transformation method that, treating the points of the point cloud data as feature points, extracts the figure passing through the largest number of feature points. Since the point cloud data expresses the shape of a spatial element such as furniture placed in a room as a cloud of points, it may be difficult for the user to determine what spatial element the point cloud data represents (for example, to recognize tables, chairs, walls, and so on).
  • the spatial data generation unit 13 can express the outline of furniture or the like by performing the Hough transform on the point cloud data, the user can easily determine the spatial elements.
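  • As a hedged example of this Hough-transform step, the sketch below assumes the point cloud has first been projected onto a 2D occupancy image and uses OpenCV's HoughLinesP as a stand-in for whatever transform the actual implementation performs; the parameters are placeholders.

```python
import cv2
import numpy as np

def extract_contour_lines(points_xy, resolution=0.02):
    """Project 2D (x, y) points to a binary image and extract line segments via Hough transform."""
    pts = np.asarray(points_xy)
    origin = pts.min(axis=0)
    ij = ((pts - origin) / resolution).astype(int)
    img = np.zeros((ij[:, 1].max() + 1, ij[:, 0].max() + 1), dtype=np.uint8)
    img[ij[:, 1], ij[:, 0]] = 255
    # Probabilistic Hough transform: each returned segment approximates a wall or furniture edge
    segments = cv2.HoughLinesP(img, rho=1, theta=np.pi / 180,
                               threshold=30, minLineLength=20, maxLineGap=5)
    return [] if segments is None else segments.reshape(-1, 4)

# Points along two perpendicular "walls"
wall = np.concatenate([np.stack([np.linspace(0, 3, 150), np.zeros(150)], axis=1),
                       np.stack([np.zeros(150), np.linspace(0, 2, 150)], axis=1)])
print(extract_contour_lines(wall))
```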
  • the spatial data generation unit 13 may generate spatial data by converting the point cloud data generated by the point cloud data generation unit 12 into a basic shape associated with a spatial element (for example, a table, a chair, or a wall) recognized by image recognition.
  • when image recognition determines that a spatial element such as a table is indeed a table, the shape of the table can be predicted accurately from only part of the point cloud data of that spatial element (for example, the point cloud data of the table viewed from the front).
  • the spatial data generation unit 13 can generate spatial data that accurately grasps the spatial elements by combining point cloud data and image recognition.
  • the basic shape may be a predetermined shape (for example, a square, a circle, a triangle, a rectangular parallelepiped, a cone, or the like) associated with a spatial element.
  • the spatial data generation unit 13 may generate spatial data by deforming the basic shape by enlarging or reducing the basic shape.
  • the shape of a spatial element may be expressed by combining a plurality of basic shape objects or a plurality of types of basic shape objects.
  • the visualization data generation unit 14 described later may generate visualization data that visualizes the features of a spatial element by synthesizing a texture associated with the spatial element onto the spatial data after processing such as conversion to a basic shape or deformation.
  • the spatial data generation unit 13 generates spatial data based on point cloud data included in a predetermined range from the position where the robot 2 has moved.
  • the predetermined range from the position to which the robot 2 has moved includes the positions to which the robot 2 has actually moved, and may be, for example, a range within a distance such as 30 cm from those positions. Since the point cloud data is generated based on the captured images captured by the capturing unit 21 of the robot 2, a captured image may include a spatial element at a position far from the robot 2. In a range far from the imaging unit 21 through which the robot 2 has not moved, there may be portions that were not captured or obstacles that were not captured.
  • the spatial data generation unit 13 may generate spatial data that does not include a spatial element with low accuracy or a distorted spatial element by ignoring feature points that are greatly separated by a predetermined distance or more.
  • the spatial data generation unit 13 deletes point cloud data outside a predetermined range from the position where the robot 2 has moved to generate spatial data, thereby preventing the occurrence of an enclave where no data actually exists.
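  • A small sketch of the distance-based filtering described above follows, assuming the positions the robot 2 has actually visited are known; the 30 cm radius is the example value from the text.

```python
import numpy as np

def filter_points_near_path(points, visited_positions, max_distance=0.30):
    """Keep only point cloud points within max_distance of a position the robot has visited.

    points: (N, 3) point cloud; visited_positions: (M, 2) floor positions the robot moved through.
    Distant, low-accuracy points (and the "enclaves" they would create) are dropped.
    """
    pts_xy = points[:, :2]
    # pairwise distances between each point and each visited position
    d = np.linalg.norm(pts_xy[:, None, :] - visited_positions[None, :, :], axis=-1)
    return points[d.min(axis=1) <= max_distance]

cloud = np.random.rand(1000, 3) * 5.0
path = np.array([[0.5, 0.5], [1.0, 0.5], [1.5, 0.5]])
print(filter_points_near_path(cloud, path).shape)
```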
  • the visualization data generation unit 14 generates visualization data that is visualized based on the spatial data generated by the spatial data generation unit 13 so that a person can intuitively determine the spatial elements included in the space.
  • the spatial data is data including the spatial elements recognized by the autonomous behavior robot 1, whereas the visualization data is data that allows the user to visually confirm the spatial data recognized by the autonomous behavior robot 1.
  • Spatial data may include misrecognized spatial elements. By visualizing the spatial data, it becomes easy for a person to confirm the recognition state (presence / absence of misrecognition) of the spatial element in the autonomous behavior robot 1.
  • Visualized data is data that can be displayed on the display device.
  • the visualization data is a so-called floor plan, and a spatial element recognized as a table, chair, sofa, or the like is included in an area surrounded by a spatial element recognized as a wall.
  • the visualization data generation unit 14 generates the shape of furniture or the like formed in the graphic extracted by the Hough transform as visualization data expressed by RGB data, for example.
  • the spatial data generation unit 13 generates visualization data in which the plane drawing method is changed based on the direction of the plane in three dimensions of the spatial element.
  • the direction of the three-dimensional plane of a spatial element is, for example, the normal direction of the plane formed by the figure generated by Hough transforming the point cloud data generated in the point cloud data generation unit 12.
  • the visualization data generation unit 14 generates visualization data in which the plane drawing method is changed according to the normal direction.
  • the drawing method is, for example, a hue attributed to a plane, a color attribute such as brightness or saturation, a pattern imparted to a plane, a texture, or the like.
  • the visualization data generation unit 14 when the normal line of the plane is the vertical direction (the plane is the horizontal direction), the visualization data generation unit 14 renders the plane with high brightness and draws in a bright color.
  • the visualization data generation unit 14 renders the plane with low brightness and draws in a dark color.
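  • A hedged sketch of deriving a drawing attribute from the plane normal, following the rule in the preceding paragraphs (a vertical normal, i.e. a horizontal plane, is drawn brighter); the exact brightness mapping is an assumption.

```python
import numpy as np

def plane_brightness(normal):
    """Map a plane's normal vector to a brightness value in [0, 1].

    A normal close to vertical (horizontal plane, e.g. a table top) yields high brightness;
    a normal close to horizontal (vertical plane, e.g. a wall side) yields low brightness.
    """
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    verticality = abs(n[2])          # 1.0 for a horizontal plane, 0.0 for a vertical one
    return 0.3 + 0.7 * verticality   # keep a minimum brightness so dark planes stay visible

print(plane_brightness([0, 0, 1]))   # horizontal plane -> 1.0 (bright)
print(plane_brightness([1, 0, 0]))   # vertical plane   -> 0.3 (dark)
```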
  • the visualization data may include coordinate information (referred to as “visualization coordinate information”) in the visualization data associated with the coordinate information of each spatial element included in the spatial data. Since the visualization coordinate information is associated with the coordinate information, the point in the visualization coordinate information corresponds to the point in the actual space, and the surface in the visualization coordinate information corresponds to the surface in the actual space. Therefore, when the user specifies the position of a certain point in the visualization data, the position of the point in the actual room corresponding to the point can be specified.
  • a conversion function for converting the coordinate system may be prepared so that the coordinate system in the visualization data and the coordinate system in the spatial data can be mutually converted.
  • the coordinate system in the visualization data and the coordinate system in the actual space may be mutually convertible.
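  • As an illustration of such a conversion function, a simple scale-and-offset mapping between spatial-data coordinates and visualization coordinates might look as follows; the scale and offset parameters are placeholders.

```python
import numpy as np

class CoordinateConverter:
    """Mutual conversion between spatial-data coordinates (meters) and visualization pixels."""

    def __init__(self, scale=100.0, offset=(400.0, 300.0)):
        self.scale = scale                  # pixels per meter (assumed)
        self.offset = np.asarray(offset)    # pixel position of the spatial origin (assumed)

    def space_to_view(self, xy):
        return np.asarray(xy) * self.scale + self.offset

    def view_to_space(self, px):
        return (np.asarray(px) - self.offset) / self.scale

conv = CoordinateConverter()
px = conv.space_to_view([1.2, -0.5])   # a point the user sees in the visualization data
print(conv.view_to_space(px))          # maps back to the point in the actual room
```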
  • the visualization data generation unit 14 generates visualization data as stereoscopic (3D (Dimensions)) data.
  • the visualization data generation unit 14 may generate the visualization data as planar (2D) data.
  • the visualization data generation unit 14 may generate the visualization data in 3D when the spatial data generation unit 13 generates sufficient data to generate the visualization data in 3D.
  • the visualization data generation unit 14 may generate the visualization data in 3D based on the 3D viewpoint position (viewpoint height, viewpoint elevation angle, etc.) designated by the user. By making it possible to specify the viewpoint position, the user can easily check the shape of furniture or the like.
  • the visualization data generation unit 14 may generate visualization data in which the wall or ceiling of the room is colored only for the back wall and the front wall or ceiling is transparent (not colored). By making the near wall transparent, the user can easily confirm the shape of the furniture or the like arranged at the end (inside the room) of the near wall.
  • the visualization data generation unit 14 generates visualization data to which a color attribute corresponding to the captured image acquired by the captured image acquisition unit 111 is added. For example, when the captured image includes woodgrain furniture and the color of the wood (for example, brown) is detected, the visualization data generation unit 14 gives a color approximate to the detected color to the extracted furniture figure. Generate visualization data. By assigning a color attribute according to the photographed image, the user can easily check the type of furniture or the like.
  • the visualization data generation unit 14 generates visualization data in which the drawing method between the fixed object that is fixed and the moving object that moves is changed.
  • the fixed object is, for example, a wall of a room, a step, furniture that is fixed, and the like.
  • the moving object is, for example, a chair, a trash can, furniture with casters, or the like.
  • the moving object may include a temporary object temporarily placed on the floor, such as a luggage or a bag.
  • the drawing method is, for example, a hue attributed to a plane, a color attribute such as brightness or saturation, a pattern imparted to a plane, a texture, or the like.
  • the classification into fixed, moving, or temporary objects can be identified from how long an object remains at a location.
  • the spatial data generation unit 13 identifies the classification of the fixed object, the moving object, or the temporary object based on the time-dependent change of the point cloud data generated by the point cloud data generation unit 12, and obtains the spatial data. Generate.
  • the spatial data generation unit 13 determines that a spatial element is a fixed object when the spatial element has not changed between the spatial data generated at a first time and the spatial data generated at a second time. Further, the spatial data generation unit 13 may determine that a spatial element is a moving object when the position of the spatial element changes between the two sets of spatial data.
  • the spatial data generation unit 13 may determine that the spatial element is a temporary object when the spatial element disappears or appears from the difference of the spatial data.
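  • A minimal sketch of this temporal-difference classification follows, assuming each spatial element can be matched between two snapshots of the spatial data by an identifier (the matching itself is outside the sketch); names and thresholds are illustrative.

```python
def classify_elements(elements_t1, elements_t2, moved_threshold=0.05):
    """Classify spatial elements by comparing two snapshots of spatial data.

    elements_t1 / elements_t2: dicts mapping element id -> (x, y) position.
    Returns a dict mapping element id -> "fixed", "moving", or "temporary".
    """
    classes = {}
    for eid, pos1 in elements_t1.items():
        if eid not in elements_t2:
            classes[eid] = "temporary"          # disappeared between snapshots
            continue
        dx = elements_t2[eid][0] - pos1[0]
        dy = elements_t2[eid][1] - pos1[1]
        moved = (dx * dx + dy * dy) ** 0.5 > moved_threshold
        classes[eid] = "moving" if moved else "fixed"
    for eid in elements_t2:
        if eid not in elements_t1:
            classes[eid] = "temporary"          # newly appeared object such as luggage
    return classes

print(classify_elements({"wall": (0, 0), "chair": (1, 1), "bag": (2, 2)},
                        {"wall": (0, 0), "chair": (1.5, 1)}))
```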
  • the visualization data generation unit 14 changes the drawing method based on the classification identified by the spatial data generation unit 13.
  • the drawing method change includes, for example, color coding, addition of hatching, addition of a predetermined mark, and the like.
  • the spatial data generation unit 13 may display a fixed object in black, a moving object in blue, or a temporary object in yellow.
  • the spatial data generation unit 13 generates spatial data by identifying a classification of a fixed object, a moving object, or a temporary object.
  • the visualization data generation unit 14 may generate visualization data in which the drawing method is changed based on the classification identified by the spatial data generation unit 13. Further, the spatial data generation unit 13 may generate visualization data obtained by changing the drawing method of the spatial element recognized by the image recognition.
  • the visualization data generation unit 14 can generate visualization data in a plurality of divided areas. For example, the visualization data generation unit 14 generates visualization data for each of the spaces partitioned by walls such as a living room, a bedroom, a dining room, and a hallway as one room. By generating visualization data for each room, for example, generation of spatial data or visualization data can be performed separately for each room, and generation of spatial data or the like is facilitated. In addition, it is possible to create spatial data or the like only for an area where the robot 2 may move.
  • the visualization data providing unit 161 provides visualization data that allows the user to select an area. For example, the visualization data providing unit 161 may enlarge the visualization data of the area selected by the user or provide detailed visualization data of the area selected by the user.
  • the imaging target recognition unit 15 recognizes the spatial element based on the captured image acquired by the captured image acquisition unit 111.
  • Spatial element recognition can be performed by using an image recognition engine that determines what a spatial element is based on, for example, image recognition results accumulated in machine learning.
  • image recognition of a spatial element can be performed based on, for example, the shape, color, or pattern of the spatial element.
  • the imaging target recognition unit 15 may be able to recognize a spatial element by using an image recognition service provided in a cloud server (not shown).
  • the visualization data generation unit 14 generates visualization data in which the drawing method is changed in accordance with the spatial element recognized by the imaging target recognition unit 15.
  • the visualization data generation unit 14 when the image-recognized space element is a sofa, the visualization data generation unit 14 generates visualization data in which a texture having a cloth texture is added to the space element.
  • for a spatial element recognized as a wall, for example, the visualization data generation unit 14 may generate visualization data to which a wallpaper color attribute (for example, white) is given.
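  • A hedged sketch of mapping a recognized spatial-element class to drawing attributes, following the examples above (sofa → cloth texture, wall → white wallpaper); the class names and attribute values are illustrative assumptions.

```python
# Illustrative mapping from an image-recognition label to drawing attributes
DRAWING_STYLES = {
    "sofa":  {"texture": "cloth",     "color": (150, 111, 51)},
    "wall":  {"texture": "wallpaper", "color": (255, 255, 255)},  # e.g. white wallpaper
    "table": {"texture": "woodgrain", "color": (139, 90, 43)},
}
DEFAULT_STYLE = {"texture": None, "color": (128, 128, 128)}

def style_for(label):
    """Return the drawing style for a recognized spatial element."""
    return DRAWING_STYLES.get(label, DEFAULT_STYLE)

print(style_for("sofa"))
print(style_for("plant"))  # an unrecognized class falls back to a neutral style
```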
  • the second communication control unit 16 controls communication with the user terminal 3 owned by the user.
  • the user terminal 3 is, for example, a smartphone, a tablet PC, a notebook PC, a desktop PC, or the like.
  • the communication method with the user terminal 3 is arbitrary, and for example, wireless LAN, Bluetooth (registered trademark), short-range wireless communication such as infrared communication, or wired communication can be used.
  • Each function of the visualization data providing unit 161 and the designation acquiring unit 162 included in the second communication control unit 16 communicates with the user terminal 3 using a communication function controlled by the second communication control unit 16.
  • the visualization data providing unit 161 provides the visualization data generated by the visualization data generation unit 14 to the user terminal 3.
  • the visualization data providing unit 161 is, for example, a Web server, and provides visualization data as a Web page to the browser of the user terminal 3.
  • the visualization data providing unit 161 may provide visualization data to a plurality of user terminals 3. By visually recognizing the visualization data displayed on the user terminal 3, the user can confirm the range in which the robot 2 can move as a 2D or 3D display. In the visualization data, the shape of furniture or the like is drawn by a predetermined drawing method. By operating the user terminal 3, the user can switch between 2D display and 3D display, zoom in or out of the visualization data, or move the viewpoint in 3D display, for example.
  • the user can visually check the visualization data displayed on the user terminal 3 and can confirm the generation state of the spatial data and the attribute information of the area.
  • the user can instruct the creation of the spatial data by designating the area in which the spatial data is not generated from the visualization data via the user terminal 3.
  • An area can be designated via the user terminal 3 to instruct the regeneration of the spatial data.
  • since the visualization coordinate information in the visualization data is associated with the coordinate information in the spatial data, the area in the visualization data that the user designates for regeneration can be uniquely identified as an area in the spatial data.
  • the visualization data generating unit 14 regenerates the visualization data and provides it from the visualization data providing unit 161.
  • the visualization data can be regenerated based on the spatial data by converting the spatial data into corresponding visualization data according to a specified rule.
  • if the generation state of the spatial data has not changed, the regenerated visualization data may, for example, still contain a misrecognized spatial element.
  • the user may instruct the generation of spatial data by giving an instruction to change the operation parameter of the robot 2 via the user terminal 3.
  • the operation parameters are, for example, shooting conditions (exposure amount, shutter speed, etc.) in the shooting unit 21 in the robot 2, sensitivity of a sensor (not shown), clearance conditions when allowing the robot 2 to move, and the like.
  • the operation parameter may be included in the spatial data as area attribute information.
  • the visualization data generation unit 14 generates visualization data including a display of a button for instructing creation of spatial data (including “re-creation”), for example.
  • the user terminal 3 can transmit an instruction to create spatial data to the autonomous behavior robot 1 by the user operating the displayed button.
  • the designation acquisition unit 162 acquires the spatial data creation instruction transmitted from the user terminal 3.
  • the designation obtaining unit 162 obtains an instruction to create spatial data of an area designated by the user based on the visualization data provided by the visualization data providing unit 161.
  • the designation acquisition unit 162 may acquire an instruction to set (including change) the attribute information of the area.
  • the designation acquisition unit 162 acquires the position of the area and the direction when the robot approaches the area, that is, the direction to be photographed. Acquisition of a creation instruction can be executed, for example, in the operation of a Web page provided by the visualization data providing unit 161. Thereby, the user can grasp how the robot 2 recognizes the space, and can instruct the robot 2 to redo the recognition process according to the recognition state.
  • the instruction unit 113 instructs the robot 2 to shoot in the area where the creation of spatial data is instructed.
  • the shooting in the area instructed to create the spatial data may include, for example, shooting conditions such as the coordinate position of the robot 2 (shooting unit 21), the shooting direction of the shooting unit 21, and the resolution.
  • when the spatial data whose creation is instructed relates to an ungenerated region, the spatial data generation unit 13 adds the newly created spatial data to the existing spatial data; when the spatial data whose creation is instructed relates to re-creation, the spatial data generation unit 13 generates spatial data obtained by updating the existing spatial data.
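  • A simple sketch of this add-versus-update behavior follows, assuming spatial data is keyed by a region identifier; the data structures are assumptions, not the patent's format.

```python
def apply_new_spatial_data(existing, new_data, region_id):
    """Add newly created spatial data for an ungenerated region, or replace it on re-creation.

    existing: dict mapping region id -> spatial data for that region.
    new_data: spatial data created from the newly captured images.
    """
    if region_id not in existing:
        existing[region_id] = new_data   # ungenerated region: add to the existing spatial data
        return "added"
    existing[region_id] = new_data       # re-creation: update (replace) the existing data
    return "updated"

spatial = {"living_room": {"elements": ["wall", "table"]}}
print(apply_new_spatial_data(spatial, {"elements": ["wall", "door"]}, "hallway"))      # added
print(apply_new_spatial_data(spatial, {"elements": ["wall", "sofa"]}, "living_room"))  # updated
```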
  • FIG. 1 illustrates the case where the autonomous behavior robot 1 includes the data providing device 10 and the robot 2, but the function of the data providing device 10 is included in the function of the robot 2. It may be a thing.
  • the robot 2 may include all the functions of the data providing apparatus 10.
  • the data providing apparatus 10 may temporarily substitute a function when the robot 2 has insufficient processing capability.
  • "acquisition" may be performed actively by the acquiring entity or passively by the acquiring entity.
  • for example, the designation acquisition unit 162 may acquire the designation by receiving a spatial data creation instruction transmitted by the user from the user terminal 3, or may acquire the creation instruction by reading, from a storage area (not shown), an instruction that the user has stored in that storage area.
  • the first communication control unit 11, the point cloud data generation unit 12, the spatial data generation unit 13, the visualization data generation unit 14, the imaging target recognition unit 15, the second communication control unit 16, the captured image acquisition unit 111, the spatial data provision unit 112, the instruction unit 113, the visualization data provision unit 161, and the designation acquisition unit 162 of the autonomous behavior robot 1 are examples of the functions of the autonomous behavior robot 1 in the present embodiment.
  • the functions of the autonomous behavior robot 1 are not limited.
  • the autonomous behavior type robot 1 does not have to have all the functional units described above, and may have a part of functional units.
  • the autonomous behavior type robot 1 may have a function part other than the above.
  • the above-described functional units included in the autonomous behavior robot 1 have been described as being realized by software as described above. However, at least one of the functions of the autonomous behavior robot 1 may be realized by hardware.
  • any one of the above functions that the autonomous behavior robot 1 has may be implemented by dividing one function into a plurality of functions. Further, any two or more functions of the autonomous behavior robot 1 may be integrated into one function. That is, FIG. 1 represents the functions of the autonomous behavior robot 1 by function blocks, and does not indicate that each function is configured by a separate program file, for example.
  • the autonomous behavior robot 1 may be a device realized by a single housing or a system realized by a plurality of devices connected via a network or the like.
  • part or all of the functions of the autonomous behavior robot 1 may be realized by a virtual device such as a cloud service provided by a cloud computing system. That is, the autonomous behavior robot 1 may realize at least one of the above functions in another device.
  • the autonomous behavior robot 1 may be a general-purpose computer such as a tablet PC, or may be a dedicated device with limited functions such as a car navigation device.
  • the autonomous behavior type robot 1 may realize part or all of the functions in the robot 2 or the user terminal 3.
  • FIG. 2 is a block diagram illustrating an example of a hardware configuration of the autonomous behavior robot 1 according to the embodiment.
  • the autonomous behavior type robot 1 includes a CPU (Central Processing Unit) 101, a RAM (Random Access Memory) 102, a ROM (Read Only Memory) 103, a touch panel 104, and a communication I / F (Interface) 105.
  • the autonomous behavior type robot 1 is a device that executes the autonomous behavior type robot control program described in FIG.
  • the CPU 101 controls the autonomous behavior robot 1 by executing the autonomous behavior robot control program stored in the RAM 102 or the ROM 103.
  • the autonomous behavior robot control program is acquired from, for example, a recording medium that records the autonomous behavior robot control program or a program distribution server via a network, installed in the ROM 103, read from the CPU 101, and executed.
  • the touch panel 104 has an operation input function and a display function (operation display function).
  • the touch panel 104 enables operation input using a fingertip or a touch pen to the user of the autonomous behavior robot 1.
  • the autonomous behavior robot 1 in this embodiment will be described using a touch panel 104 having an operation display function.
  • the autonomous behavior robot 1 may instead separately have a display device having a display function and an operation input device having an operation input function.
  • the display screen of the touch panel 104 can be implemented as a display screen of the display device, and the operation of the touch panel 104 can be implemented as an operation of the operation input device.
  • the touch panel 104 may be realized in various forms such as a head mount type, a glasses type, and a wristwatch type display.
  • the communication I / F 105 is a communication I / F.
  • the communication I / F 105 executes short-range wireless communication such as a wireless LAN, a wired LAN, and infrared rays. Although only the communication I / F 105 is illustrated in FIG. 2 as the communication I / F, the autonomous behavior robot 1 may have each communication I / F in a plurality of communication methods.
  • FIG. 3 is a flowchart illustrating an example of the operation of the robot control program in the embodiment.
  • the execution subject of the operation is the autonomous behavior type robot 1, but each operation is executed in each function of the autonomous behavior type robot 1 described above.
  • the autonomous behavior robot 1 determines whether a captured image has been acquired (step S11). Whether or not a captured image has been acquired can be determined by whether or not the captured image acquisition unit 111 has acquired a captured image from the robot 2. This determination is made for each captured image that constitutes a unit of processing. For example, when the captured image is a moving image, the moving image is transmitted continuously from the robot 2, so the determination can be made based on whether the number of frames or the data amount of the acquired moving image has reached a predetermined value.
  • the captured image may be acquired passively, with the mobile robot taking the initiative in transmitting it, or actively, with the captured image acquisition unit 111 taking the initiative in retrieving it from the mobile robot.
  • the autonomous behavior robot 1 repeats the process of step S11 and waits for a captured image to be acquired.
  • when it is determined that a captured image has been acquired (step S11: YES), the autonomous behavior robot 1 generates point cloud data (step S12).
  • the point cloud data generation unit 12 detects, for example, points where the luminance change in the photographed image is larger than a predetermined luminance change threshold as feature points, and generates three-dimensional point cloud data from the detected feature points.
  • the feature point may be detected by performing a differentiation process on the captured image and detecting a portion where the gradation change is larger than a predetermined gradation change threshold.
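  • As a hedged illustration of detecting feature points where the luminance change exceeds a threshold, the sketch below uses a simple gradient-magnitude test; the actual detector used by the robot is not specified in the text.

```python
import numpy as np

def detect_feature_points(gray, threshold=40.0):
    """Return (row, col) coordinates where the luminance gradient exceeds a threshold.

    gray: 2D array of luminance values (0-255).
    """
    gy, gx = np.gradient(gray.astype(float))   # differentiation of the captured image
    magnitude = np.hypot(gx, gy)
    rows, cols = np.nonzero(magnitude > threshold)
    return np.stack([rows, cols], axis=1)

img = np.zeros((100, 100))
img[:, 50:] = 200.0                            # a vertical edge produces feature points
print(len(detect_feature_points(img)), "feature points")
```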
  • Whether or not a captured image is acquired in step S11 can be determined based on whether or not captured images captured from a plurality of directions have been acquired.
  • the autonomous behavior type robot 1 After executing the process of step S12, the autonomous behavior type robot 1 generates spatial data (step S13).
  • the generation of the spatial data can be executed by the spatial data generation unit 13 by, for example, Hough transforming the point cloud data.
  • the autonomous behavior robot 1 After executing the process of step S13, the autonomous behavior robot 1 provides the generated spatial data to the robot 2.
  • the spatial data may be provided to the robot 2 sequentially as it is generated, as shown in FIG. 3, or may be provided asynchronously with the processing shown in steps S11 to S18.
  • the robot 2 provided with the spatial data can grasp the movable range based on the spatial data.
  • the autonomous behavior robot 1 determines whether or not to recognize a spatial element (step S15).
  • the determination of whether or not to recognize a spatial element can be executed by, for example, setting the imaging target recognition unit 15 whether or not to recognize a spatial element. Even if it is determined that the spatial element is recognized, if the recognition fails, it may be determined that the spatial element is not recognized.
  • the autonomous behavior robot 1 If it is determined that the spatial element is recognized (step S15: YES), the autonomous behavior robot 1 generates first visualization data (step S16).
  • the generation of the first visualization data can be executed in the visualization data generation unit 14.
  • the first visualization data is visualization data generated after the imaging target recognition unit 15 recognizes a spatial element. For example, when the imaging target recognition unit 15 determines that a spatial element is a table, the visualization data generation unit 14 can generate visualization data in which the top surface of the table is treated as a flat plane even if the top surface was not captured and there is no point cloud data for it. Further, when it is determined that a spatial element is a wall, the visualization data generation unit 14 can generate visualization data by assuming that the portions that were not captured are also part of the plane.
  • the autonomous behavior robot 1 If it is determined that the spatial element is not recognized (step S15: NO), the autonomous behavior robot 1 generates second visualization data (step S17).
  • the generation of the second visualization data can be executed in the visualization data generation unit 14.
  • the second visualization data is visualization data that is generated based on the point cloud data and the spatial data generated from the captured image without the imaging target recognition unit 15 recognizing the spatial element.
  • the autonomous behavior robot 1 can reduce the processing load by not performing the spatial element recognition process.
  • the autonomous behavior type robot 1 After executing the process of step S16 or the process of step S17, the autonomous behavior type robot 1 provides visualization data (step S18).
  • the provision of the visualization data is executed when the visualization data provision unit 161 provides the visualization data generated by the visualization data generation unit 14 to the user terminal 3.
  • the autonomous behavior type robot 1 may generate and provide visualization data in response to a request from the user terminal 3, for example.
  • the autonomous behavior robot 1 ends the operation shown in the flowchart.
  • FIG. 4 is a flowchart illustrating another example of the operation of the robot control program in the embodiment.
  • the autonomous behavior robot 1 determines whether or not a creation instruction has been acquired (step S21).
  • the determination as to whether or not the creation instruction has been acquired can be made based on whether or not the designation acquisition unit 162 has acquired a creation instruction that instructs the user terminal 3 to create spatial data.
  • the creation instruction is acquired, for example, when the user executes an operation for reacquisition of the spatial data from the user terminal 3.
  • the creation instruction may be acquired when the user performs an operation of acquiring spatial data for an ungenerated area from the user terminal 3.
  • the autonomous behavior robot 1 repeats the process of step S21 and waits for the creation instruction to be acquired.
  • the autonomous behavior robot 1 executes a shooting instruction in an area instructed to create spatial data (step S22).
  • the shooting instruction can be executed when the instruction unit 113 instructs the robot 2 to make a shooting instruction.
  • the shooting instruction can include an area to be shot by the shooting unit 21 of the robot 2 and a shooting position.
  • when the robot 2 that has received the imaging instruction provides the captured image to the autonomous behavior robot 1, the operation illustrated in FIG. 3 is executed, and spatial data and visualization data for the instructed area are created.
  • the autonomous behavior robot 1 ends the operation shown in the flowchart.
  • FIG. 4 shows the operation when a creation instruction, which is an explicit instruction from the user, is acquired from the user terminal 3; however, when the spatial data contains an ungenerated portion, for example, the autonomous behavior robot 1 may issue a shooting instruction without an explicit instruction from the user.
  • the order of the steps described for the operation of the robot control program (robot control method) in this embodiment does not limit the execution order.
  • FIGS. 5 to 10 are diagrams illustrating examples of display of the user terminal 3 according to the embodiment.
  • 5 to 10 are display examples in which a Web page provided as visualization data from the visualization data providing unit 161 is displayed on the touch panel of a smartphone exemplified as the user terminal 3.
  • the user terminal 3 displays the 3D visualization data generated by the visualization data generation unit 14 based on the captured image of the living room captured by the robot 2.
  • the visualization data generation unit 14 rasterizes and draws a figure such as furniture generated in the point cloud data.
  • feature points of point cloud data are subjected to Hough transform to extract line segments (straight lines or curves), and spatial data in which the shape of furniture or the like is represented by 3D line segments is generated.
  • in the figure, the case where the shape of furniture or the like is represented by straight lines in the x direction, the y direction, and the z direction is shown.
  • for the planes formed by combinations of straight lines in the x, y, and z directions, the brightness of the texture applied to each plane is changed according to its normal direction: the normal of the x-y plane is vertical, while the normals of the x-z plane and the y-z plane are horizontal.
  • the brightness of the x-y plane, which is the top surface of the table or of the work counter of the system kitchen, is low (the color is dark).
  • the brightness is moderate for the x-z plane and high for the y-z plane.
  • in the figure, point cloud data for walls and ceilings at or above a predetermined height from the floor is deleted when generating the spatial data. Since the robot 2 moves on the floor surface, spatial data for the ceiling and the like is not necessary. By not generating spatial data for ranges farther than a predetermined distance from the robot 2, the amount of spatial data can be reduced and the visualization data becomes easier to view.
  • a “re-acquire” button 31 is displayed on the user terminal 3.
  • when the user presses the re-acquisition button 31, a creation instruction is transmitted to the autonomous behavior robot 1.
  • the autonomous behavior robot 1 provides an imaging instruction to the robot 2, recreates the spatial data and the visualization data based on the re-acquired captured images, and provides the visualization data to the user terminal 3 again. That is, when the re-acquisition button 31 is pressed, the visualization data of the living room displayed on the user terminal 3 is updated.
  • the autonomous behavior type robot 1 may cause the robot 2 to immediately take an action for photographing and recreate the spatial data and the visualization data at an early stage.
  • alternatively, the autonomous behavior type robot 1 may, without controlling the movement of the robot 2, let the robot 2 move according to parameters that affect its behavior, such as emotion parameters (parameters different from the attribute information), provide an imaging instruction according to the position of the robot 2, and recreate the spatial data and the visualization data based on the re-acquired captured images.
  • the robot 2 may act autonomously based on an internal parameter (for example, an emotion parameter). For example, the robot 2 may increase the emotion parameter indicating “boring” when it is determined that the robot 2 has been staying at the same place for a certain period of time. The robot 2 may start moving to an uncreated portion of the map on the condition that it is determined that “boring” has become a predetermined value or more.
  • the autonomous behavior robot 1 may indirectly promote the activity of the robot 2 by raising the "boring" parameter to a predetermined value or more when receiving an instruction to create an uncreated portion. Further, the autonomous behavior type robot 1 may provide the shooting instruction to the robot 2 so as not to affect the movement plan of the robot 2 at the time the creation instruction is received. In other words, the autonomous behavior robot 1 may provide the robot 2 with an instruction to move and start shooting once the movement the robot 2 was performing when the creation instruction was received has been partially or completely completed. For example, suppose that the robot 2 is moving toward a movement target point S and receives a creation instruction during this movement. In this case, the robot 2 may move to the movement target point S, or to a predetermined point on the way from the current location to the movement target point S (for example, a hallway), and then start the action based on the creation instruction.
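  • A toy sketch of this behavior is shown below: a "boring" emotion parameter that rises past a threshold triggers movement toward an uncreated part of the map; the class, names, and thresholds are illustrative and not the robot's actual control logic.

```python
class EmotionDrivenExplorer:
    """Toy model: a rising 'boring' parameter eventually triggers exploration."""

    BORING_THRESHOLD = 10

    def __init__(self):
        self.boring = 0

    def tick(self, stayed_in_same_place):
        if stayed_in_same_place:
            self.boring += 1          # boredom grows while the robot lingers in one place
        else:
            self.boring = 0
        return self.boring >= self.BORING_THRESHOLD   # True -> move to an uncreated map area

    def nudge(self, amount=10):
        """A creation instruction can indirectly promote activity by raising boredom."""
        self.boring += amount

robot = EmotionDrivenExplorer()
robot.nudge()                                  # creation instruction for an uncreated area
print(robot.tick(stayed_in_same_place=True))   # True -> exploration is triggered
```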
  • a “2D display” button 32 is displayed on the user terminal 3.
  • the user terminal 3 transmits an instruction to change the 3D visualization data to 2D visualization data to the autonomous behavior robot 1.
  • the autonomous behavior robot 1 generates visualization data that represents the generated spatial data in 2D and provides it to the user terminal 3. The display when the 2D display button 32 is pressed will be described with reference to FIG.
  • the user can enlarge or reduce the visualization data by pinching in or out the touch panel on the user terminal 3. Further, the user can move the viewpoint of 3D display by sliding the touch panel.
  • FIG. 6 is a display example when the 2D display button 32 in FIG. 5 is pressed.
  • the user terminal 3 displays visualization data obtained by converting the 3D visualization data displayed in FIG. 5 into 2D.
  • a “3D display” button 33 is displayed on the user terminal 3.
  • the user terminal 3 transmits an instruction to change the 2D visualization data to 3D visualization data to the autonomous behavior robot 1.
  • the autonomous behavior robot 1 generates visualization data that represents the generated spatial data in 3D and provides it to the user terminal 3.
  • the display when the 3D display button 33 is pressed is the display described in FIG.
  • in FIG. 6(A), a kitchen that was not visible in the 3D display is displayed in the living room. Since the 3D display is a view from a viewpoint position, portions hidden by a wall or furniture cannot be displayed. By using the 2D display, however, visualization data that looks down on the room from above can be displayed, making it easy to grasp the shape of the entire room.
  • the boundary line 34 indicates the generation range of the spatial data. That is, the area above the boundary line 34 indicates that the captured image has not been acquired by the robot 2 and the spatial data has not been generated. When the user designates the boundary line 34 displayed on the touch panel or an area above the boundary line 34 and presses the re-acquisition button 31, the robot 2 moves to the designated area and photographs the room.
  • the autonomous behavior type robot 1 generates spatial data of an ungenerated area based on the captured image and updates the spatial data.
  • the update of the spatial data can be executed by, for example, matching the original spatial data and the newly generated spatial data to generate continuous spatial data.
  • the autonomous behavior type robot 1 generates visualization data based on the updated spatial data and provides it to the user terminal 3.
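The update by matching mentioned above, in which the original spatial data and the newly generated spatial data are combined into continuous spatial data, might look like the following minimal sketch once both maps are expressed as occupancy grids in a common coordinate frame. The grid encoding (-1 unknown, 0 free, 1 occupied) is an assumption for illustration.

```python
# A minimal sketch (assumptions, not the patented method) of merging newly
# generated spatial data into existing spatial data once both grids share a
# coordinate frame. -1 = unknown, 0 = free, 1 = occupied.
import numpy as np

def merge_spatial_data(existing, update):
    """Prefer freshly observed cells; keep old values where the update is unknown."""
    merged = existing.copy()
    observed = update != -1
    merged[observed] = update[observed]
    return merged

old = np.array([[1, 1, -1],
                [0, 0, -1]])
new = np.array([[-1, 1, 1],
                [-1, 0, 0]])
print(merge_spatial_data(old, new))
```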
  • the house-shaped mark h shown in the figure indicates a home position where the robot 2 returns for charging.
  • FIG. 6B shows that the visualization data updated in the autonomous behavior robot 1 is displayed on the user terminal 3.
  • FIG. 6B shows that a long and narrow corridor exists on the upper side of the living room, and that a Western-style room exists on the right side of the corridor.
  • The spatial data for the Western-style room has not been generated. For example, by displaying the position of its door, it may be indicated with a boundary line that an ungenerated area still exists. Note that there are doors in the hallway that could not be detected, so an area actually exists beyond the upper side of the hallway as well.
  • FIG. 7 shows that 2D visualization data generated in all rooms at home is displayed on the user terminal 3.
  • When the 3D display button 33 in FIG. 7 is pressed, the display is switched to the simple 3D display shown in FIG. 8.
  • the simplified 3D display refers to a display form such as a bird's-eye view obtained by transforming the 2D display and looking down from above.
  • the simple 3D display may include, for example, a three-dimensional display adapted to a predetermined shape in the height direction for furniture or the like.
  • FIG. 9 is a display example of visualization data generated in a plurality of divided areas.
  • the user terminal 3 displays visualization data 35 by area.
  • the area-specific visualization data 35 displays scrollable areas divided into living rooms, hallways, bedrooms, children's rooms, and the like. For example, when the user selects and presses one area, the visualization data of the pressed area is displayed.
  • FIG. 5 is visualization data when a living room is selected in the visualization data 35 by area.
  • the area-specific visualization data 35 shown in FIG. 9 may be displayed so that other rooms can be selected.
  • FIG. 10 is a display example for explaining misrecognition of a spatial element depending on the photographing position of the photographing unit.
  • The user terminal 3 displays a visualized image 37A of a shelf generated based on a captured image taken by the robot 2 from the shooting position 36A in the direction of the arrow shown. Since the back side of the shelf cannot be photographed from the shooting position 36A, the spatial data generation unit 13 does not correctly recognize the shape of the shelf: the erroneously recognized shelf shape extends all the way to the wall. Accordingly, the autonomous behavior robot 1 determines that it cannot move behind the shelf. The user, on the other hand, can easily confirm that the autonomous behavior robot 1 has misrecognized the shape of the shelf by looking at the shape of the visualized image 37A.
  • FIG. 10B shows that the visualization data updated in the autonomous behavior robot 1 is displayed.
  • A visualized image 37B of the shelf, generated based on a captured image taken by the robot 2 from the shooting position 36B in the direction indicated by the arrow, is displayed on the user terminal 3. Since the back side of the shelf can be photographed from the shooting position 36B, the spatial data generation unit 13 can correctly recognize the shape of the shelf by combining this image with the captured image taken from the shooting position 36A. The correctly recognized shelf shape has a gap between the shelf and the wall, so the autonomous behavior robot 1 can determine that it is able to move behind the shelf. The user can easily confirm that the autonomous behavior robot 1 correctly recognizes the shape of the shelf by looking at the shape of the visualized image 37B.
  • This designation method may be an operation of sliding a fingertip linearly on the screen of the user terminal 3 from the position where the fingertip first touches. The starting point of the fingertip trajectory is designated as the shooting position, and the direction in which the fingertip is slid is designated as the shooting direction.
  • an arrow icon is displayed over the visualization data on the screen. Since the user knows the shape of the shelf, the user can instruct a shooting position and a shooting direction for correctly recognizing the shape of the shelf that the autonomous behavior robot 1 has erroneously recognized.
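A hedged sketch of how the slide gesture described above could be turned into a shooting position and a shooting direction is shown below. The screen-to-world mapping (origin and scale) is an assumption; a real implementation would reuse the transform that places the floor plan on the screen.

```python
# A hedged sketch of turning the slide gesture into a shooting position and
# direction. The screen-to-world scale and origin are illustrative assumptions.
import math

def gesture_to_shot(start_px, end_px, origin_px=(0, 0), metres_per_px=0.01):
    """Start of the fingertip trajectory -> shooting position; slide -> direction."""
    sx = (start_px[0] - origin_px[0]) * metres_per_px
    sy = (start_px[1] - origin_px[1]) * metres_per_px
    dx, dy = end_px[0] - start_px[0], end_px[1] - start_px[1]
    heading = math.atan2(dy, dx)          # shooting direction in radians
    return (sx, sy), heading

position, heading = gesture_to_shot((120, 300), (220, 260))
print(position, round(math.degrees(heading), 1))
```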
  • A program for realizing the functions constituting the apparatus described in this embodiment may be recorded on a computer-readable recording medium, and the various processes described above in this embodiment may be performed by reading the program recorded on the recording medium into a computer system and executing it.
  • the “computer system” may include an OS and hardware such as peripheral devices.
  • the “computer system” includes a homepage providing environment (or display environment) if a WWW system is used.
  • The "computer-readable recording medium" refers to a storage device such as a flexible disk, a magneto-optical disk, a ROM, a writable nonvolatile memory such as a flash memory, a portable medium such as a CD-ROM, or a hard disk built into a computer system.
  • The "computer-readable recording medium" also includes a medium that holds the program for a certain period of time, such as a volatile memory (for example, DRAM (Dynamic Random Access Memory)) inside a computer system serving as a server or a client when the program is transmitted via a network such as the Internet or a communication line such as a telephone line.
  • the program may be transmitted from a computer system storing the program in a storage device or the like to another computer system via a transmission medium or by a transmission wave in the transmission medium.
  • the “transmission medium” for transmitting the program refers to a medium having a function of transmitting information, such as a network (communication network) such as the Internet or a communication line (communication line) such as a telephone line.
  • The program may be one for realizing only a part of the functions described above. Furthermore, the program may be a so-called difference file (difference program) that realizes the functions described above in combination with a program already recorded in the computer system.
  • the visualization data providing unit 161 of FIG. 1 may provide the user terminal 3 with the image (captured image) that is the source of the visualization data.
  • the user terminal 3 may display the captured image on condition that the user's operation is detected while displaying the visualization data.
  • A captured image corresponding to a part of the floor plan displayed on the user terminal 3 may be displayed in response to a user operation. That is, the images used for specifying each spatial element are accumulated, and when the user designates a spatial element, an image associated with that spatial element is provided. Accordingly, when the user cannot determine the recognition state from the visualization data alone, the user can determine the recognition state using the image.
  • As the method for designating the portion to be recreated described above, the range to be recreated may be designated directly. This designation method may be an operation of sliding the fingertip on the screen of the user terminal 3 so as to draw a circle, the range to be recreated being designated by moving the fingertip so as to surround it. The fingertip trajectory is drawn on the screen and visualized so as to be superimposed on the visualization data; a sketch of how such a trajectory might be turned into a set of cells to re-scan is shown below.
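The following is a minimal sketch, not the patented method, of converting the circular fingertip trajectory into the set of map cells to re-scan, using a standard ray-casting point-in-polygon test. The grid resolution is an illustrative assumption.

```python
# A minimal sketch (illustrative, not the patent's implementation) of turning
# a circular fingertip trajectory into the set of map cells to re-scan.
def point_in_polygon(x, y, poly):
    """Standard ray-casting test: is (x, y) inside the closed polygon?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def cells_to_rescan(trajectory, grid_size, cell=0.1):
    """Return the grid cells whose centres fall inside the drawn loop."""
    selected = []
    for i in range(grid_size[0]):
        for j in range(grid_size[1]):
            cx, cy = (i + 0.5) * cell, (j + 0.5) * cell
            if point_in_polygon(cx, cy, trajectory):
                selected.append((i, j))
    return selected

loop = [(0.2, 0.2), (1.0, 0.25), (1.1, 0.9), (0.3, 1.0)]  # fingertip samples (m)
print(len(cells_to_rescan(loop, grid_size=(15, 15))))
```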
  • The name of a space, or the display mode of the space, may be set according to the internal parameter that the robot 2 associates with that space. For example, a name corresponding to the type of the parameter may be set for the space. When the internal parameter of the robot 2 is a parameter set indicating an emotion of the robot 2 such as fun or loneliness, a name such as "fun room" may be given to the space.
  • the data providing apparatus 10 may include an emotion parameter control unit (not shown).
  • the emotion parameter control unit may change the emotion parameter of the robot 2 according to the output value of the sensor. For example, when the robot 2 detects a touch or hug from the user by a built-in touch sensor, the emotion parameter control unit (not shown) may increase the emotion parameter “fun” of the robot 2.
  • When the touch sensor detects a contact even though the user has not been detected in the image captured by the camera of the robot 2 for a predetermined period or longer, or detects a contact of high strength and short duration (for example, a contact when struck), the emotion parameter control unit may reduce the "fun" emotion parameter. Based on such parameters, the spatial data generation unit may set the name "fun room" for the room R1. A sketch of this kind of parameter update and room naming is shown below.
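A hedged sketch of the emotion-parameter handling and room naming described in the items above is shown below. The class name, thresholds, and decay amounts are assumptions made for illustration only.

```python
# A hedged sketch of the emotion-parameter handling described above. The decay
# rates, thresholds, and the way a room name is derived are all assumptions.
class EmotionParameterControl:
    def __init__(self):
        self.fun = 0.0                      # "fun" emotion parameter of robot 2

    def on_touch(self, strength, duration_s, user_visible):
        gentle = strength < 0.5 and duration_s > 0.5
        if gentle and user_visible:
            self.fun = min(1.0, self.fun + 0.1)   # e.g. a hug raises "fun"
        else:
            self.fun = max(0.0, self.fun - 0.2)   # e.g. being struck lowers it

def room_name(fun_history, threshold=0.6):
    """Name a room 'fun room' if 'fun' was high for most samples taken there."""
    high = sum(1 for v in fun_history if v >= threshold)
    return "fun room" if high > len(fun_history) / 2 else None

ctrl = EmotionParameterControl()
ctrl.on_touch(strength=0.2, duration_s=2.0, user_visible=True)
print(ctrl.fun, room_name([0.7, 0.8, 0.4, 0.9]))
```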
  • Names may be set only for some types of parameters, and no names may be set for the remaining types of parameters. For example, a name indicating a positive meaning (for example, "fun room") may be set, while a name indicating a negative meaning (for example, "boring room") may not be set.
  • a figure or color corresponding to the type of the parameter may be set in the space.
  • The data providing apparatus 10 may include a person detection unit (not shown) that acquires an image captured by the camera of the robot 2 and detects a person appearing in the captured image. Further, the data providing apparatus 10 may include an object detection unit (not shown) that acquires an image captured by the camera of the robot 2 and detects an object appearing in the captured image. For example, for a space in which a person named "Daddy" is detected at a predetermined frequency or more by the person detection unit, an attribute such as a picture of Daddy, a figure representing Daddy in a simplified manner, or a person-based name such as "Daddy's Room" may be set for the space.
  • the “predetermined frequency” may be the number of times the target person is detected per unit time, or may be the length of time during which the target person is detected in a predetermined time (for example, 1 hour).
  • an attribute such as a name that does not depend on a specific person, such as “everyone's room”, may be set in the space.
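The "predetermined frequency" rule described above might be implemented as in the following sketch, which measures how long each person is detected in a room within a time window and derives a name such as "Daddy's Room" or "everyone's room". The window length and share threshold are assumptions.

```python
# A minimal sketch (thresholds are assumptions) of the "predetermined
# frequency" rule: accumulate detection time per person within a window and
# derive a room name from the result.
from collections import defaultdict

def name_room(detections, window_s=3600.0, share_threshold=0.5):
    """detections: list of (person_name, seconds_detected) within the window."""
    seconds = defaultdict(float)
    for person, dt in detections:
        seconds[person] += dt
    if not seconds:
        return None
    person, top = max(seconds.items(), key=lambda kv: kv[1])
    if top / window_s >= share_threshold:
        return f"{person}'s Room"           # one person dominates the room
    return "everyone's room"                # no single dominant person

print(name_room([("Daddy", 2400.0), ("Mommy", 300.0)]))
print(name_room([("Daddy", 600.0), ("Mommy", 700.0)]))
```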
  • the display of the space or the name of the space may be set.
  • the name given to the space and the display of the space may be different depending on the parameter set indicating the behavior characteristics of each robot 2 (for example, the parameter set indicating the personality or gender of each individual).
  • The personality of each individual may be set by the user via the user terminal 3, or may be obtained by learning, in a personality learning unit (not shown) of the data providing apparatus 10, based on the detection results of the sensors provided in the robot 2. For example, when the robot learns, through the touch sensor, the experience of being held by a person determined to be unknown by the person detection processing, a parameter indicating that the individual actively engages with other people (an individual that is not shy) may be set.
  • For such an individual, attributes such as a figure indicating a person and the name of that person may be preferentially reflected in the display or name of the room (for example, "Everybody's Room"). Alternatively, an individual may be set with parameters indicating that it wants to avoid involvement with unknown people. In that case, internal parameters may be preferentially reflected in the display or name of the room (for example, "fun room").
  • individual characters may be set in advance at the time of manufacture or at the time of shipment.
  • a template such as a map or a spatial element may be changed according to a parameter set indicating the behavior characteristics of each robot 2.
  • A display of text or figures prompting a predetermined action by the user, such as accompanying the robot as it follows the map, may be output. Whether this display is shown, and what it contains, may be changed according to the character of the robot 2.
  • the robot 2 may be set as an individual having parameters of behavioral characteristics that are curious.
  • In that case, the emotion parameter control unit may be set so that the parameter indicating curiosity rises easily and falls slowly when the robot touches a thing (spatial element) that it has not yet recognized or when the user's smile is detected.
  • the robot 2 may be set as an individual having a parameter indicating a cowardly behavior characteristic.
  • For the robot 2 exhibiting timid behavior characteristics, when the button for instructing creation of an ungenerated part of the map is pressed, a message or a figure prompting a user action such as "please accompany the robot" may be output.
  • Robot 2 may acquire a map (spatial data) from another robot 2.
  • the spatial data generation unit 13 may connect a map acquired from another robot 2 to a map acquired by itself.
  • A map acquired from another robot may be displayed in a different expression from the map acquired by the robot itself. According to such a control method, since a plurality of robots 2 can share knowledge with each other when generating spatial data, the entire spatial data can be generated earlier than when the spatial data is generated by only one robot 2.
  • the spatial data generation unit 13 in FIG. 1 may set attribute information based on a person's flow line.
  • the trajectory of movement of people in the building is called a flow line, and the flow line is determined by the layout and arrangement of furniture.
  • For example, there are a flow line from the entrance door to the sofa and a flow line from the door to the dining table.
  • These flow lines are areas where people frequently pass, and obstacles are removed so as not to obstruct people's movement.
  • the flow line is an area where safety of walking should be implicitly ensured, and if it is on the flow line, a person can safely move unconsciously.
  • the human may not be aware of the robot and may come into contact with it.
  • People's lives vary depending on the time of day and the day of the week. For example, on weekday mornings people move along the flow line briskly, while during the daytime on holidays people move along the flow line at a leisurely pace while paying attention to their surroundings. That is, the attention that a person pays to the surroundings changes depending on when they move along the flow line, and the possibility of contact with the robot changes accordingly. For example, if the robot is positioned on the flow line during commuting or school hours, the possibility of contact with a person is higher than during the daytime on holidays. During the daytime on weekdays, the number of people at home decreases, so the possibility of contact is reduced.
  • the robot 2 acting autonomously estimates the flow line of the person and moves away from the estimated flow line, thereby avoiding contact with the person.
  • If the robot grasps the human flow lines, it can move while avoiding them.
  • the robot 2 estimates a human flow line based on information acquired from a space such as a floor plan, the size and type of furniture, and the position of the furniture, and is within a predetermined range from the estimated human flow line. If the robot behaves according to the condition that it does not enter, it will be easier to avoid contact with people.
  • The spatial data generation unit 13 may set attribute information indicating a flow line according to the number or movement trajectories of people, or the number or movement trajectories of objects, in each time zone. For example, when past data indicates that a space overlaps with the movement trajectories of people in a certain time zone (hereinafter referred to as a "flow line region"), the instruction unit 113 may set attribute information that prohibits the robot 2 from entering the flow line region (or attribute information such as keeping a clearance of 30 cm or more from the flow line region). The robot 2 then moves by controlling the moving mechanism 29 so as to stay out of the flow line region, in addition to observing this restriction when it acts.
  • The restriction in the flow line region may be one that prohibits the robot 2 from entering the flow line region, or one that allows the robot 2 to enter the flow line region but prohibits it from stopping there for a predetermined time or longer. Further, on the assumption that the movement of people takes priority in the flow line region, the movement direction may be controlled so that the robot 2 leaves the flow line region when it recognizes the presence of a person. That is, behavior restrictions are not always imposed in the flow line region; they may be imposed according to the relative positional relationship with a person.
  • For example, attribute information prohibiting entry into the kitchen may be set for the period from 6:00 am to 7:00 am. If the user notices that the robot 2 stays out of the kitchen while it is being used for cooking or the like, and enters only when the kitchen is not busy, the robot 2 does not get in the user's way, and the user can sense a kind of cleverness in the way the robot 2 acts according to the user's situation.
  • Attribute information prohibiting a stay of a predetermined time or longer in a specified space may also be set according to the number or movement trajectories of people, or the number or movement trajectories of objects, in each time zone. For example, when past data records the movement of people over the entire range of the kitchen from 6:00 am to 7:00 am, attribute information prohibiting a stay of a predetermined time (for example, 3 minutes) or longer in the kitchen during that period may be set. Alternatively, when the robot 2 detects a person in the kitchen or within a predetermined distance from the kitchen, the robot 2 may avoid entering the kitchen, or may avoid staying there for a predetermined time or longer. A sketch of such time-dependent restrictions is shown below.
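A minimal sketch of evaluating such time-dependent attribute information (no entry from 6:00 am to 7:00 am, a maximum stay outside that window, a clearance from the flow line) is shown below. The dictionary layout and the helper names are assumptions for illustration.

```python
# A hedged sketch of evaluating time-dependent attribute information for a
# flow line region. The data layout is an assumption made for illustration.
from datetime import time

KITCHEN_ATTRIBUTES = {
    "no_entry": [(time(6, 0), time(7, 0))],   # prohibited time windows
    "max_stay_s": 180,                        # 3 minutes outside those windows
    "clearance_m": 0.3,                       # keep 30 cm from the flow line
}

def may_enter(now, attributes):
    return not any(start <= now < end for start, end in attributes["no_entry"])

def may_stay(now, seconds_in_area, attributes):
    return may_enter(now, attributes) and seconds_in_area < attributes["max_stay_s"]

print(may_enter(time(6, 30), KITCHEN_ATTRIBUTES))        # False: breakfast time
print(may_stay(time(14, 0), 60, KITCHEN_ATTRIBUTES))     # True: short afternoon visit
print(may_stay(time(14, 0), 600, KITCHEN_ATTRIBUTES))    # False: staying too long
```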
  • Attribute information may also be set according to the visibility in the space and the number or movement trajectories of people or objects. Instead of, or in addition to, this, attribute information prohibiting a stay of a predetermined time or longer may be set according to the same conditions.
  • In such a case, attribute information prohibiting the robot 2 from entering the flow line region may be set. According to such a control method, the risk of contact between a person and the robot 2 can be reduced in a space with low visibility, such as in darkness.
  • the number of people present, the movement trajectory, the number of objects present, and the movement trajectory can be recognized based on past output data of the sensor such as a captured image or output data of the temperature sensor.
  • the flow line area may be set by being designated by the user via the user terminal 3 of FIG.
  • For example, the user can specify a time zone and a flow line on the displayed visualized image.
  • the flow line is designated by sliding the fingertip on the screen of the user terminal 3 on which the visualized image is displayed.
  • a flow line superimposed on the floor plan is drawn in conjunction with the movement of the fingertip.
  • The flow line designated by the user is acquired by the designation acquisition unit 162 in FIG. 1.
  • A spot to which the robot 2 should move when a predetermined condition is satisfied may be designated, based on data received from the user terminal 3, on a screen representing the visualization data (see FIGS. 7 and 8).
  • The predetermined condition may be, for example, that after it is detected, from information directly or indirectly indicating the user's position such as the position information of the user terminal 3, that the user has gone out, it is then detected that the user is moving toward the space where the robot 2 exists.
  • The predetermined condition may also be that it is detected, from information directly or indirectly indicating the user's position such as the position information of the user terminal 3, that the user is moving toward a predetermined spot. Further, the visualization data providing unit 161 that has detected such a designation may display a mark corresponding to the spot.
  • coordinate information indicating the spot is specified in the spatial data.
  • This coordinate information is stored as spot attribute information.
  • The orientation that the robot 2 should take before or after arriving at the spot may also be set.
  • The orientation of the robot 2 may be set automatically according to the spatial elements of the spatial data (for example, it may be set to face the center of the door), or it may be specified by the user via the user terminal 3.
  • Since the mark corresponding to the spot output on the screen is a design representing the orientation of the robot, the user can recognize the orientation of the robot from the angle of the mark.
  • a direction mark such as an arrow or a triangle may be displayed together with the mark to express the direction of the robot 2 before and after the spot arrival. For example, when the user touches a button for instructing right rotation or left rotation, the mark may be rotated or the direction of the direction mark may be changed.
  • the orientation of the robot visualized in this way is also stored as spot attribute information.
  • the autonomous behavior robot 1 may generate a spot movement event when receiving a wireless signal emitted from, for example, a user terminal 3 of a user (for example, a smartphone carried by the user).
  • When a spot movement event occurs, the autonomous behavior robot 1 specifies a route from the current position of the robot 2 to the spot.
  • the robot 2 controls the moving mechanism 29 to follow the specified route, moves to the spot, and adjusts the posture in the set direction.
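A hedged sketch of the spot handling described in the preceding items follows: the spot's coordinates and the robot's orientation are stored as attribute information, and a spot movement event fires when the user's terminal is detected outside a predetermined range. The data classes and the 50 m radius are illustrative assumptions.

```python
# A minimal sketch of spot attribute information and the geofence-style
# trigger described in the surrounding items. The threshold is an assumption.
import math
from dataclasses import dataclass

@dataclass
class Spot:
    x: float
    y: float
    heading_rad: float      # orientation the robot should take on arrival

def outside_home(user_xy, home_xy, radius_m=50.0):
    return math.dist(user_xy, home_xy) > radius_m

def maybe_trigger_spot_move(user_xy, home_xy, spot):
    if outside_home(user_xy, home_xy):
        return ("move_to", spot)            # spot movement event
    return None

front_door = Spot(x=1.2, y=0.4, heading_rad=math.pi / 2)
print(maybe_trigger_spot_move((120.0, 80.0), (0.0, 0.0), front_door))
```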
  • the spot setting may not be accepted at a stage where the map is not completed or at a stage where the degree of completion of the map is low.
  • The autonomous behavior robot 1 may acquire the position of the user's user terminal 3 (for example, a smartphone) from GPS (Global Positioning System), and may instruct the robot 2 to move to a spot when that position is outside a predetermined range (for example, outside the user's house).
  • the movement of the robot 2 may be controlled by an instruction via a screen representing the visualization data (see FIGS. 7 and 8).
  • the user designates a moving destination by tapping a place to be moved by the robot 2 on the screen.
  • the movement destination mark is displayed on the screen of the visualization data.
  • the autonomous behavior type robot 1 specifies a route from the current position of the robot 2 to the moving destination.
  • the robot 2 follows the route by controlling the moving mechanism 29 and moves to the moving destination. Similar to the example described above, the orientation of the robot 2 may be set.
  • An instruction for a movement destination may not be accepted at a stage where the map is not completed or where the degree of completion of the map is low. When the map has been generated for only some rooms, a movement destination instruction may be accepted only within those rooms.
  • The entry prohibition range of the robot 2 may be set on the condition that an operation of surrounding, on the screen representing the visualization data, an area that the user does not want the robot 2 to enter is detected.
  • the robot 2 searches for a route through a route avoiding the entry prohibition range, and moves according to the searched route.
  • the user wishes to prohibit the robot 2 from entering the bedroom.
  • the user performs an operation of surrounding an area corresponding to the bedroom with a finger.
  • the spatial data generation unit 13 sets a space corresponding to the closed area drawn on the screen detected by the user terminal 3 as an “entry prohibition range”, and generates spatial data. According to such a control method, the user can set the robot entry prohibition range with an intuitive operation on the visualization data screen of the smartphone.
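A minimal sketch of applying a user-drawn entry prohibition range during route search is shown below. For brevity the drawn region is approximated by its bounding box and the search is a plain breadth-first search on a coarse grid; the patent itself does not prescribe a particular search algorithm.

```python
# A hedged sketch of blocking a user-drawn entry prohibition range during
# route search. Bounding-box approximation and BFS are illustrative choices.
from collections import deque

def blocked_cells(region_pts, cell=0.5):
    xs = [p[0] for p in region_pts]; ys = [p[1] for p in region_pts]
    return {(i, j)
            for i in range(int(min(xs) / cell), int(max(xs) / cell) + 1)
            for j in range(int(min(ys) / cell), int(max(ys) / cell) + 1)}

def find_route(start, goal, size, blocked):
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (i, j), path = queue.popleft()
        if (i, j) == goal:
            return path
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < size and 0 <= nj < size and (ni, nj) not in blocked | seen:
                seen.add((ni, nj))
                queue.append(((ni, nj), path + [(ni, nj)]))
    return None

bedroom = [(1.0, 1.0), (3.0, 1.0), (3.0, 3.0), (1.0, 3.0)]   # drawn by the user
route = find_route((0, 0), (9, 9), size=10, blocked=blocked_cells(bedroom))
print(route is not None and all(c not in blocked_cells(bedroom) for c in route))
```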
  • a SLAM (Simultaneous Localization and Mapping) technique that simultaneously estimates self-location and creates an environmental map may be applied.
  • As application examples of SLAM technology, self-propelled vacuum cleaners and drones are known.
  • the autonomous behavior robot 1 is different from the SLAM technology of a self-propelled cleaner that generates a plane map in that it generates spatial data representing a three-dimensional space. Further, since the autonomous behavior robot 1 generates spatial data representing a closed space surrounded by a wall surface and a ceiling, the autonomous behavior robot 1 is different from the SLAM technology of a drone that is used outdoors and generates spatial data that is not closed.
  • the characteristics of the wall and ceiling can be used, so it is easier to estimate the self-position than in the case of outdoors without walls and ceilings.
  • Since the number of feature points contained in the closed space is limited, the number of verifications required when matching against feature points contained in the currently captured image is smaller than in an infinitely wide outdoor space. This has the advantages that the estimation accuracy is high and the time until the estimation result is calculated is short.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

An autonomous action robot comprises: a movement mechanism; an imaging unit to image the surrounding space; a space data generation unit to generate space data for a recognized space on the basis of the image imaged by the imaging unit; a visualization data generation unit to generate visualization data that visualizes space elements contained in the space on the basis of the generated space data; and a visualization data supply unit to supply the generated visualization data.

Description

Autonomous behavior robot, data providing apparatus, and data providing program
 本発明は、自律行動型ロボット、データ提供装置およびデータ提供プログラムに関する。 The present invention relates to an autonomous behavior robot, a data providing apparatus, and a data providing program.
 従来から、家屋内を自律的に移動しながらカメラで画像を撮影し、撮影画像から屋内の空間を認識し、認識している空間に基づき移動経路を設定して屋内を移動するロボットがある(例えば、特許文献1を参照)。 Conventionally, there is a robot that takes an image with a camera while moving autonomously inside a house, recognizes the indoor space from the captured image, and sets the movement path based on the recognized space and moves indoors ( For example, see Patent Document 1).
JP 2016-103277 A
 ロボットは、撮影した画像に基づき、部屋、壁または室内の家具、家電製品、観葉植物もしくは荷物等の、形状または位置等の空間に係る要素(以下、「空間要素」という。)を認識する。撮影位置や撮影角度等の撮影条件が適切でない場合、ロボットは、空間要素を誤って認識することがある。誤認識が原因となり、ロボットは人の思惑とは異なる動きをしてしまう。 The robot recognizes an element related to a space such as a shape or a position (hereinafter referred to as a “space element”) such as a room, a wall or indoor furniture, a home appliance, a houseplant, or a luggage based on the photographed image. When shooting conditions such as a shooting position and a shooting angle are not appropriate, the robot may erroneously recognize a spatial element. Due to misrecognition, the robot moves differently from human speculation.
 本発明は上記事情に鑑みてなされたものであり、1つの実施形態において、ロボットが認識している空間要素を人が確認することができる、自律行動型ロボット、データ提供装置およびデータ提供プログラムを提供することを一つの目的とする。 The present invention has been made in view of the above circumstances, and in one embodiment, an autonomous behavior robot, a data providing apparatus, and a data providing program capable of confirming a spatial element recognized by a robot are provided. One purpose is to provide.
 (1)上記の課題を解決するため、実施形態の自律行動型ロボットは、移動機構と、周囲の空間を撮影する撮影部と、前記撮影部において撮影された撮影画像に基づいて、前記空間を認識した空間データを生成する空間データ生成部と、生成された前記空間データに基づいて、前記空間に含まれる空間要素を可視化した可視化データを生成する可視化データ生成部と、生成された前記可視化データを提供する可視化データ提供部とを備える。 (1) In order to solve the above-described problem, the autonomous behavior robot according to the embodiment includes a moving mechanism, a photographing unit that photographs a surrounding space, and the space based on a photographed image photographed by the photographing unit. A spatial data generation unit that generates recognized spatial data, a visualization data generation unit that generates visualization data that visualizes spatial elements included in the space, based on the generated spatial data, and the generated visualization data A visualization data providing unit for providing
 (2)また、実施形態の自律行動型ロボットにおいて、提供された前記可視化データに含まれる領域の指定を取得する指示取得部を更に備え、前記空間データ生成部は、取得された前記指定に係る領域における空間を認識した空間データを再生成する。 (2) The autonomous behavior robot according to the embodiment further includes an instruction acquisition unit that acquires designation of a region included in the provided visualization data, and the spatial data generation unit relates to the obtained designation Spatial data recognizing the space in the region is regenerated.
 (3)また、実施形態の自律行動型ロボットにおいて、前記可視化データ生成部は、前記撮影部において撮影された撮影画像から認識された空間要素に基づき、認識された前記空間要素の特徴を反映した前記可視化データを生成する。 (3) In the autonomous behavior robot according to the embodiment, the visualization data generation unit reflects the recognized feature of the spatial element based on the spatial element recognized from the captured image captured by the imaging unit. The visualization data is generated.
 (4)また、実施形態の自律行動型ロボットにおいて、前記可視化データ生成部は、前記撮影部において撮影された撮影画像の色情報に基づき、前記空間要素に色の属性を付与した前記可視化データを生成する。 (4) In the autonomous behavior robot according to the embodiment, the visualization data generation unit may be configured to display the visualization data in which a color attribute is given to the space element based on color information of a captured image captured by the imaging unit. Generate.
 (5)また、実施形態の自律行動型ロボットにおいて、前記可視化データ生成部は、固定されている空間要素と移動可能な空間要素とを区別した前記可視化データを生成する。 (5) In the autonomous behavior robot according to the embodiment, the visualization data generation unit generates the visualization data in which a fixed spatial element is distinguished from a movable spatial element.
 (6)また、実施形態の自律行動型ロボットにおいて、前記可視化データ生成部は、前記空間データの経時的な変化に基づき、前記固定されている空間要素と前記移動可能な空間要素とを区別した前記可視化データを生成する。 (6) In the autonomous behavior robot according to the embodiment, the visualization data generation unit distinguishes the fixed spatial element and the movable spatial element based on a temporal change of the spatial data. The visualization data is generated.
 (7)また、実施形態の自律行動型ロボットにおいて、前記可視化データ生成部は、前記移動機構において移動した位置から所定の範囲に含まれる前記空間データを可視化した前記可視化データを生成する。 (7) In the autonomous behavior robot according to the embodiment, the visualization data generation unit generates the visualization data obtained by visualizing the spatial data included in a predetermined range from a position moved by the moving mechanism.
 (8)また、実施形態の自律行動型ロボットにおいて、前記撮影部において撮影された撮影画像に基づき三次元の点群データを生成する点群データ生成部をさらに備え、前記空間データ生成部は、生成された前記点群データに基づき前記空間データを生成する。 (8) Moreover, in the autonomous behavior type robot of the embodiment, the robot further includes a point cloud data generation unit that generates three-dimensional point cloud data based on a captured image captured by the imaging unit, and the spatial data generation unit includes: The spatial data is generated based on the generated point cloud data.
 (9)また、実施形態の自律行動型ロボットにおいて、前記空間データ生成部は、前記点群データにおいて前記空間要素の輪郭を特定することにより前記空間データを生成し、前記可視化データ生成部は、特定された前記輪郭に基づき前記空間要素を可視化した前記可視化データを生成する。 (9) In the autonomous behavior type robot according to the embodiment, the spatial data generation unit generates the spatial data by specifying an outline of the spatial element in the point cloud data, and the visualization data generation unit includes: The visualization data in which the spatial element is visualized based on the identified outline is generated.
 (10)上記の課題を解決するため、実施形態のデータ提供装置は、ロボットが空間を認識することで得られる空間データに基づいて、前記空間に含まれる空間要素を可視化した可視化データを生成する可視化データ生成部と、生成された前記可視化データを提供する可視化データ提供部とを備える。 (10) In order to solve the above-described problem, the data providing apparatus according to the embodiment generates visualization data in which the spatial elements included in the space are visualized based on the spatial data obtained by the robot recognizing the space. A visualization data generation unit; and a visualization data provision unit that provides the generated visualization data.
 (11)また、実施形態のデータ提供装置は、提供された前記可視化データに含まれる領域の指定を取得する指定取得部と、前記ロボットに対して、取得された前記指定に係る領域における前記空間の認識を指示する指示部とをさらに備える。 (11) Further, the data providing apparatus according to the embodiment includes a designation obtaining unit that obtains designation of an area included in the provided visualization data, and the space in the area related to the designation obtained with respect to the robot. And an instruction unit for instructing recognition.
 (12)上記の課題を解決するため、実施形態のデータ提供プログラムは、コンピュータに、ロボットが空間を認識することで得られる空間データに基づいて、前記空間に含まれる空間要素を可視化した可視化データを生成する可視化データ生成機能と、生成された前記可視化データを提供する可視化データ提供機能とを実現させる。 (12) In order to solve the above-described problem, the data providing program according to the embodiment is a visualization data obtained by visualizing a spatial element included in the space based on the spatial data obtained by the robot recognizing the space. A visualization data generation function for generating the visualization data and a visualization data provision function for providing the generated visualization data are realized.
 一つの実施形態によれば、ロボットが認識している空間要素を人が確認することができる、自律行動型ロボット、データ提供装置およびデータ提供プログラムを提供することができる。 According to one embodiment, it is possible to provide an autonomous behavior robot, a data providing apparatus, and a data providing program that allow a person to confirm a spatial element recognized by the robot.
FIG. 1 is a block diagram showing an example of the software configuration of the autonomous behavior robot in the embodiment. FIG. 2 is a block diagram showing an example of the hardware configuration of the autonomous behavior robot in the embodiment. FIG. 3 is a flowchart showing an example of the operation of the autonomous behavior robot control program in the embodiment. FIG. 4 is a flowchart showing another example of the operation of the autonomous behavior robot control program in the embodiment. FIGS. 5 to 10 are diagrams each showing an example of the display of the user terminal in the embodiment.
 以下、図面を参照して本発明の一実施形態における自律行動型ロボット、データ提供装置およびデータ提供プログラムについて詳細に説明する。一般的に、ロボットは、カメラやマイク等の様々なセンサを有し、それらのセンサから得られる情報を総合的に判断することで周囲の状況を認識する。ロボットが移動するためには、空間に存在する種々の物体を認識し、移動ルートを判断する必要があるが、物体を正しく認識できないために移動ルートが適切で無いことがある。誤認識が原因となり、例えば、人が十分に広い空間があると思っても、ロボットは障害物があり狭い範囲しか動けないとして認識してしまう場合がある。このように人とロボットとの間に認識の齟齬が生じると、人の期待に反した行動をロボットがおこなうことになり、人はストレスを感じる。そこで、人とロボットの認識の齟齬を減らすために、本発明の自律型行動ロボットは、自身の認識状態を可視化して人に提供するとともに、人に指摘された箇所に対して再度認識処理をおこなうことができる。 Hereinafter, an autonomous behavior robot, a data providing apparatus, and a data providing program according to an embodiment of the present invention will be described in detail with reference to the drawings. Generally, a robot has various sensors such as a camera and a microphone, and recognizes surrounding conditions by comprehensively determining information obtained from these sensors. In order for the robot to move, it is necessary to recognize various objects in the space and determine the moving route. However, the moving route may not be appropriate because the object cannot be recognized correctly. Due to misrecognition, for example, even if a person thinks that there is a sufficiently large space, the robot may recognize that there is an obstacle and that only a narrow range can move. When a recognition trap occurs between a person and a robot in this way, the robot will behave against the expectation of the person, and the person will feel stress. Therefore, in order to reduce the discrepancies between human and robot recognition, the autonomous behavior robot of the present invention visualizes its recognition state and provides it to the person, and performs recognition processing again on the point indicated by the person. Can be done.
 先ず、図1を用いて、自律行動型ロボット1のソフトウェア構成を説明する。図1は、実施形態における自律行動型ロボット1のソフトウェア構成の一例を示すブロック図である。 First, the software configuration of the autonomous behavior type robot 1 will be described with reference to FIG. FIG. 1 is a block diagram illustrating an example of a software configuration of the autonomous behavior robot 1 according to the embodiment.
 図1において、自律行動型ロボット1は、データ提供装置10およびロボット2を有する。データ提供装置10とロボット2は通信にて接続されて、自律行動型ロボット1(ロボット2を自律行動させるためのシステム)として機能する。ロボット2は、撮影部21および移動機構29を有する移動式ロボットである。データ提供装置10は、第1通信制御部11、点群データ生成部12、空間データ生成部13、可視化データ生成部14、撮影対象認識部15および第2通信制御部16の各機能を有する。第1通信制御部11は、撮影画像取得部111、空間データ提供部112および指示部113の各機能を有する。第2通信制御部16は、可視化データ提供部161、指定取得部162の各機能を有する。本実施形態における自律行動型ロボット1のデータ提供装置10の上記各機能は、データ提供装置10を制御するデータ提供プログラム(ソフトウェア)によって実現される機能モジュールであるものとして説明する。 1, the autonomous behavior type robot 1 includes a data providing device 10 and a robot 2. The data providing device 10 and the robot 2 are connected by communication and function as an autonomous behavior type robot 1 (system for causing the robot 2 to autonomously act). The robot 2 is a mobile robot having a photographing unit 21 and a moving mechanism 29. The data providing apparatus 10 has functions of a first communication control unit 11, a point cloud data generation unit 12, a spatial data generation unit 13, a visualization data generation unit 14, a photographing target recognition unit 15, and a second communication control unit 16. The first communication control unit 11 has functions of a captured image acquisition unit 111, a spatial data providing unit 112, and an instruction unit 113. The second communication control unit 16 has the functions of the visualization data providing unit 161 and the designation acquiring unit 162. Each function of the data providing device 10 of the autonomous behavior robot 1 in the present embodiment will be described as a functional module realized by a data providing program (software) that controls the data providing device 10.
 データ提供装置10は、自律行動型ロボット1の機能の一部を実行することができる装置であって、例えば、ロボット2と物理的に近い場所に設置され、ロボット2と通信し、ロボット2の処理の負荷を分散させるエッジサーバである。なお、本実施形態において自律行動型ロボット1は、データ提供装置10とロボット2とにおいて構成される場合を説明するが、データ提供装置10の機能は、ロボット2の機能に含まれるものであってもよい。また、ロボット2は、空間データに基づき移動可能なロボットであって、空間データに基づき移動範囲が定められるロボットの一態様である。データ提供装置10は、1つの筐体において構成されるものであっても、複数の筐体から構成されるものであってもよい。 The data providing apparatus 10 is an apparatus that can execute a part of the functions of the autonomous behavior robot 1. For example, the data providing apparatus 10 is installed in a place physically close to the robot 2, communicates with the robot 2, and This is an edge server that distributes the processing load. In the present embodiment, the autonomous behavior robot 1 will be described as being configured by the data providing device 10 and the robot 2, but the function of the data providing device 10 is included in the function of the robot 2. Also good. The robot 2 is a robot that can move based on the spatial data, and is a mode of the robot in which the movement range is determined based on the spatial data. The data providing apparatus 10 may be configured with one casing or may be configured with a plurality of casings.
 第1通信制御部11は、ロボット2との通信機能を制御する。ロボット2との通信方式は任意であり、例えば、無線LAN(Local Area Network)、Bluetooth(登録商標)、または赤外線通信等の近距離無線通信、もしくは有線通信等を用いることができる。第1通信制御部11が有する、撮影画像取得部111、空間データ提供部112および指示部113の各機能は、第1通信制御部11において制御される通信機能を用いてロボット2と通信する。 The first communication control unit 11 controls a communication function with the robot 2. The communication method with the robot 2 is arbitrary, and for example, wireless LAN (Local Area Network), Bluetooth (registered trademark), near field communication such as infrared communication, wired communication, or the like can be used. Each function of the captured image acquisition unit 111, the spatial data providing unit 112, and the instruction unit 113 included in the first communication control unit 11 communicates with the robot 2 using a communication function controlled by the first communication control unit 11.
 撮影画像取得部111は、ロボット2の撮影部21により撮影された撮影画像を取得する。撮影部21は、ロボット2に設けられて、ロボット2の移動に伴い撮影範囲を変更することができる。 The captured image acquisition unit 111 acquires a captured image captured by the imaging unit 21 of the robot 2. The imaging unit 21 is provided in the robot 2 and can change the imaging range as the robot 2 moves.
 撮影部21は、1台または複数台のカメラで構成することができる。例えば、撮影部21が2台のカメラで構成されるステレオカメラである場合、撮影部21は撮影対象である空間要素を異なる撮影角度から立体的に撮影することが可能となる。撮影部21は、例えば、CCD(Charge-Coupled Device)センサまたはCMOS(Complementary Metal Oxide Semiconductor)センサ等の撮像素子を用いたビデオカメラである。2台のカメラ(ステレオカメラ)で空間要素を撮影することにより、空間要素の形状を測定することができる。また、撮影部21は、ToF(Time of Flight)技術を用いたカメラであってもよい。ToFカメラにおいては、変調された赤外光を空間要素に照射して、空間要素までの距離を測定することにより、空間要素の形状を測定することができる。また、撮影部21は、ストラクチャードライトを用いるカメラであってもよい。ストラクチャードライトは、ストライプ、または格子状のパターンの光を空間要素に投影するライトである。撮影部21は、ストラクチャードライトと別角度から空間要素を撮影することにより、投影されたパターンの歪みから空間要素の形状を測定することができる。撮影部21は、これらのカメラのいずれか1つ、または2つ以上の組合せであってもよい。 The photographing unit 21 can be composed of one or a plurality of cameras. For example, when the photographing unit 21 is a stereo camera composed of two cameras, the photographing unit 21 can three-dimensionally photograph a spatial element that is a photographing target from different photographing angles. The imaging unit 21 is a video camera using an image sensor such as a CCD (Charge-Coupled Device) sensor or a CMOS (Complementary Metal Oxide Semiconductor) sensor. The shape of the spatial element can be measured by photographing the spatial element with two cameras (stereo cameras). Further, the photographing unit 21 may be a camera using ToF (Time of Flight) technology. In the ToF camera, the shape of the spatial element can be measured by irradiating the spatial element with modulated infrared light and measuring the distance to the spatial element. The photographing unit 21 may be a camera using a structured light. A structured light is a light that projects light in a stripe or lattice pattern onto a spatial element. The imaging unit 21 can measure the shape of the spatial element from the distortion of the projected pattern by imaging the spatial element from a different angle from the structured light. The imaging unit 21 may be any one of these cameras or a combination of two or more.
 また、撮影部21は、ロボット2に取付けられてロボット2の移動に合わせて移動するものである。しかし、撮影部21は、ロボット2とは分離して設置されるものであってもよい。 Further, the photographing unit 21 is attached to the robot 2 and moves in accordance with the movement of the robot 2. However, the photographing unit 21 may be installed separately from the robot 2.
 撮影部21で撮影された撮影画像は、第1通信制御部11に対応する通信方式において撮影画像取得部111に対して提供される。撮影された撮影画像は、ロボット2の記憶部に一時的に記憶されて、撮影画像取得部111は、リアルタイムにまたは所定の通信間隔で一時記憶された撮影画像を取得する。 The captured image captured by the capturing unit 21 is provided to the captured image acquisition unit 111 in a communication method corresponding to the first communication control unit 11. The captured image is temporarily stored in the storage unit of the robot 2, and the captured image acquisition unit 111 acquires the captured image temporarily stored in real time or at a predetermined communication interval.
 空間データ提供部112は、ロボット2に対して空間データ生成部13において生成された空間データを提供する。空間データは、ロボット2が存在している空間において、ロボットが認識している空間要素をデータ化したものである。空間データとは、たとえば、部屋の間取りを示す地図情報と、家具などの空間要素(オブジェクト)の形状と配置を示す空間要素情報の組み合わせとして表現される。ロボット2は、空間データに定められた範囲内において移動することができる。すなわち、空間データはロボット2において移動可能範囲を定めるための地図として機能する。ロボット2は、空間データ提供部112から空間データを提供される。例えば、空間データには、ロボット2が移動できない壁、家具、電化製品、段差等の空間要素の位置データを含めることができる。ロボット2は、提供された空間データに基づき、自身が移動できる場所か否かの判断をすることができる。また、ロボット2は、空間データの中に未生成の範囲が含まれるか否かを認識できるようにしてもよい。未生成の範囲が含まれるか否かは、例えば、空間データの一部に空間要素がない空間が含まれているか否かで判断することができる。 The spatial data providing unit 112 provides the spatial data generated by the spatial data generating unit 13 to the robot 2. Spatial data is data obtained by converting spatial elements recognized by the robot in the space where the robot 2 exists. Spatial data is expressed as, for example, a combination of map information indicating a room layout and spatial element information indicating the shape and arrangement of a spatial element (object) such as furniture. The robot 2 can move within a range determined by the spatial data. That is, the spatial data functions as a map for determining the movable range in the robot 2. The robot 2 is provided with spatial data from the spatial data providing unit 112. For example, the spatial data can include position data of spatial elements such as walls, furniture, electrical appliances, and steps that the robot 2 cannot move. The robot 2 can determine whether or not the robot 2 can move based on the provided spatial data. Further, the robot 2 may be able to recognize whether or not an ungenerated range is included in the spatial data. Whether or not an ungenerated range is included can be determined, for example, based on whether or not a space having no spatial element is included in part of the spatial data.
 指示部113は、ロボット2に対して、空間データ生成部13において生成された空間データに基づく撮影を指示する。空間データ生成部13は、撮影画像取得部111において取得された撮影画像に基づき空間データを作成するため、例えば、室内の空間データを作成する場合、撮影されていない部分については空間データが未作成の部分を含む場合がある。また、撮影画像が不鮮明等であると、作成された空間データにノイズが含まれてしまい空間データに不正確な部分が含まれてしまう場合がある。指示部113は、空間データに未生成の部分がある場合、未生成の部分についての撮影指示をするようにしてもよい。また、指示部113は、空間データが不正確な部分が含まれている場合、不正確な部分についての撮影指示をするようにしてもよい。指示部113は、空間データに基づき、自発的に撮影を指示してもよい。なお、指示部113は、空間データに基づき生成された可視化データ(後述)を確認した利用者からの明示的な指示に基づき、撮影を指示してもよい。利用者は、可視化データに含まれる領域を指定して、ロボット2に対して、撮影を指示することにより、空間を認識して空間データを生成させることができる。 The instruction unit 113 instructs the robot 2 to shoot based on the spatial data generated by the spatial data generation unit 13. Since the spatial data generation unit 13 creates spatial data based on the captured image acquired by the captured image acquisition unit 111, for example, when creating indoor spatial data, spatial data is not generated for a portion that has not been captured. May be included. Further, if the captured image is unclear, noise may be included in the created spatial data, and an inaccurate part may be included in the spatial data. When there is an ungenerated part in the spatial data, the instructing unit 113 may issue an imaging instruction for the ungenerated part. In addition, when the spatial data includes an inaccurate portion, the instructing unit 113 may instruct the imaging for the inaccurate portion. The instruction unit 113 may voluntarily instruct photographing based on the spatial data. Note that the instruction unit 113 may instruct photographing based on an explicit instruction from a user who has confirmed visualization data (described later) generated based on the spatial data. The user can recognize the space and generate the spatial data by designating the area included in the visualization data and instructing the robot 2 to perform photographing.
 The point cloud data generation unit 12 generates three-dimensional point cloud data of spatial elements based on the captured image acquired by the captured image acquisition unit 111. The point cloud data generation unit 12 generates the point cloud data by converting the spatial elements included in the captured image into a set of three-dimensional points in a predetermined space. As described above, the spatial elements are the walls of the room, steps, doors, furniture placed in the room, home appliances, luggage, houseplants, and the like. Since the point cloud data generation unit 12 generates the point cloud data based on captured images of the spatial elements, the point cloud data represents the surface shape of the photographed spatial elements. A captured image is generated by the imaging unit 21 of the robot 2 shooting at a predetermined shooting angle from a predetermined shooting position. Therefore, when the robot 2 photographs a spatial element such as furniture from a frontal position, point cloud data cannot be generated for the unphotographed back side of the furniture; even if there is a space behind the furniture in which the robot 2 could move, the robot 2 cannot recognize it. On the other hand, when the robot 2 moves and photographs the furniture from a side position, point cloud data can be generated for the shape of the back side of the spatial element, and the robot 2 can grasp the space relatively accurately. A sketch of this back-projection from a captured depth image follows.
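A minimal sketch of this back-projection, assuming a depth image and pinhole camera intrinsics that are not specified in the text, is shown below.

```python
# A minimal sketch (not the patented pipeline) of turning a depth image from
# the imaging unit into three-dimensional points. Intrinsics are assumed values.
import numpy as np

def depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=None, cy=None):
    """Back-project a depth image (metres) into camera-frame (x, y, z) points."""
    h, w = depth.shape
    cx = w / 2.0 if cx is None else cx
    cy = h / 2.0 if cy is None else cy
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]          # drop pixels with no depth reading

depth_image = np.full((4, 4), 2.0)               # a toy 4x4 depth map, 2 m away
print(depth_to_point_cloud(depth_image).shape)   # -> (16, 3)
```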
 The spatial data generation unit 13 generates spatial data that determines the movable range of the robot 2 based on the point cloud data of the spatial elements generated by the point cloud data generation unit 12. Since the spatial data is generated based on the point cloud data of the space, the spatial elements included in the spatial data also have three-dimensional coordinate information. The coordinate information may include information on the position of points, length (including height), area, or volume. The robot 2 can determine the range in which it can move based on the position information of the spatial elements included in the generated spatial data. For example, when the robot 2 has a moving mechanism 29 that moves horizontally over the floor, the robot 2 can determine that movement is impossible when a step from the floor, which is a spatial element in the spatial data, is a predetermined height or more (for example, 1 cm or more). On the other hand, when a spatial element such as a table top or a bed has a predetermined height above the floor in the spatial data, the robot 2 determines the range whose height above the floor is a predetermined height or more (for example, 60 cm or more) to be a movable range, taking into account the clearance relative to its own height. In addition, the robot 2 determines, in the spatial data, a range in which the gap between a wall and furniture, both spatial elements, is a predetermined width or more (for example, 40 cm or more) to be a movable range, taking into account the clearance relative to its own width. A sketch of these rules follows.
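The following sketch encodes the example thresholds given above (a step of 1 cm or more, headroom of 60 cm or more, a gap of 40 cm or more); the robot dimensions are assumptions added for illustration.

```python
# A hedged sketch of the movable-range rules described above. The thresholds
# come from the examples in the text; the robot dimensions are assumed.
ROBOT_HEIGHT_M = 0.55
ROBOT_WIDTH_M = 0.35

def traversable(step_height_m, overhead_clearance_m, gap_width_m):
    if step_height_m >= 0.01:                              # step of 1 cm or more
        return False
    if overhead_clearance_m < max(0.60, ROBOT_HEIGHT_M):   # table top, bed, etc.
        return False
    if gap_width_m < max(0.40, ROBOT_WIDTH_M):             # wall-to-furniture gap
        return False
    return True

print(traversable(0.005, 0.70, 0.50))   # True: flat floor, enough clearance
print(traversable(0.020, 0.70, 0.50))   # False: step too high
```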
 空間データ生成部13は、空間における所定のエリアについて属性情報を設定してもよい。属性情報とは、所定のエリアについてロボット2の移動条件を定めた情報である。移動条件とは、例えば、ロボット2が移動可能な空間要素とのクリアランスを定めた条件である。例えば、ロボット2が移動可能な通常の移動条件が、クリアランスが30cm以上である場合、所定のエリアについてのクリアランスが5cm以上とした属性情報を設定することができる。また、属性情報において設定する移動条件は、ロボットの移動を制限する情報を設定してもよい。移動の制限とは、例えば、移動速度の制限、または進入の禁止等である。例えば、クリアランスが小さいエリアや人が存在しているエリアにおいて、ロボット2の移動速度を落とした属性情報を設定してもよい。また、属性情報において設定する移動条件は、エリアの床材によって定められるものであってもよい。例えば、属性情報は、床がクッションフロア、フローリング、畳、またはカーペットにおいて移動機構29の動作(走行速度または走行手段等)の変更を設定するものであってもよい。また、属性情報には、ロボット2が移動して充電できる充電スポット、ロボット2の姿勢が不安定になるため移動が制限される段差またはカーペットの端等における以上条件を設定できるようにしてもよい。なお、属性情報を設定したエリアは、後述する可視化データにおいて表示方法を変更する等、利用者が把握できるようにしてもよい。 The spatial data generation unit 13 may set attribute information for a predetermined area in the space. The attribute information is information that defines the movement condition of the robot 2 for a predetermined area. The movement condition is, for example, a condition that defines a clearance from a space element that the robot 2 can move. For example, when the normal movement condition in which the robot 2 can move is that the clearance is 30 cm or more, attribute information in which the clearance for a predetermined area is 5 cm or more can be set. In addition, as the movement condition set in the attribute information, information for restricting the movement of the robot may be set. The movement restriction is, for example, movement speed restriction or entry prohibition. For example, attribute information that reduces the moving speed of the robot 2 may be set in an area where the clearance is small or an area where people exist. Further, the movement condition set in the attribute information may be determined by the floor material of the area. For example, the attribute information may be set to change the operation (traveling speed or traveling means) of the moving mechanism 29 when the floor is a cushion floor, flooring, tatami mat, or carpet. In the attribute information, the above conditions may be set at the charging spot where the robot 2 can move and charge, the step where the movement of the robot 2 is unstable and the movement is restricted, or the end of the carpet. . Note that the area in which the attribute information is set may be understood by the user, for example, by changing the display method in the visualization data described later.
The spatial data generation unit 13 applies, for example, a Hough transform to the point cloud data generated by the point cloud data generation unit 12, extracts figures such as straight lines and curves common to the point cloud data, and generates spatial data from the contours of the spatial elements represented by the extracted figures. The Hough transform is a coordinate transformation method that, treating the point cloud data as feature points, extracts the figure passing through the largest number of feature points. Because point cloud data expresses the shapes of spatial elements such as furniture placed in a room as a cloud of points, it can be difficult for the user to tell what a spatial element represented by the point cloud data is (for example, to recognize it as a table, a chair, or a wall). By applying the Hough transform to the point cloud data, the spatial data generation unit 13 can express the contours of furniture and the like, making it easier for the user to identify spatial elements. The spatial data generation unit 13 may also generate spatial data by converting the point cloud data generated by the point cloud data generation unit 12 into the basic shape of a spatial element (for example, a table, a chair, or a wall) recognized by image recognition. Once image recognition identifies a spatial element such as a table as a table, the shape of the table can be predicted accurately from partial point cloud data of the element (for example, the point cloud obtained when the table is viewed from the front). By combining point cloud data with image recognition, the spatial data generation unit 13 can generate spatial data that accurately captures the spatial elements.
The basic shape may be a predetermined shape associated with a spatial element (for example, a square, a circle, a triangle, a rectangular parallelepiped, or a cone). The spatial data generation unit 13 may generate spatial data by deforming the basic shape, for example by enlarging or reducing it. In the spatial data, the shape of a spatial element may be expressed by combining multiple basic-shape objects or multiple types of basic-shape objects. The visualization data generation unit 14 described later may also generate visualization data expressing the features of a spatial element by compositing a texture associated with the spatial element onto the spatial data after such processing as basic-shape conversion or deformation.
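The paragraphs above describe extracting contours from the point cloud with a Hough transform. A minimal sketch of that idea, assuming the point cloud has already been projected onto the floor plane and using OpenCV's probabilistic Hough transform; the use of OpenCV and all parameter values are assumptions, not part of the patent.

```python
import numpy as np
import cv2  # OpenCV is an assumed dependency; the patent does not name a library

def extract_contour_segments(points_xy: np.ndarray, resolution_m: float = 0.02):
    """Rasterise 2D feature points (N x 2, metres) and extract line segments with
    a probabilistic Hough transform, approximating contours of walls and furniture."""
    mins = points_xy.min(axis=0)
    idx = np.floor((points_xy - mins) / resolution_m).astype(int)
    h, w = idx[:, 1].max() + 1, idx[:, 0].max() + 1
    img = np.zeros((h, w), dtype=np.uint8)
    img[idx[:, 1], idx[:, 0]] = 255  # rows = y, cols = x

    segs = cv2.HoughLinesP(img, rho=1, theta=np.pi / 180, threshold=30,
                           minLineLength=20, maxLineGap=5)
    if segs is None:
        return []
    # Map pixel endpoints [x1, y1, x2, y2] back to metric coordinates.
    return [mins + seg[0].reshape(2, 2) * resolution_m for seg in segs]
```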
The spatial data generation unit 13 generates spatial data based on the point cloud data contained within a predetermined range from the positions to which the robot 2 has moved. The predetermined range includes the positions the robot 2 has actually visited and may be, for example, a range within a distance such as 30 cm from those positions. Because the point cloud data is generated from captured images taken by the imaging unit 21 of the robot 2, a captured image may include spatial elements located far from the robot 2. When a spatial element is far from the imaging unit 21, there may be portions that have not been captured, or ranges into which the robot 2 cannot actually move because of obstacles that have not been captured. When a captured image includes a spatial element far from the imaging unit 21, such as a hallway, the spatial element extracted from the feature points may be distorted. In addition, when the shooting distance is large, the spatial element occupies only a small part of the captured image, so the accuracy of the point cloud data may be low. The spatial data generation unit 13 may therefore generate spatial data that does not include low-accuracy or distorted spatial elements by ignoring feature points that are farther away than a predetermined distance. By deleting point cloud data outside the predetermined range from the positions to which the robot 2 has moved, the spatial data generation unit 13 prevents the occurrence of isolated "enclaves" where no data actually exists, and can generate highly accurate spatial data that excludes ranges into which the robot 2 cannot move. This also prevents enclave-like drawing in the visualization data generated from the spatial data, improving its visibility.
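A small sketch of the distance filtering described above, assuming the robot's visited positions are available as 2D coordinates. The 30 cm default mirrors the example in the text; everything else is illustrative.

```python
import numpy as np

def filter_points_near_path(points_xyz: np.ndarray,
                            visited_xy: np.ndarray,
                            max_dist_m: float = 0.30) -> np.ndarray:
    """Keep only points whose horizontal distance to the nearest visited
    robot position is within max_dist_m (e.g. 30 cm)."""
    # Pairwise horizontal distances between points (N x 3) and path positions (M x 2).
    d = np.linalg.norm(points_xyz[:, None, :2] - visited_xy[None, :, :], axis=2)
    return points_xyz[d.min(axis=1) <= max_dist_m]
```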
Based on the spatial data generated by the spatial data generation unit 13, the visualization data generation unit 14 generates visualization data that visualizes the spatial elements contained in the space so that a person can identify them intuitively. As described above, the spatial data is data containing the spatial elements recognized by the autonomous behavior robot 1, whereas the visualization data is data that allows the user to visually confirm those recognized spatial elements. The spatial data may contain misrecognized spatial elements. Visualizing the spatial data makes it easier for a person to confirm the recognition state of the spatial elements in the autonomous behavior robot 1 (for example, whether any misrecognition has occurred).
Visualization data is data that can be displayed on a display device. The visualization data is in effect a floor plan: spatial elements recognized as a table, a chair, a sofa, and so on are placed within an area enclosed by spatial elements recognized as walls. The visualization data generation unit 14 renders the shapes of furniture and the like, formed by the figures extracted through the Hough transform, as visualization data expressed for example in RGB data. The visualization data generation unit 14 generates visualization data in which the way a plane is drawn is changed according to the three-dimensional orientation of that plane of the spatial element. The three-dimensional orientation of a plane is, for example, the normal direction of the plane formed by the figures obtained by applying the Hough transform to the point cloud data generated by the point cloud data generation unit 12. The visualization data generation unit 14 changes the drawing method of the plane according to this normal direction. The drawing method is, for example, a color attribute such as hue, brightness, or saturation given to the plane, a pattern given to the plane, or a texture. For example, when the normal of a plane is vertical (the plane is horizontal), the visualization data generation unit 14 draws the plane in a bright color with high brightness; conversely, when the normal of a plane is horizontal (the plane is vertical), it draws the plane in a dark color with low brightness. Changing the drawing method per plane makes it possible to represent the shapes of furniture and the like three-dimensionally, so that the user can confirm those shapes more easily. The visualization data may also include coordinate information within the visualization data ("visualization coordinate information") that is associated with the coordinate information of each spatial element included in the spatial data. Because the visualization coordinate information is associated with the coordinate information, a point in the visualization coordinate information corresponds to a point in the actual space, and a surface in the visualization coordinate information corresponds to a surface in the actual space.
Therefore, when the user specifies the position of a point in the visualization data, the corresponding position of that point in the actual room can be identified. A conversion function may also be prepared so that the coordinate system of the visualization data and the coordinate system of the spatial data can be converted into each other. Of course, the coordinate system of the visualization data and the coordinate system of the actual space may likewise be made mutually convertible.
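A minimal sketch of the kind of conversion function mentioned above, assuming the mapping between visualization (pixel) coordinates and spatial (map) coordinates is a simple 2D similarity transform (scale, rotation, translation). The class name, parameters, and example values are illustrative, not taken from the patent.

```python
import numpy as np

class CoordinateMapper:
    """Converts between visualization (pixel) coordinates and spatial (map, metres)
    coordinates via a similarity transform: map = scale * R @ pix + offset."""

    def __init__(self, scale: float, rotation_rad: float, offset_xy: np.ndarray):
        c, s = np.cos(rotation_rad), np.sin(rotation_rad)
        self._rot = np.array([[c, -s], [s, c]])
        self._scale = scale
        self._offset = np.asarray(offset_xy, dtype=float)

    def to_spatial(self, pix_xy: np.ndarray) -> np.ndarray:
        return self._scale * (self._rot @ np.asarray(pix_xy, dtype=float)) + self._offset

    def to_visualization(self, map_xy: np.ndarray) -> np.ndarray:
        return self._rot.T @ ((np.asarray(map_xy, dtype=float) - self._offset) / self._scale)

# Example: a tap at pixel (120, 45) identifies the corresponding point in the room.
mapper = CoordinateMapper(scale=0.02, rotation_rad=0.0, offset_xy=np.array([-3.0, -2.5]))
print(mapper.to_spatial(np.array([120, 45])))  # position in metres
```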
The visualization data generation unit 14 generates the visualization data as three-dimensional (3D) data. It may also generate the visualization data as planar (2D) data. Generating the visualization data in 3D makes it easier for the user to confirm the shapes of furniture and the like. The visualization data generation unit 14 may generate 3D visualization data when the spatial data generation unit 13 has generated enough data to do so. The visualization data generation unit 14 may also generate the 3D visualization data from a 3D viewpoint position (viewpoint height, viewpoint elevation angle, etc.) designated by the user. Making the viewpoint position specifiable makes it easier for the user to confirm the shapes of furniture and the like. For the walls or ceiling of a room, the visualization data generation unit 14 may generate visualization data in which only the far walls are colored while the near walls and the ceiling are rendered transparent (not colored). Making the near walls transparent makes it easier for the user to confirm the shapes of furniture and the like placed beyond them (inside the room).
The visualization data generation unit 14 generates visualization data to which color attributes corresponding to the captured image acquired by the captured image acquisition unit 111 are assigned. For example, when the captured image contains wood-grain furniture and the color of the wood grain (for example, brown) is detected, the visualization data generation unit 14 generates visualization data in which a color close to the detected color is assigned to the extracted figure of that furniture. Assigning color attributes based on the captured image makes it easier for the user to confirm the type of furniture or other element.
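A minimal sketch of picking a color attribute from the captured image for one spatial element; the function name and the use of the median as the representative color are assumptions.

```python
import numpy as np

def element_color(image_rgb: np.ndarray, mask: np.ndarray) -> tuple:
    """Return a representative RGB color (median) for the pixels of one spatial
    element. image_rgb: H x W x 3 uint8 image; mask: H x W boolean region."""
    pixels = image_rgb[mask]
    return tuple(int(v) for v in np.median(pixels, axis=0))

# The returned color (e.g. a brown for wood-grain furniture) would then be assigned
# as the color attribute of the element's figure in the visualization data.
```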
The visualization data generation unit 14 generates visualization data in which the drawing method differs between fixed objects, which do not move, and moving objects, which do. Fixed objects are, for example, the walls of a room, steps, and fixed furniture. Moving objects are, for example, chairs, trash cans, and furniture with casters. Moving objects may also include temporary objects placed on the floor for a short time, such as luggage or bags. The drawing method is, for example, a color attribute such as hue, brightness, or saturation given to a plane, a pattern given to a plane, or a texture.
The classification into fixed, moving, and temporary objects can be identified from how long an object remains at its location. For example, the spatial data generation unit 13 identifies whether a spatial element is a fixed, moving, or temporary object based on changes over time in the point cloud data generated by the point cloud data generation unit 12, and generates the spatial data accordingly. For example, from the difference between the spatial data generated at a first time and the spatial data generated at a second time, the spatial data generation unit 13 determines that a spatial element is a fixed object when the element has not changed. It may determine that a spatial element is a moving object when its position has changed between the two sets of spatial data, and that a spatial element is a temporary object when it has disappeared or newly appeared. The visualization data generation unit 14 changes the drawing method based on the classification identified by the spatial data generation unit 13. Changing the drawing method means, for example, color coding, adding hatching, or adding a predetermined mark. For example, fixed objects may be displayed in black, moving objects in blue, and temporary objects in yellow. The spatial data generation unit 13 generates spatial data with the fixed, moving, or temporary classification identified, and the visualization data generation unit 14 may generate visualization data whose drawing method is changed based on that classification. The visualization data generation unit 14 may also generate visualization data whose drawing method is changed for spatial elements recognized by image recognition.
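A compact sketch of the temporal classification just described, comparing an element's observed position at two times; the class names, tolerance, and color choices merely echo the examples in the text and are otherwise assumptions.

```python
from enum import Enum, auto

class ElementClass(Enum):
    FIXED = auto()      # present and unchanged in both snapshots
    MOVING = auto()     # present in both snapshots but at a different position
    TEMPORARY = auto()  # present in only one of the two snapshots

def classify_element(pos_t1, pos_t2, tolerance_m: float = 0.05) -> ElementClass:
    """Classify one spatial element from its observed position at two times.
    pos_t1 / pos_t2 are (x, y) tuples, or None when the element was not observed."""
    if pos_t1 is None or pos_t2 is None:
        return ElementClass.TEMPORARY
    dx, dy = pos_t2[0] - pos_t1[0], pos_t2[1] - pos_t1[1]
    moved = (dx * dx + dy * dy) ** 0.5 > tolerance_m
    return ElementClass.MOVING if moved else ElementClass.FIXED

# Drawing style per class (illustrative colors taken from the description above).
STYLE = {ElementClass.FIXED: "black", ElementClass.MOVING: "blue",
         ElementClass.TEMPORARY: "yellow"}
```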
The visualization data generation unit 14 can generate visualization data for a space divided into multiple areas. For example, the visualization data generation unit 14 generates visualization data for each space partitioned by walls, such as a living room, a bedroom, a dining room, or a hallway, treating each as one room. Generating visualization data per room makes it possible, for example, to generate spatial data or visualization data separately for each room, which simplifies their generation. It also becomes possible to create spatial data only for the areas in which the robot 2 may move. The visualization data providing unit 161 provides visualization data from which the user can select an area. For example, the visualization data providing unit 161 may enlarge the visualization data of the area selected by the user, or provide detailed visualization data of the selected area.
The imaging target recognition unit 15 performs image recognition of spatial elements based on the captured image acquired by the captured image acquisition unit 111. Spatial elements can be recognized, for example, by using an image recognition engine that determines what a spatial element is based on image recognition results accumulated through machine learning. A spatial element can be recognized from, for example, its shape, color, or pattern. The imaging target recognition unit 15 may also recognize spatial elements by using an image recognition service provided by a cloud server (not shown). The visualization data generation unit 14 generates visualization data whose drawing method is changed according to the spatial element recognized by the imaging target recognition unit 15. For example, when the recognized spatial element is a sofa, the visualization data generation unit 14 generates visualization data in which a texture with a cloth-like appearance is applied to the element. When the recognized spatial element is a wall, the visualization data generation unit 14 may generate visualization data with a wallpaper color attribute (for example, white). Through such visualization processing, the user can intuitively grasp how the robot 2 recognizes the space.
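An illustrative mapping from a recognized label to a drawing style, following the sofa and wall examples above; the labels, texture names, and RGB values are assumptions.

```python
# Drawing style per recognized element label (illustrative values only).
TEXTURE_BY_LABEL = {
    "sofa":  {"texture": "fabric", "color": (180, 140, 100)},
    "table": {"texture": "wood",   "color": (150, 100, 60)},
    "wall":  {"texture": "flat",   "color": (245, 245, 245)},  # wallpaper white
}

def drawing_style(label: str) -> dict:
    """Return the drawing style for a recognized spatial element, with a neutral
    fallback for labels the mapping does not know."""
    return TEXTURE_BY_LABEL.get(label, {"texture": "flat", "color": (128, 128, 128)})
```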
The second communication control unit 16 controls communication with the user terminal 3 owned by the user. The user terminal 3 is, for example, a smartphone, a tablet PC, a notebook PC, or a desktop PC. Any communication method may be used with the user terminal 3; for example, wireless LAN, short-range wireless communication such as Bluetooth (registered trademark) or infrared communication, or wired communication can be used. The visualization data providing unit 161 and the designation acquisition unit 162 of the second communication control unit 16 each communicate with the user terminal 3 using the communication functions controlled by the second communication control unit 16.
The visualization data providing unit 161 provides the visualization data generated by the visualization data generation unit 14 to the user terminal 3. The visualization data providing unit 161 is, for example, a Web server and provides the visualization data to the browser of the user terminal 3 as a Web page. The visualization data providing unit 161 may provide the visualization data to multiple user terminals 3. By viewing the visualization data displayed on the user terminal 3, the user can confirm the range in which the robot 2 can move as a 2D or 3D display. In the visualization data, the shapes of furniture and the like are drawn using the predetermined drawing methods. By operating the user terminal 3, the user can, for example, switch between 2D and 3D display, zoom the visualization data in or out, or move the viewpoint in the 3D display.
By viewing the visualization data displayed on the user terminal 3, the user can confirm the generation state of the spatial data and the attribute information of each area. The user can designate, via the user terminal 3, a region of the visualization data for which spatial data has not been generated and instruct the creation of spatial data for it. In addition, if the user finds a region where the spatial data appears inaccurate, for example because the shape of a spatial element such as furniture looks unnatural, the user can designate that region via the user terminal 3 and instruct regeneration of the spatial data. As described above, because the visualization coordinate information in the visualization data is associated with the coordinate information of the spatial data, the region of the visualization data designated by the user for regeneration can be uniquely identified as a region of the spatial data. Based on the regenerated spatial data, the visualization data generation unit 14 regenerates the visualization data, which is then provided by the visualization data providing unit 161. For example, the visualization data can be regenerated from the spatial data by converting the spatial data into the corresponding visualization data according to a specified rule. The generation state of the spatial data may nevertheless remain unchanged, for example when a spatial element is still misrecognized in the regenerated visualization data. In that case, the user may instruct the generation of spatial data by instructing, via the user terminal 3, a change to the operation parameters of the robot 2. The operation parameters are, for example, the shooting conditions of the imaging unit 21 of the robot 2 (such as exposure or shutter speed), the sensitivity of a sensor (not shown), and the clearance condition under which the robot 2 is allowed to move. The operation parameters may be included in the spatial data, for example as attribute information of an area.
The visualization data generation unit 14 generates visualization data that includes, for example, the display of a button for instructing the creation (including "re-creation") of spatial data. When the user operates the displayed button, the user terminal 3 can transmit an instruction to create spatial data to the autonomous behavior robot 1. The spatial data creation instruction transmitted from the user terminal 3 is acquired by the designation acquisition unit 162.
The designation acquisition unit 162 acquires an instruction to create spatial data for the region designated by the user based on the visualization data provided by the visualization data providing unit 161. The designation acquisition unit 162 may also acquire an instruction to set (including change) the attribute information of an area. The designation acquisition unit 162 further acquires the position of the region and the direction from which the robot should approach the region, that is, the direction in which it should shoot. The creation instruction can be acquired, for example, through operations on the Web page provided by the visualization data providing unit 161. This allows the user to grasp how the robot 2 recognizes the space and, depending on the recognition state, to instruct the robot 2 to redo the recognition process.
The instruction unit 113 instructs the robot 2 to shoot in the region for which the creation of spatial data has been instructed. The shooting instruction for that region may include, for example, shooting conditions such as the coordinate position of the robot 2 (imaging unit 21), the shooting direction of the imaging unit 21, and the resolution. When the spatial data whose creation was instructed relates to an ungenerated region, the spatial data generation unit 13 adds the newly created spatial data to the existing spatial data; when it relates to re-creation, the spatial data generation unit 13 generates spatial data that updates the existing spatial data.
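A sketch of how a shooting instruction and the add-or-update behaviour described above could look, under the assumption that spatial data is held as a grid-cell dictionary; the message fields, the grid representation, and all names are illustrative, not the patent's data formats.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ShootingInstruction:
    """One shooting instruction from the instruction unit to the robot (illustrative)."""
    target_region: Tuple[float, float, float, float]      # (x_min, y_min, x_max, y_max) in metres
    shoot_from_xy: Optional[Tuple[float, float]] = None   # position to shoot from, if specified
    direction_rad: Optional[float] = None                 # shooting direction, if specified
    resolution: Tuple[int, int] = (1280, 720)             # requested image resolution

def merge_spatial_data(existing: dict, new: dict, is_recreation: bool) -> dict:
    """Add spatial data for an ungenerated region, or update the existing data when
    the instruction is a re-creation. Spatial data is modelled here as a dict keyed
    by (grid_x, grid_y) cells; this representation is an assumption."""
    merged = dict(existing)
    if is_recreation:
        merged.update(new)                     # overwrite the cells covered by the new scan
    else:
        for cell, value in new.items():
            merged.setdefault(cell, value)     # only fill cells that did not exist before
    return merged
```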
As described above, FIG. 1 illustrates the case where the autonomous behavior robot 1 is composed of the data providing device 10 and the robot 2, but the functions of the data providing device 10 may be included in the functions of the robot 2. For example, the robot 2 may include all the functions of the data providing device 10. The data providing device 10 may also, for example, temporarily take over functions when the processing capacity of the robot 2 is insufficient.
In this embodiment, "acquisition" may mean that the acquiring entity acquires something actively or that it acquires something passively. For example, the designation acquisition unit 162 may acquire the spatial data creation instruction by receiving it when the user transmits it from the user terminal 3, or may acquire it by reading, from a storage area (not shown), a spatial data creation instruction that the user has stored there.
The functional units of the autonomous behavior robot 1, namely the first communication control unit 11, the point cloud data generation unit 12, the spatial data generation unit 13, the visualization data generation unit 14, the imaging target recognition unit 15, the second communication control unit 16, the captured image acquisition unit 111, the spatial data providing unit 112, the instruction unit 113, the visualization data providing unit 161, and the designation acquisition unit 162, are examples of the functions of the autonomous behavior robot 1 in this embodiment and do not limit its functions. For example, the autonomous behavior robot 1 need not have all of the above functional units and may have only some of them. The autonomous behavior robot 1 may also have functional units other than those above.
The functional units of the autonomous behavior robot 1 have been described above as being realized by software. However, at least one of the functions of the autonomous behavior robot 1 may be realized by hardware.
Any of the functions of the autonomous behavior robot 1 may be implemented by dividing one function into multiple functions, and any two or more of the functions may be combined into one function. That is, FIG. 1 represents the functions of the autonomous behavior robot 1 as functional blocks; it does not indicate, for example, that each function is implemented as a separate program file.
The autonomous behavior robot 1 may be a device realized in a single housing or a system realized by multiple devices connected via a network or the like. For example, the autonomous behavior robot 1 may realize some or all of its functions through a virtual device, such as a cloud service provided by a cloud computing system. That is, the autonomous behavior robot 1 may realize at least one of the above functions in another device. The autonomous behavior robot 1 may also be a general-purpose computer such as a tablet PC, or a dedicated device with limited functions such as a car navigation device.
The autonomous behavior robot 1 may also realize some or all of its functions in the robot 2 or the user terminal 3.
Next, the hardware configuration of the autonomous behavior robot 1 will be described with reference to FIG. 2. FIG. 2 is a block diagram showing an example of the hardware configuration of the autonomous behavior robot 1 in the embodiment.
The autonomous behavior robot 1 has a CPU (Central Processing Unit) 101, a RAM (Random Access Memory) 102, a ROM (Read Only Memory) 103, a touch panel 104, and a communication I/F (Interface) 105. The autonomous behavior robot 1 is a device that executes the autonomous behavior robot control program described with reference to FIG. 1.
The CPU 101 controls the autonomous behavior robot 1 by executing the autonomous behavior robot control program stored in the RAM 102 or the ROM 103. The autonomous behavior robot control program is obtained, for example, from a recording medium on which it is recorded or from a program distribution server via a network, installed in the ROM 103, and read and executed by the CPU 101.
The touch panel 104 has an operation input function and a display function (operation display function). The touch panel 104 enables the user of the autonomous behavior robot 1 to perform operation input with a fingertip, a touch pen, or the like. Although this embodiment describes the autonomous behavior robot 1 as using the touch panel 104, which has an operation display function, the autonomous behavior robot 1 may instead have a display device with a display function and an operation input device with an operation input function as separate devices. In that case, the display screen of the touch panel 104 can be implemented as the display screen of the display device, and operation of the touch panel 104 as operation of the operation input device. The touch panel 104 may also be realized in various forms such as a head-mounted, glasses-type, or wristwatch-type display.
The communication I/F 105 is an interface for communication. The communication I/F 105 performs, for example, wireless LAN or wired LAN communication, or short-range wireless communication such as infrared communication. Although FIG. 2 shows only the communication I/F 105 as the communication interface, the autonomous behavior robot 1 may have a separate communication interface for each of multiple communication methods.
Next, the operation of the robot control program related to providing visualization data will be described with reference to FIG. 3. FIG. 3 is a flowchart showing an example of the operation of the robot control program in the embodiment. In the following description of the flowchart, the operations are described as being executed by the autonomous behavior robot 1, but each operation is executed by the corresponding function of the autonomous behavior robot 1 described above.
In FIG. 3, the autonomous behavior robot 1 determines whether a captured image has been acquired (step S11). Whether a captured image has been acquired can be determined by whether the captured image acquisition unit 111 has acquired a captured image from the robot 2. The determination is made per processing unit of captured images. For example, when the captured image is a moving image, the moving image is transmitted continuously from the robot 2, so whether a captured image has been acquired can be determined by whether the number of frames or the amount of data of the acquired moving image has reached a predetermined value. The captured image may be acquired with the mobile robot taking the initiative in transmitting it, or with the captured image acquisition unit 111 taking the initiative in retrieving it from the mobile robot. When it is determined that no captured image has been acquired (step S11: NO), the autonomous behavior robot 1 repeats the process of step S11 and waits for a captured image to be acquired.
On the other hand, when it is determined that a captured image has been acquired (step S11: YES), the autonomous behavior robot 1 generates point cloud data (step S12). The point cloud data can be generated by the point cloud data generation unit 12, for example, by detecting as feature points the points in the captured image whose luminance change is larger than a predetermined luminance change threshold and giving three-dimensional coordinates to the detected feature points. The feature points may be detected, for example, by differentiating the captured image and detecting portions where the gradation change is larger than a predetermined gradation change threshold. Coordinates may be assigned to a feature point by detecting the same feature point in images captured from different shooting angles. The determination in step S11 of whether captured images have been acquired can then be made according to whether captured images shot from multiple directions have been acquired.
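A small sketch of the gradient-threshold feature detection just described, operating on a single luminance image; the threshold value and function name are assumptions.

```python
import numpy as np

def detect_feature_points(gray: np.ndarray, grad_threshold: float = 30.0) -> np.ndarray:
    """Return (row, col) indices of pixels whose local luminance change exceeds a
    threshold, a simple stand-in for the feature detection described above.
    gray is a 2D uint8 or float luminance image."""
    g = gray.astype(float)
    # Finite-difference gradients along the two image axes.
    gy, gx = np.gradient(g)
    magnitude = np.hypot(gx, gy)
    return np.argwhere(magnitude > grad_threshold)
```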
After executing the process of step S12, the autonomous behavior robot 1 generates spatial data (step S13). The spatial data can be generated by the spatial data generation unit 13, for example, by applying a Hough transform to the point cloud data.
After executing the process of step S13, the autonomous behavior robot 1 provides the generated spatial data to the robot 2 (step S14). The spatial data may be provided to the robot 2 each time spatial data is generated, as shown in FIG. 3, or asynchronously with the processing shown in steps S11 to S18. The robot 2 provided with the spatial data can grasp its movable range based on the spatial data.
After executing the process of step S14, the autonomous behavior robot 1 determines whether to recognize spatial elements (step S15). Whether to recognize spatial elements can be determined, for example, by configuring the imaging target recognition unit 15 as to whether spatial elements are to be recognized. Even when it is determined that spatial elements are to be recognized, it may be determined that they are not recognized if the recognition fails.
When it is determined that spatial elements are to be recognized (step S15: YES), the autonomous behavior robot 1 generates first visualization data (step S16). The first visualization data can be generated by the visualization data generation unit 14. The first visualization data is visualization data generated after the imaging target recognition unit 15 has recognized the spatial elements. For example, when the imaging target recognition unit 15 determines that a spatial element is a table, the visualization data generation unit 14 can generate visualization data treating the top of the table as flat even when the top has not been captured and no point cloud data exists for it. Likewise, when a spatial element is determined to be a wall, the visualization data generation unit 14 can generate visualization data treating the uncaptured portions as a flat surface as well.
When it is determined that spatial elements are not to be recognized (step S15: NO), the autonomous behavior robot 1 generates second visualization data (step S17). The second visualization data can be generated by the visualization data generation unit 14. The second visualization data is visualization data generated without the imaging target recognition unit 15 recognizing the spatial elements, that is, generated based only on the point cloud data and spatial data produced from the captured images. By not performing the spatial element recognition process, the autonomous behavior robot 1 can reduce its processing load.
After executing the process of step S16 or step S17, the autonomous behavior robot 1 provides the visualization data (step S18). The visualization data is provided by the visualization data providing unit 161 delivering the visualization data generated by the visualization data generation unit 14 to the user terminal 3. The autonomous behavior robot 1 may also generate and provide the visualization data in response to a request from the user terminal 3, for example. After executing the process of step S18, the autonomous behavior robot 1 ends the operation shown in the flowchart.
Next, the operation of the robot control program related to spatial data creation instructions will be described with reference to FIG. 4. FIG. 4 is a flowchart showing another example of the operation of the robot control program in the embodiment.
In FIG. 4, the autonomous behavior robot 1 determines whether a creation instruction has been acquired (step S21). Whether a creation instruction has been acquired can be determined by whether the designation acquisition unit 162 has acquired, from the user terminal 3, a creation instruction instructing the creation of spatial data. The creation instruction is acquired, for example, when the user performs an operation on the user terminal 3 to reacquire spatial data. The creation instruction may also be acquired when the user performs an operation on the user terminal 3 to acquire spatial data for an ungenerated region. When it is determined that no creation instruction has been acquired (step S21: NO), the autonomous behavior robot 1 repeats the process of step S21 and waits for a creation instruction to be acquired.
On the other hand, when it is determined that a creation instruction has been acquired (step S21: YES), the autonomous behavior robot 1 issues a shooting instruction for the region for which the creation of spatial data has been instructed (step S22). The shooting instruction can be issued by the instruction unit 113 instructing the robot 2 to shoot. The shooting instruction can include the region to be shot by the imaging unit 21 of the robot 2 and the shooting position. When the robot 2 that has received the shooting instruction provides the captured images to the autonomous behavior robot 1, the operation shown in FIG. 3 is executed, and spatial data and visualization data for the instructed region are created. After executing the process of step S22, the autonomous behavior robot 1 ends the operation shown in the flowchart.
FIG. 4 shows the operation when a creation instruction, which is an explicit instruction from the user, is acquired from the user terminal 3. However, the autonomous behavior robot 1 may also issue a shooting instruction without an explicit instruction from the user, for example when there is ungenerated spatial data or when the generation of highly accurate spatial data has failed.
The processing in each step of the operation of the robot control program (robot control method) described in this embodiment is not limited to the execution order described.
Next, the display on the user terminal 3 based on the visualization data provided by the autonomous behavior robot 1 will be described with reference to FIGS. 5 to 10. FIGS. 5 to 10 are diagrams showing examples of the display on the user terminal 3 in the embodiment; they are display examples in which a Web page provided as visualization data by the visualization data providing unit 161 is displayed on the touch panel of a smartphone serving as an example of the user terminal 3.
In FIG. 5, the user terminal 3 displays 3D visualization data generated by the visualization data generation unit 14 based on captured images of a living room taken by the robot 2. The visualization data generation unit 14 rasterizes and draws the figures of furniture and the like generated from the point cloud data. In FIG. 5, the feature points of the point cloud data are Hough transformed to extract line segments (straight lines or curves), and spatial data expressing the shapes of furniture and the like as 3D line segments is generated. The figure shows the case where these shapes are expressed by straight lines in the x, y, and z directions. For the planes formed by combinations of the generated straight lines in the x, y, and z directions, that is, the x-y plane whose normal direction is vertical and the x-z and y-z planes whose normal directions are horizontal, the brightness of the texture applied to each plane is varied. For example, low brightness (a darker color) is applied to the x-y planes, such as the top of a table or the worktop of a system kitchen, medium brightness to the x-z planes, and high brightness to the y-z planes. Varying the brightness used to represent the planes of furniture and the like makes it easier for the user to recognize their shapes.
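A small sketch of choosing a brightness from a plane's normal direction as in the FIG. 5 description (x-y planes darkest, x-z planes medium, y-z planes brightest); the thresholds and brightness values are assumptions.

```python
import numpy as np

def plane_brightness(normal: np.ndarray) -> float:
    """Pick a texture brightness (0..1) from a plane normal, following the scheme
    in the FIG. 5 description: x-y planes (normal along z, e.g. table tops) darkest,
    x-z planes medium, y-z planes brightest."""
    nx, ny, nz = np.abs(normal) / np.linalg.norm(normal)
    if nz >= max(nx, ny):   # x-y plane (table tops, worktops)
        return 0.3
    if ny >= nx:            # x-z plane
        return 0.6
    return 0.9              # y-z plane

print(plane_brightness(np.array([0.0, 0.0, 1.0])))  # 0.3 for a horizontal surface
```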
In the figure, the point cloud data for walls, the ceiling, and the like above a predetermined height from the floor has been deleted when generating the spatial data. Because the robot 2 moves on the floor, spatial data for the ceiling and the like is unnecessary. By not generating spatial data for ranges farther than a predetermined distance from the robot 2, the amount of spatial data can be reduced and the visualization data can be made easier to view.
A "Reacquire" button 31 is displayed on the user terminal 3. When the user presses (operates) the reacquire button 31, a creation instruction is transmitted to the autonomous behavior robot 1. When the creation instruction is transmitted, the autonomous behavior robot 1 provides a shooting instruction to the robot 2, recreates the spatial data and visualization data based on the reacquired captured images, and provides the visualization data to the user terminal 3 again. That is, pressing the reacquire button 31 updates the visualization data of the living room displayed on the user terminal 3. After receiving the creation instruction, the autonomous behavior robot 1 may have the robot 2 immediately take an action for shooting and recreate the spatial data and visualization data at an early stage. Alternatively, the autonomous behavior robot 1 may, without controlling the movement of the robot 2, let the robot 2 move according to parameters that influence its behavior, such as emotion parameters (parameters different from the attribute information), provide a shooting instruction according to the position of the robot 2, and recreate the spatial data and visualization data based on the reacquired captured images.
As described above, the robot 2 may act autonomously based on internal parameters (for example, emotion parameters). For example, the robot 2 may increase an emotion parameter representing "boredom" when it is determined that the robot 2 has stayed in the same place for a certain period of time or longer. The robot 2 may start moving to an uncreated part of the map on the condition that "boredom" is determined to have reached a predetermined value or more. Accordingly, when the autonomous behavior robot 1 receives a creation instruction for an uncreated part, it may indirectly prompt the activity of the robot 2 by raising the "boredom" parameter of the robot 2 to the predetermined value or more.
The autonomous behavior robot 1 may also provide the shooting instruction to the robot 2 so as not to affect the movement plan that the robot 2 has at the time the creation instruction is received. In other words, the autonomous behavior robot 1 may provide the robot 2 with an instruction to perform the movement for starting shooting only once the movement the robot 2 was making when the creation instruction was received has been partially or completely completed. For example, suppose the robot 2 is moving toward a movement target point S and receives the creation instruction during this movement. In this case, the robot 2 may start the action based on the creation instruction after moving to the movement target point S, or to a predetermined point (for example, a hallway) on the way from its current location to the movement target point S.
A "2D display" button 32 is also displayed on the user terminal 3. When the 2D display button 32 is pressed, the user terminal 3 transmits to the autonomous behavior robot 1 an instruction to change the 3D visualization data to 2D visualization data. The autonomous behavior robot 1 generates visualization data expressing the generated spatial data in 2D and provides it to the user terminal 3. The display when the 2D display button 32 is pressed is described with reference to FIG. 6.
The user can enlarge or reduce the visualization data by pinching in or out on the touch panel of the user terminal 3. The user can also move the viewpoint of the 3D display by sliding on the touch panel.
FIG. 6 is a display example when the 2D display button 32 in FIG. 5 is pressed.
In FIG. 6(A), the user terminal 3 displays visualization data obtained by converting the 3D visualization data displayed in FIG. 5 into 2D. A "3D display" button 33 is displayed on the user terminal 3. When the 3D display button 33 is pressed, the user terminal 3 transmits to the autonomous behavior robot 1 an instruction to change the 2D visualization data to 3D visualization data. The autonomous behavior robot 1 generates visualization data expressing the generated spatial data in 3D and provides it to the user terminal 3. The display when the 3D display button 33 is pressed is the display described with reference to FIG. 5.
In FIG. 6(A), a kitchen that was not shown in the 3D display is displayed in the living room. Because the 3D display is a view from a viewpoint position, parts hidden by walls or furniture cannot be shown. With the 2D display, however, visualization data looking down on the room from above can be displayed, making the overall shape of the room easier to grasp. The boundary line 34 indicates the extent of the generated spatial data; that is, the region above the boundary line 34 is a region for which the robot 2 has not acquired captured images and spatial data has not been generated. When the user designates the boundary line 34 displayed on the touch panel, or the region above it, and presses the reacquire button 31, the robot 2 moves to the designated region and captures images of the room. The autonomous behavior robot 1 generates spatial data for the ungenerated region based on the captured images and updates the spatial data. The spatial data can be updated, for example, by matching the original spatial data with the newly generated spatial data to produce continuous spatial data. The autonomous behavior robot 1 generates visualization data based on the updated spatial data and provides it to the user terminal 3. The house-shaped mark h in the figure indicates the home position to which the robot 2 returns for charging.
FIG. 6(B) shows that the visualization data updated by the autonomous behavior robot 1 is displayed on the user terminal 3. FIG. 6(B) shows a long, narrow hallway above the living room and a Western-style room to the right of the hallway. When the door of the Western-style room is closed, spatial data for that room is not generated; in that case, the fact that an ungenerated region still exists may be indicated with a boundary line, for example by displaying the position of the door. Note that there is a door in the hallway that could not be detected, so in reality there is a further region above the hallway.
FIGS. 7 and 8 show switching between the 2D display and a simplified 3D display. FIG. 7 shows that 2D visualization data generated for all the rooms of the home is displayed on the user terminal 3. When the user presses the 3D display button 33 in FIG. 7, the display switches to the simplified 3D display shown in FIG. 8. Here, the simplified 3D display is a display mode, such as a bird's-eye view, obtained by transforming the 2D display so that the rooms are seen obliquely from above. The simplified 3D display may also include, for example, a stereoscopic rendering in which predetermined height profiles are applied to furniture and the like. Displaying the visualization data in simplified 3D makes it easier to grasp the shape of all the rooms without increasing the data volume of the visualization data. When the 2D display button 32 shown in FIG. 8 is pressed, the display switches back to the 2D display shown in FIG. 7.
FIG. 9 is a display example of visualization data generated for a plurality of divided areas. The user terminal 3 displays area-specific visualization data 35. The area-specific visualization data 35 shows, in a scrollable list, areas divided into a living room, corridor, bedroom, children's room, and so on. For example, when the user selects and presses one area, the visualization data for that area is displayed. FIG. 5, for example, is the visualization data shown when the living room is selected in the area-specific visualization data 35. When the user presses "view other rooms" displayed at the bottom of FIG. 5 and similar screens, the area-specific visualization data 35 shown in FIG. 9 may be displayed so that another room can be selected.
FIG. 10 is a display example illustrating misrecognition of a spatial element depending on the shooting position of the imaging unit.
In FIG. 10(A), the user terminal 3 displays a visualized image 37A of a shelf, generated from a captured image taken by the robot 2 from shooting position 36A in the direction of the illustrated arrow. Because the back of the shelf cannot be photographed from shooting position 36A, the spatial data generation unit 13 does not recognize the shape of the shelf correctly; the misrecognized shelf shape extends all the way to the wall. The autonomous behavior robot 1 therefore determines that it cannot move behind the shelf. The user, on the other hand, can easily confirm that the autonomous behavior robot 1 has misrecognized the shape of the shelf simply by looking at the shape of the visualized image 37A.
FIG. 10(B) shows that the visualization data updated by the autonomous behavior robot 1 is displayed. In FIG. 10(B), the user terminal 3 displays a visualized image 37B of the shelf, generated from a captured image taken by the robot 2 from shooting position 36B in the direction of the illustrated arrow. Because the back of the shelf can be photographed from shooting position 36B, the spatial data generation unit 13 can correctly recognize the shape of the shelf by combining this image with the image captured from shooting position 36A. The correctly recognized shelf shape has a gap between the shelf and the wall, so the autonomous behavior robot 1 can determine that it is able to move behind the shelf. By looking at the shape of the visualized image 37B, the user can easily confirm that the autonomous behavior robot 1 now recognizes the shape of the shelf correctly. The user can also designate the shooting position 36B and the shooting direction from the user terminal 3 and instruct the robot 2 to take a picture. For example, this designation may be made by sliding a fingertip in a straight line across the screen of the user terminal 3: the start point of the fingertip's trajectory is taken as the shooting position, and the direction in which the fingertip is slid is taken as the shooting direction. In conjunction with this operation, an arrow icon is displayed on the screen, superimposed on the visualization data. Since the user knows the actual shape of the shelf, the user can specify a shooting position and shooting direction that allow the autonomous behavior robot 1 to correctly recognize the shelf shape it had misrecognized.
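The following is a minimal sketch of how such a swipe gesture could be turned into a shooting position and direction, assuming the terminal reports the touch start and end points in screen coordinates and that a screen-to-map transform is already available. The function names, the transform, and the robot command are hypothetical.

```python
import math

def swipe_to_shooting_pose(start_px, end_px, screen_to_map):
    """Convert a straight swipe on the visualization screen into a shooting pose.

    start_px, end_px: (x, y) touch coordinates in pixels.
    screen_to_map:    callable mapping pixel coordinates to map coordinates (meters).
    Returns (x, y, heading): the shooting position and direction in the map frame.
    """
    sx, sy = screen_to_map(start_px)   # start of the swipe gives the shooting position
    ex, ey = screen_to_map(end_px)     # end of the swipe defines the direction
    heading = math.atan2(ey - sy, ex - sx)
    return sx, sy, heading

# Example: a swipe from pixel (120, 300) to pixel (180, 260)
# pose = swipe_to_shooting_pose((120, 300), (180, 260), screen_to_map)
# robot.capture_from(pose)  # hypothetical command sent to the robot 2
```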
A program for realizing the functions of the apparatus described in this embodiment may be recorded on a computer-readable recording medium, and the various processes of this embodiment described above may be performed by loading the program recorded on the recording medium into a computer system and executing it. The "computer system" referred to here may include an OS and hardware such as peripheral devices. If a WWW system is used, the "computer system" also includes a web page providing environment (or display environment). The "computer-readable recording medium" refers to a writable nonvolatile memory such as a flexible disk, a magneto-optical disk, a ROM, or a flash memory, a portable medium such as a CD-ROM, or a storage device such as a hard disk built into a computer system.
Furthermore, the "computer-readable recording medium" also includes media that hold the program for a certain period of time, such as the volatile memory (for example, DRAM (Dynamic Random Access Memory)) inside a computer system serving as a server or client when the program is transmitted via a network such as the Internet or a communication line such as a telephone line. The program may also be transmitted from a computer system that stores it in a storage device or the like to another computer system via a transmission medium, or by transmission waves in a transmission medium. Here, the "transmission medium" that transmits the program refers to a medium having a function of transmitting information, such as a network (communication network) like the Internet or a communication line (communication channel) like a telephone line. The program may also be one that realizes only part of the functions described above. Further, it may be a so-called difference file (difference program), which realizes the functions described above in combination with a program already recorded in the computer system.
Although embodiments of the present invention have been described above with reference to the drawings, the specific configuration is not limited to these embodiments, and various modifications are included without departing from the spirit of the present invention.
As one such modification, for example, the visualization data providing unit 161 of FIG. 1 may provide the user terminal 3 with the images (captured images) from which the visualization data was generated, together with the visualization data. In this case, the user terminal 3 may display a captured image while displaying the visualization data, on condition that a user operation is detected. For example, when the user designates part of the floor plan displayed on the user terminal 3, an image captured at that location may be displayed. That is, for each spatial element, the images used to identify it are accumulated, and when the user designates a spatial element, the images associated with that spatial element are provided. This allows the user to judge the recognition state from the images when it cannot be judged from the visualization data alone.
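A minimal sketch of such an association between spatial elements and the images used to identify them might look as follows; the class name, method names, and the use of file paths are hypothetical and only illustrate the accumulate-and-look-up behavior described above.

```python
from collections import defaultdict

class ElementImageStore:
    """Accumulates captured images per spatial element and serves them on request."""

    def __init__(self):
        self._images = defaultdict(list)  # element_id -> list of image file paths

    def add(self, element_id: str, image_path: str) -> None:
        """Record an image that was used to identify the given spatial element."""
        self._images[element_id].append(image_path)

    def images_for(self, element_id: str) -> list:
        """Return the images associated with the designated spatial element."""
        return list(self._images[element_id])

# store = ElementImageStore()
# store.add("shelf_01", "captures/shelf_01_from_36A.jpg")
# When the user taps the shelf in the floor plan, the terminal would show
# store.images_for("shelf_01").
```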
As yet another modification, in the method of designating the portion to be recreated that was described with reference to FIG. 10, a range to be recreated may be designated instead. This designation may be made by sliding a fingertip over the screen of the user terminal 3 as if drawing a circle, moving the fingertip so as to enclose the range to be recreated. In conjunction with this operation, the trajectory of the fingertip is drawn on the screen, superimposed on the visualization data.
When the internal parameter set of the robot 2 is a parameter set that varies according to external information about the robot 2 detected by sensors or the like, the name of a space or the display mode of a space may be set according to the robot 2's internal parameter set in that space.
As a first example, when a parameter included in the internal parameter set satisfies a predetermined condition, a name corresponding to the type of that parameter may be given to the space. For example, if the internal parameters of the robot 2 form a parameter set expressing the robot 2's emotions, such as "fun" or "lonely", then when the condition that the parameter indicating fun is at or above a predetermined value is satisfied in a certain space, a name such as "fun room" may be given to that space.
The data providing apparatus 10 may include an emotion parameter control unit (not shown). The emotion parameter control unit may change the emotion parameters of the robot 2 according to sensor output values. For example, when the robot 2 detects a touch or a hug from the user with its built-in touch sensor, the emotion parameter control unit may increase the robot 2's "fun" emotion parameter. Conversely, when no user is detected in the images captured by the robot 2's camera for a predetermined period or longer, or when the touch sensor detects a contact that is strong and short in duration (for example, being struck), the emotion parameter control unit may decrease the "fun" emotion parameter. When the robot 2 is detected to be in room R1 and the "fun" emotion parameter value exceeds a predetermined threshold, the spatial data generation unit may set the name "fun room" for room R1.
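The following is a minimal sketch, in Python, of the emotion-parameter update and room-naming behavior described above; the event handlers, thresholds, increments, and the `set_name` interface are hypothetical values and names chosen only for illustration.

```python
class EmotionParameterControl:
    """Adjusts the 'fun' emotion parameter from sensor events and names rooms."""

    FUN_THRESHOLD = 0.8  # hypothetical threshold for naming a room "fun room"

    def __init__(self, spatial_data):
        self.fun = 0.5            # emotion parameter kept in [0, 1]
        self.spatial_data = spatial_data

    def on_touch(self, strength: float, duration: float) -> None:
        if strength > 0.7 and duration < 0.2:
            self.fun = max(0.0, self.fun - 0.2)   # a hard, short contact (being struck)
        else:
            self.fun = min(1.0, self.fun + 0.1)   # a gentle touch or hug

    def on_user_absent(self, seconds: float) -> None:
        if seconds > 600:                          # no user seen for a long time
            self.fun = max(0.0, self.fun - 0.1)

    def update_room_name(self, current_room: str) -> None:
        if self.fun > self.FUN_THRESHOLD:
            self.spatial_data.set_name(current_room, "fun room")
```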
In this case, names may be set according to some types of parameters, while no names are set for the remaining types. In particular, it is preferable that names with positive connotations (for example, "fun room") be given, while names with negative connotations (for example, "boring room") be avoided.
As a second example, when a parameter included in the internal parameter set satisfies a predetermined condition, a figure or color corresponding to the type of that parameter (for example, a circle or pink when the parameter indicating fun is at or above a predetermined value) may be set for the space.
The display of a space or the name of a space may also be set according to people or objects detected in the space in the past. The data providing apparatus 10 may include a person detection unit (not shown) that acquires images captured by the camera of the robot 2 and detects people appearing in them. The data providing apparatus 10 may also include an object detection unit (not shown) that acquires images captured by the camera of the robot 2 and detects objects appearing in them. For example, for a space in which a person named "Dad" is detected by the person detection unit at a predetermined frequency or more, attributes corresponding to that person, such as a photograph of Dad, a figure schematically representing Dad, or a name like "Dad's room", may be set for the space. Likewise, for a space in which a person named "Dad" and a person named "Mom" are both detected at a predetermined frequency or more, attributes such as the names of those frequently detected people, for example "Dad and Mom's room", may be set. The "predetermined frequency" here may be the number of times the target person is detected per unit time, or the length of time during which the target person is detected within a predetermined period (for example, one hour). Further, for a space in which a predetermined number of people or more are detected at a predetermined frequency or more, an attribute such as a name that does not depend on a specific person, for example "everyone's room", may be set.
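A minimal sketch of such frequency-based naming might look like the following; the frequency measure (detections per hour), the thresholds, and the naming rules are hypothetical and only illustrate the idea.

```python
def name_room(detections_per_hour, person_threshold=5.0, crowd_size=3):
    """Derive a room name from how often each person was detected there.

    detections_per_hour: mapping of person name -> average detections per hour.
    Returns a name string, or None if nobody is detected frequently enough.
    """
    frequent = [name for name, freq in detections_per_hour.items()
                if freq >= person_threshold]
    if not frequent:
        return None
    if len(frequent) >= crowd_size:
        return "everyone's room"          # many frequent visitors: generic name
    return " and ".join(sorted(frequent)) + "'s room"

# name_room({"Dad": 12.0, "Mom": 7.5, "kid": 1.0})  -> "Dad and Mom's room"
```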
The display of a space or the name of a space (for example, "everyone's fun room") may also be set according to both the internal parameters of the robot 2 and the detected people or objects.
The name given to a space and the way the space is displayed may also differ according to a parameter set indicating the behavioral characteristics of each individual robot 2 (for example, a parameter set indicating the personality or gender of the individual). The personality of each individual may be set by the user via the user terminal 3, or may be acquired by a personality learning unit (not shown) of the data providing apparatus 10 through learning based on the detection results of the sensors provided in the robot 2. For example, if the robot learns the experience of the touch sensor detecting that it has been held by a person judged unknown by the person detection process, the individual may be given a parameter indicating that it actively engages with others (an individual that is not shy of strangers). For such an individual, attributes such as a figure representing a person or that person's name may be given priority in the display or name of a room (for example, "everyone's room"). Conversely, if the robot learns the experience of the touch sensor detecting that it has been struck by an unknown person, the individual may be given a parameter indicating that it wants to avoid involvement with people it does not know (a shy individual). For a shy individual, for example, the internal parameters may be given priority in the display or name of a room (for example, "fun room").
As an individual characteristic of the robot, an individual personality may also be set in advance at the time of manufacture or shipment.
A template for the map, the spatial elements, and the like may also be changed according to the parameter set indicating the behavioral characteristics of each individual robot 2.
In addition, text, figures, or the like that prompt a predetermined user action, such as accompanying the robot, may be output in accordance with the map. Whether this display appears, and what it contains, may also vary according to the personality of the robot 2. For example, when it is detected that the user handles unidentified objects at a predetermined frequency or more, the robot 2 may be set as an individual with a parameter for a curious behavioral characteristic. For example, on the condition that, based on the captured images, the user is touching an object (spatial element) the robot has not yet recognized and the user's smile is detected, the emotion parameter control unit may set the robot's "curiosity" parameter so that it rises easily and falls with difficulty. Alternatively, a predetermined observation period may simply be set, and a "curious personality" may be assigned when it is measured, a predetermined number of times or for a predetermined length of time within that period, that the user is touching objects the robot has not identified. For a robot 2 set to such a curious personality, a message or figure may be output indicating, for example, "There seems to be an unexplored area over here; I'd like you to take me there."
When the frequency with which the user is detected handling unidentified objects is at or below a predetermined level, the robot 2 may instead be set as an individual with a parameter indicating a timid behavioral characteristic. For a robot 2 exhibiting timid behavioral characteristics, when the button instructing creation of an ungenerated map is pressed, a message or figure prompting a user action such as accompanying the robot, for example "I don't want to go unless you come with me", may be output.
The robot 2 may also acquire a map (spatial data) from another robot 2. The spatial data generation unit 13 may connect a map acquired from another robot 2 to the map it has acquired itself. In this case, the map acquired from the other robot may be displayed in a style different from that of the map acquired by the robot itself. With such a control method, a plurality of robots 2 can share knowledge with each other as each generates spatial data, so the spatial data for the whole space can be generated earlier than when a single robot 2 generates it alone.
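A minimal sketch of connecting another robot's map while keeping track of which robot observed each cell (so the visualization layer can draw those cells in a different style) might look like this; the grid encoding, the source tags, and the prior alignment step are assumptions for illustration.

```python
import numpy as np

UNKNOWN = -1  # cell not yet observed

def merge_other_robots_map(base, base_src, other, offset, other_id="robot_B"):
    """Connect another robot's occupancy grid and record the source of each cell.

    base / other: 2D grids with values UNKNOWN, 0 (free) or 1 (occupied).
    base_src:     array of the same shape as base with a source tag per cell ("" = none).
    offset:       (row, col) placement of the other grid, from a prior alignment step.
    Returns (merged, src); cells contributed by the other robot carry other_id.
    """
    r0, c0 = offset
    rows = max(base.shape[0], r0 + other.shape[0])
    cols = max(base.shape[1], c0 + other.shape[1])

    merged = np.full((rows, cols), UNKNOWN, dtype=int)
    merged[:base.shape[0], :base.shape[1]] = base
    src = np.full((rows, cols), "", dtype=object)
    src[:base_src.shape[0], :base_src.shape[1]] = base_src

    region = merged[r0:r0 + other.shape[0], c0:c0 + other.shape[1]]
    tag = src[r0:r0 + other.shape[0], c0:c0 + other.shape[1]]
    observed = other != UNKNOWN
    region[observed] = other[observed]   # geometry from the other robot
    tag[observed] = other_id             # provenance, used to pick the display style
    return merged, src
```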
As yet another modification, the spatial data generation unit 13 of FIG. 1 may set attribute information based on people's flow lines. The trajectory along which people move inside a building is called a flow line, and flow lines are determined by the floor plan and the arrangement of furniture. For example, in a living room with a sofa and a dining table, there are two flow lines: one from the entrance door to the sofa, and one from the door to the dining table. Such flow lines are areas people pass through frequently, and obstacles are removed from them so as not to hinder people's movement. In other words, a flow line is an area where the safety of walking should be implicitly ensured; on a flow line, a person can move safely even without paying attention.
If a robot shorter than a person's eye level is present on a flow line, that is, in an area where walking safety should be ensured, the person may fail to notice the robot and come into contact with it. People's lives change with the time of day and the day of the week: on a weekday morning, for example, people move busily along their flow lines, while in the daytime on a holiday they move along them leisurely, paying attention to their surroundings. In other words, the attention people pay to their surroundings while moving along a flow line changes, and so does the likelihood of contact with the robot. For example, if the robot is positioned on a flow line during commuting or school hours, the likelihood of contact with a person is higher than in the daytime on a holiday. During the daytime on weekdays, fewer people are at home, so the likelihood of contact is lower.
By estimating people's flow lines and acting so as to stay away from the estimated flow lines, the autonomously acting robot 2 can avoid contact with people. Once the robot grasps the flow lines, it can move while avoiding them. For example, the robot 2 can estimate a person's flow line from information obtained about the space, such as the floor plan and the size, type, and position of the furniture, and if the robot acts under the condition that it does not enter within a predetermined range of the estimated flow line, contact with people becomes easier to avoid.
The spatial data generation unit 13 may set attribute information indicating flow lines according to the number of people present or their movement trajectories, or the number of objects present or their movement trajectories, in each time period.
For example, when past data shows that a space overlaps a person's movement trajectory in a certain time period (hereinafter referred to as a "flow line region"), the instruction unit 113 may set, for that time period, attribute information that prohibits the robot 2 from entering the flow line region (for example, attribute information such as keeping a clearance of at least 30 cm from the flow line region). The robot 2 adds the flow line region to its behavioral constraints and moves by controlling the moving mechanism 29. The constraint in the flow line region may be that the robot 2 is prohibited from entering it, or that the robot 2 may enter it but must not stop there for more than a predetermined time. In a flow line region, on the premise that people's movement takes priority, the movement direction may also be controlled so that the robot 2 leaves the flow line region when it recognizes the presence of a person. In other words, a behavioral constraint is not necessarily always imposed in a flow line region; it may be imposed depending on the relative positional relationship with a person.
For example, if past data records the movement of people over the entire kitchen from 6:00 a.m. to 7:00 a.m. (that is, the kitchen overlaps people's movement trajectories a predetermined number of times or with at least a predetermined probability), attribute information prohibiting entry into the kitchen from 6:00 a.m. to 7:00 a.m. may be set. If the user notices that the robot 2 does not enter the kitchen during the hours when it is used for cooking, and conversely does enter when the kitchen is not busy, the robot 2 not only stays out of the user's way but also gives the impression of being clever enough to act with an awareness of the users' situation.
Instead of or in addition to the above, attribute information prohibiting a stay of a predetermined time or longer in a designated space may be set according to the number of people present or their movement trajectories, or the number of objects present or their movement trajectories, in each time period. For example, if past data records the movement of people over the entire kitchen from 6:00 a.m. to 7:00 a.m., attribute information prohibiting a stay of a predetermined time (for example, three minutes) or longer in the kitchen from 6:00 a.m. to 7:00 a.m. may be set. Alternatively, when the robot 2 detects a person in the kitchen or within a predetermined distance of it, the robot 2 may avoid entering the kitchen, or avoid staying there for the predetermined time or longer.
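A minimal sketch of how such time-dependent attribute information could be consulted before the robot moves might look like the following; the data structure, the clearance value, the simplified bounding-box test, and the helper names are hypothetical and only illustrate the check described above.

```python
from datetime import time

# Hypothetical attribute record: a flow line region active in a time window.
RESTRICTIONS = [
    {"region": [(2.0, 1.0), (2.0, 4.0), (3.0, 4.0), (3.0, 1.0)],  # kitchen polygon (m)
     "start": time(6, 0), "end": time(7, 0), "clearance_m": 0.3},
]

def point_near_polygon(p, polygon, clearance):
    """Rough check: is point p within `clearance` of the polygon's bounding box?"""
    xs = [v[0] for v in polygon]
    ys = [v[1] for v in polygon]
    return (min(xs) - clearance <= p[0] <= max(xs) + clearance and
            min(ys) - clearance <= p[1] <= max(ys) + clearance)

def entry_allowed(target, now):
    """Return False if the target position violates an active flow-line restriction."""
    for r in RESTRICTIONS:
        if r["start"] <= now <= r["end"] and point_near_polygon(
                target, r["region"], r["clearance_m"]):
            return False
    return True

# entry_allowed((2.5, 2.0), time(6, 30))  -> False: the kitchen is restricted at 6:30 a.m.
```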
Attribute information may also be set according to the visibility in the space and the number of people present or their movement trajectories, or the number of objects present or their movement trajectories. Instead of or in addition to this, attribute information prohibiting a stay of a predetermined time or longer may be set according to the visibility in the space and the number or movement trajectories of people or objects.
For example, in an environment where the brightness of the space, which indicates its visibility, is at or below a predetermined level, such as at night with the lights turned off, if past data shows a flow line region that overlaps people's movement trajectories, attribute information prohibiting the robot 2 from entering that flow line region (for example, attribute information such as keeping a clearance of at least 30 cm from the flow line region) may be set. With such a control method, the risk of a person and the robot 2 coming into contact in a poorly visible space such as darkness can be reduced.
The number of people present, their movement trajectories, and the number and movement trajectories of objects can be recognized based on past sensor output data, such as captured images or the output data of a temperature sensor.
As yet another modification, a flow line region may be set by being designated by the user via the user terminal 3 of FIG. 1. For example, the user can designate it by writing a time period and a flow line onto the visualized image shown in FIG. 6. More specifically, the user designates a flow line by sliding a fingertip over the screen of the user terminal 3 on which the visualized image is displayed; a flow line superimposed on the floor plan is drawn in conjunction with the movement of the fingertip. The flow line designated by the user is acquired by the designation acquisition unit 162 of FIG. 1.
As yet another modification, a spot may be set on the screen representing the visualization data (see FIGS. 7 and 8), for example based on data received from the user terminal 3, so that the robot 2 moves there when a predetermined condition is satisfied. For example, the user can designate a spot by tapping near the entrance hall on the screen displayed on the user terminal 3.
The predetermined condition may be, for example, that after it is detected from information directly or indirectly indicating the user's position, such as the position information of the user terminal 3, that the user has gone out, it is then detected that the user is moving toward the space where the robot 2 is located. The predetermined condition may also be that it is detected, from information directly or indirectly indicating the user's position, such as the position information of the user terminal 3, that the user is moving toward the predetermined spot. The visualization data providing unit 161 that has detected such a designation may display a mark corresponding to the spot.
Based on the tapped position, coordinate information indicating the spot in the spatial data is identified, and this coordinate information is stored as attribute information of the spot. In addition, the orientation of the robot 2 before or after arriving at the spot may also be set. In this case, the orientation of the robot 2 may be set automatically according to the spatial elements of the spatial data (for example, so as to face the center of the door), or may be designated by the user via the user terminal 3. If the mark output on the screen for the spot has a design that expresses the robot's orientation, the angle of the mark lets the user recognize the orientation of the robot. A direction mark such as an arrow or triangle may be displayed together with the mark to express the orientation of the robot 2 before and after arriving at the spot. For example, when the user touches a button instructing a clockwise or counterclockwise rotation, the mark may rotate, or the direction of the direction mark may change. The robot orientation visualized in this way is also stored as attribute information of the spot.
The autonomous behavior robot 1 may generate a spot movement event, for example, when it receives a wireless signal emitted from the user's user terminal 3 (for example, a smartphone carried by the user). When a spot movement event occurs, the autonomous behavior robot 1 identifies a route from the current position of the robot 2 to the spot. The robot 2 controls the moving mechanism 29 to follow the identified route, moves to the spot, and adjusts its posture to the set orientation. Note that spot settings may be refused at a stage where the map is not yet complete or its degree of completion is low.
For example, the autonomous behavior robot 1 may acquire the position of the user's user terminal 3 (for example, a smartphone) from GPS (Global Positioning System), and instruct the robot 2 to move to the spot when that position moves from outside a predetermined range (for example, outside the user's house) to inside the predetermined range (for example, the user's house). Alternatively, a spot movement event may be generated when the user's user terminal 3 (for example, a smartphone) returns a response signal to a beacon installed at the entrance of the home.
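A minimal sketch of such a geofence-triggered spot movement event is shown below; the geofence radius, the event handling, and the robot interface (`plan_route_to`, `follow`, `turn_to`) are hypothetical and stand in for whatever localization and path-planning machinery the robot actually uses.

```python
import math

class SpotMovementTrigger:
    """Fires a spot movement event when the user's terminal enters a geofence."""

    def __init__(self, home_latlon, spot, robot, radius_m=50.0):
        self.home = home_latlon      # (lat, lon) of the home
        self.spot = spot             # (x, y, heading) stored as spot attribute information
        self.robot = robot           # hypothetical robot interface
        self.radius_m = radius_m
        self.was_outside = True

    def on_terminal_position(self, latlon):
        inside = self._distance_m(latlon, self.home) <= self.radius_m
        if inside and self.was_outside:
            # Entering the geofence from outside: move to the spot, then face the set way.
            route = self.robot.plan_route_to(self.spot[:2])
            self.robot.follow(route)
            self.robot.turn_to(self.spot[2])
        self.was_outside = not inside

    @staticmethod
    def _distance_m(a, b):
        # Equirectangular approximation, adequate for house-scale geofencing.
        dlat = math.radians(b[0] - a[0])
        dlon = math.radians(b[1] - a[1]) * math.cos(math.radians((a[0] + b[0]) / 2))
        return 6371000.0 * math.hypot(dlat, dlon)
```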
As yet another modification, the movement of the robot 2 may be controlled by instructions given via the screen representing the visualization data (see FIGS. 7 and 8). The user designates a movement destination by tapping the place on the screen to which the robot 2 should move. When a movement destination is set, a movement destination mark is displayed on the visualization data screen, and the autonomous behavior robot 1 immediately identifies a route from the current position of the robot 2 to the destination. The robot 2 controls the moving mechanism 29 to follow the route and moves to the destination. As in the example described above, the orientation of the robot 2 may also be set. Note that destination instructions may be refused at a stage where the map is not yet complete or its degree of completion is low. If a map has been generated for only some of the rooms, destination instructions may be accepted only within those rooms.
As yet another modification, an entry prohibition range for the robot 2 may be set on the screen representing the visualization data (see FIGS. 7 and 8), on condition that an operation by the user enclosing an area the user does not want the robot 2 to enter is detected. The robot 2 then searches for a route that avoids the entry prohibition range and moves according to the searched route.
For example, suppose the user wishes to prohibit the robot 2 from entering the bedroom. In this case, on the visualization data screen shown in FIG. 7, the user performs an operation of enclosing the area corresponding to the bedroom with a finger. The spatial data generation unit 13 sets the space corresponding to the closed region drawn on the screen, as detected by the user terminal 3, as an "entry prohibition range" and generates the spatial data. With such a control method, the user can set the robot's entry prohibition range with an intuitive operation on the smartphone's visualization data screen.
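A minimal sketch of honoring such an entry prohibition range during route search might look as follows, assuming the enclosed region has already been converted into a polygon in map coordinates; the ray-casting test and the filtering of candidate grid cells are illustrative only and not tied to any particular planner.

```python
def point_in_polygon(p, polygon):
    """Ray-casting test: return True if point p lies inside the polygon."""
    x, y = p
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def passable_cells(free_cells, no_entry_polygons):
    """Filter the map's free cells, removing any inside an entry prohibition range."""
    return [c for c in free_cells
            if not any(point_in_polygon(c, poly) for poly in no_entry_polygons)]

# bedroom = [(0.0, 0.0), (3.0, 0.0), (3.0, 2.5), (0.0, 2.5)]  # polygon drawn by the user
# A route would then be planned only over passable_cells(free_cells, [bedroom]).
```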
In the point cloud data generation (S12) and spatial data generation (S13) processes shown in FIG. 3, SLAM (Simultaneous Localization and Mapping) technology, which estimates the robot's own position and builds an environment map at the same time, may be applied. Applications of SLAM technology such as self-propelled vacuum cleaners and drones are known. The autonomous behavior robot 1 differs from the SLAM of a self-propelled vacuum cleaner, which generates a flat map, in that it generates spatial data representing a three-dimensional space. It also differs from the SLAM of a drone, which is used outdoors and generates spatial data for unenclosed space, in that the autonomous behavior robot 1 generates spatial data representing a closed space bounded by walls and a ceiling. In a closed space, features of the walls and ceiling can also be exploited, so the robot's own position is easier to estimate than outdoors, where there are no walls or ceiling. Moreover, because the number of feature points contained in a closed space is limited, fewer matching verifications against the feature points in the currently captured image are needed than in an effectively unbounded outdoor space. This has the advantages of high estimation accuracy and a short time until the estimation result is obtained.

Claims (12)

  1.  An autonomous behavior robot comprising:
      a moving mechanism;
      an imaging unit that captures images of the surrounding space;
      a spatial data generation unit that generates spatial data recognizing the space based on a captured image captured by the imaging unit;
      a visualization data generation unit that generates, based on the generated spatial data, visualization data that visualizes spatial elements included in the space; and
      a visualization data providing unit that provides the generated visualization data.
  2.  The autonomous behavior robot according to claim 1, further comprising an instruction acquisition unit that acquires designation of a region included in the provided visualization data,
      wherein the spatial data generation unit regenerates spatial data recognizing the space in the region related to the acquired designation.
  3.  The autonomous behavior robot according to claim 1 or 2, wherein the visualization data generation unit generates, based on a spatial element recognized from a captured image captured by the imaging unit, the visualization data reflecting features of the recognized spatial element.
  4.  The autonomous behavior robot according to any one of claims 1 to 3, wherein the visualization data generation unit generates the visualization data in which a color attribute is given to the spatial element based on color information of a captured image captured by the imaging unit.
  5.  The autonomous behavior robot according to any one of claims 1 to 4, wherein the visualization data generation unit generates the visualization data in which fixed spatial elements and movable spatial elements are distinguished.
  6.  The autonomous behavior robot according to claim 5, wherein the visualization data generation unit generates the visualization data in which the fixed spatial elements and the movable spatial elements are distinguished based on changes in the spatial data over time.
  7.  The autonomous behavior robot according to any one of claims 1 to 6, wherein the visualization data generation unit generates the visualization data visualizing the spatial data included within a predetermined range from a position to which the robot has moved by means of the moving mechanism.
  8.  The autonomous behavior robot according to any one of claims 1 to 7, further comprising a point cloud data generation unit that generates three-dimensional point cloud data based on a captured image captured by the imaging unit,
      wherein the spatial data generation unit generates the spatial data based on the generated point cloud data.
  9.  The autonomous behavior robot according to claim 8, wherein the spatial data generation unit generates the spatial data by identifying contours of the spatial elements in the point cloud data, and
      the visualization data generation unit generates the visualization data visualizing the spatial elements based on the identified contours.
  10.  A data providing apparatus comprising:
      a visualization data generation unit that generates, based on spatial data obtained by a robot recognizing a space, visualization data that visualizes spatial elements included in the space; and
      a visualization data providing unit that provides the generated visualization data.
  11.  The data providing apparatus according to claim 10, further comprising:
      a designation acquisition unit that acquires designation of a region included in the provided visualization data; and
      an instruction unit that instructs the robot to recognize the space in the region related to the acquired designation.
  12.  A data providing program for causing a computer to realize:
      a visualization data generation function of generating, based on spatial data obtained by a robot recognizing a space, visualization data that visualizes spatial elements included in the space; and
      a visualization data providing function of providing the generated visualization data.
PCT/JP2019/011814 2018-03-27 2019-03-20 Autonomous action robot, data supply device, and data supply program WO2019188697A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-060937 2018-03-27
JP2018060937 2018-03-27

Publications (1)

Publication Number Publication Date
WO2019188697A1 true WO2019188697A1 (en) 2019-10-03

Family

ID=68059941

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/011814 WO2019188697A1 (en) 2018-03-27 2019-03-20 Autonomous action robot, data supply device, and data supply program

Country Status (1)

Country Link
WO (1) WO2019188697A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002288637A (en) * 2001-03-23 2002-10-04 Honda Motor Co Ltd Environmental information forming method
JP2006268345A (en) * 2005-03-23 2006-10-05 Toshiba Corp Image processing device and image processing method
JP2017041200A (en) * 2015-08-21 2017-02-23 シャープ株式会社 Autonomous mobile device, autonomous mobile system and circumstance map evaluation method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MAEYAMA, SHOICHI ET AL., PROCEEDINGS OF THE 2001 ANNUAL CONFERENCE OF THE ROBOTICS SOCIETY OF JAPAN, September 2001 (2001-09-01), pages 647-648 *


Legal Events

Date Code Title Description
121   Ep: the epo has been informed by wipo that ep was designated in this application
      Ref document number: 19775688
      Country of ref document: EP
      Kind code of ref document: A1
NENP  Non-entry into the national phase
      Ref country code: DE
122   Ep: pct application non-entry in european phase
      Ref document number: 19775688
      Country of ref document: EP
      Kind code of ref document: A1
NENP  Non-entry into the national phase
      Ref country code: JP