WO2021176863A1 - Object image display device, object image display system, and object image display program - Google Patents

Object image display device, object image display system, and object image display program Download PDF

Info

Publication number
WO2021176863A1
WO2021176863A1 (PCT/JP2021/001559)
Authority
WO
WIPO (PCT)
Prior art keywords
drawing information
information
object image
unit
image display
Prior art date
Application number
PCT/JP2021/001559
Other languages
French (fr)
Japanese (ja)
Inventor
泰成 井口
Original Assignee
Panasonic Intellectual Property Management Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic Intellectual Property Management Co., Ltd.
Publication of WO2021176863A1 publication Critical patent/WO2021176863A1/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J5/00Manipulators mounted on wheels or on carriages
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/10Geometric CAD
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/10Geometric CAD
    • G06F30/12Geometric CAD characterised by design entry means specially adapted for CAD, e.g. graphical user interfaces [GUI] specially adapted for CAD

Definitions

  • the present disclosure relates to an object image display device, an object image display system, and an object image display program that display an object layout in a predetermined space.
  • However, with only the viewpoint of a robot moving in the house and information about furniture and the like, a user purchasing new furniture or home appliances had to measure the distance to walls and the like by hand to confirm in advance whether the items would fit in the house. In other words, there is a demand for determining whether furniture can be installed in a house before purchase.
  • the present disclosure provides an object image display device, an object image display system, and an object image display program capable of displaying the layout of an object in space.
  • The object image display device of the present disclosure has a drawing information acquisition unit that acquires first drawing information, created based on measurement by a mobile robot autonomously moving in a predetermined space, and second drawing information regarding the interior of the space, created before the creation of the first drawing information. It also has an object separation unit that individually separates object information indicating objects arranged in the space based on the difference between the first drawing information and the second drawing information. Further, the object image display device has a display unit that displays a superimposed image in which an object image corresponding to the object information is superimposed on a spatial image corresponding to at least one of the first drawing information and the second drawing information.
  • The object image display system of the present disclosure includes a mobile robot that autonomously moves in a predetermined space, a terminal device that can communicate with the mobile robot, and a drawing information acquisition unit that acquires first drawing information, created based on measurement by the mobile robot, and second drawing information regarding the interior of the space, created before the creation of the first drawing information.
  • The object image display system further includes an object separation unit that individually separates object information indicating objects arranged in the space based on the difference between the first drawing information and the second drawing information, and a display unit that displays a superimposed image in which an object image corresponding to the object information is superimposed on a spatial image corresponding to at least one of the first drawing information and the second drawing information.
  • The object image display program of the present disclosure causes a computer to function as a drawing information acquisition unit that acquires first drawing information, created based on measurement by a mobile robot autonomously moving in a predetermined space, and second drawing information regarding the interior of the space, created before the creation of the first drawing information; an object separation unit that individually separates object information indicating objects arranged in the space based on the difference between the first drawing information and the second drawing information; and a display unit that displays a superimposed image in which an object image corresponding to the object information is superimposed on a spatial image corresponding to at least one of the first drawing information and the second drawing information.
  • According to the present disclosure, it is possible to provide an object image display device, an object image display system, and an object image display program that can output and display on a screen a layout showing the size and arrangement of objects installed in a space.
  • FIG. 1 is a diagram showing an object image display system according to an embodiment together with space.
  • FIG. 2 is a side view showing the appearance of the mobile robot according to the embodiment.
  • FIG. 3 is a bottom view showing the appearance of the mobile robot according to the embodiment.
  • FIG. 4 is a block diagram showing each functional unit of the object image display system according to the embodiment.
  • FIG. 5 is a diagram showing a display device on which a spatial image, an object image, and the like are displayed.
  • FIG. 6 is a diagram showing a display image in the object movement mode.
  • FIG. 7 is a diagram showing a display image in the distance measurement mode.
  • FIG. 8 is a diagram showing a display image in the object integration mode.
  • FIG. 9 is a diagram showing a display image in the addition mode.
  • FIG. 10 is a diagram showing an object creation screen.
  • FIG. 11 is a diagram showing a state in which a new object image is added to the superimposed image.
  • drawings are schematic views in which emphasis, omission, and ratio are adjusted as appropriate to show the present disclosure, and may differ from the actual shape, positional relationship, and ratio.
  • FIG. 1 is a plan view showing a space in which the mobile robot 130 included in the object image display system 100 according to the embodiment moves together with the mobile robot 130.
  • the space in which the mobile robot 130 moves is the residence 200. Then, the mobile robot 130 autonomously moves on the floor 201 of the residence 200.
  • The residence 200 is a space surrounded by the wall 202 and the like, and has a protruding portion 203, a door portion 204, and the like in parts.
  • Further, on the floor 201 on which the mobile robot 130 moves, objects such as furniture and home appliances (exemplified by the first object 211, second object 212, third object 213, fourth object 214, and fifth object 215) are placed.
  • the object image display system 100 of the present embodiment includes a mobile robot 130, a terminal device 160, and the like.
  • the object image display device 120 is realized by executing the object image display program on the terminal device 160 (see FIG. 4).
  • FIG. 2 is a side view showing the appearance of the mobile robot 130 according to the present embodiment.
  • FIG. 3 is a bottom view showing the appearance of the mobile robot 130 according to the present embodiment.
  • the mobile robot 130 of the present embodiment is a cleaning robot that performs cleaning while autonomously traveling on the floor 201. That is, the mobile robot 130 autonomously moves in the house 200 while sucking dust and the like existing on the floor 201 for cleaning.
  • Specifically, the mobile robot 130 of the present embodiment includes a body 131 on which various components are mounted, a drive unit 132, a cleaning unit 134, a suction unit 133, a control unit 135, a positional relationship detection unit 136, and the like.
  • the drive unit 132 moves the body 131.
  • the cleaning unit 134 collects dust and the like existing on the floor 201.
  • the suction unit 133 sucks dust into the body 131.
  • the control unit 135 controls the drive unit 132, the cleaning unit 134, the suction unit 133, and the like.
  • the body 131 constitutes a housing for accommodating the drive unit 132, the control unit 135, and the like.
  • The upper part of the body 131 is removable from the lower part of the body 131.
  • the body 131 includes a bumper 139 that is attached to the outer peripheral portion and is displaceable with respect to the body 131.
  • the body 131 has a suction port 138 formed at the lower portion for sucking dust into the body 131.
  • the drive unit 132 is a device for running the mobile robot 130 based on an instruction from the control unit 135.
  • the drive unit 132 includes a wheel 140 traveling on the cleaning surface of the floor 201, a traveling motor (not shown) for applying torque to the wheel 140, a housing 141 for accommodating the traveling motor, and the like.
  • The body 131 also has casters 142 provided on its bottom surface that function as auxiliary wheels. The mobile robot 130 controls the rotation of the two wheels 140 independently. As a result, the mobile robot 130 can travel freely: straight ahead, backward, turning left, turning right, and so on.
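As an illustration of why independently controlled wheel speeds give this freedom of motion, the following is a minimal differential-drive kinematics sketch in Python. It is not part of the patent; the track width and speed values are placeholders.

```python
# Minimal differential-drive kinematics sketch (illustrative, not from the
# patent): two independently chosen wheel speeds determine the body's
# linear and angular velocity.
def body_velocity(v_left: float, v_right: float, track_width: float):
    """Return (linear, angular) body velocity from wheel speeds in m/s."""
    v = (v_right + v_left) / 2.0               # forward speed of body center
    omega = (v_right - v_left) / track_width   # positive = left (CCW) turn
    return v, omega

# Equal speeds -> straight; equal negative -> backward; opposite -> spin.
print(body_velocity(0.2, 0.2, 0.23))    # (0.2, 0.0): straight line
print(body_velocity(-0.1, 0.1, 0.23))   # (0.0, ~0.87 rad/s): left turn in place
```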
  • the cleaning unit 134 constitutes a unit for sweeping dust on the floor 201 and sucking dust from the suction port 138.
  • the cleaning unit 134 includes a rotating brush arranged in the vicinity of the suction port 138, a brush drive motor for rotating the rotating brush, and the like.
  • the suction unit 133 constitutes a unit that sucks dust from the suction port 138 and holds the sucked dust inside the body 131.
  • the suction unit 133 includes an electric fan (not shown), a dust holding portion 143, and the like.
  • the electric fan sucks the air inside the dust holding portion 143 and discharges the air to the outside of the body 131. As a result, the electric fan sucks dust from the suction port 138 and collects the sucked dust into the dust holding portion 143.
  • the positional relationship detection unit 136 is a device that detects the positional relationship between the mobile robot 130, the wall 202, an object, and the like on the floor 201.
  • The positional relationship detection unit 136 detects the direction, distance, and the like of objects such as the wall 202 and furniture around the body 131, and acquires 2.5-dimensional information.
  • the mobile robot 130 of the present embodiment can also grasp its own position from the information of the direction and the distance detected by the positional relationship detection unit 136.
  • The type of the positional relationship detection unit 136 is not particularly limited; examples include a LiDAR (Light Detection and Ranging) sensor, which emits light and detects position and distance from the light reflected back by obstacles, and a ToF (Time of Flight) camera.
  • A compound-eye (stereo) camera that captures illumination light or natural light reflected by obstacles as images and obtains position and distance from parallax can also serve as the positional relationship detection unit 136.
  • the mobile robot 130 is provided with a LiDAR on the upper portion of the body 131 and a camera on the side portion as the positional relationship detection unit 136.
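As a rough illustration of how direction-and-distance readings become map data, the sketch below converts one hypothetical LiDAR sweep into 2D points in the map frame, given the robot's own pose. The function name and the numeric values are assumptions for illustration only.

```python
import math

# Hypothetical sketch: turn one LiDAR sweep of (beam angle, measured range)
# readings into 2D obstacle points in the map frame, given the robot's
# own pose (x, y, heading).
def scan_to_points(pose, scan):
    x, y, theta = pose  # robot pose in the map frame [m, m, rad]
    points = []
    for angle, distance in scan:  # beam angle [rad], range [m]
        a = theta + angle
        points.append((x + distance * math.cos(a),
                       y + distance * math.sin(a)))
    return points

pts = scan_to_points((1.0, 2.0, 0.0), [(0.0, 1.5), (math.pi / 2, 0.8)])
print(pts)  # obstacle points at approximately (2.5, 2.0) and (1.0, 2.8)
```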
  • the mobile robot 130 may be provided with another sensor in addition to the positional relationship detection unit 136.
  • a floor surface sensor that is arranged at a plurality of locations on the bottom surface of the body 131 and detects whether or not the floor 201 is present may be provided.
  • the drive unit 132 may be provided with an encoder for detecting the respective rotation angles of the pair of wheels 140 rotated by the traveling motor.
  • an acceleration sensor that detects the acceleration when the mobile robot 130 travels and an angular velocity sensor that detects the angular velocity when the mobile robot 130 turns may be provided.
  • a dust amount sensor that measures the amount of dust accumulated on the floor surface may be provided.
  • a contact sensor may be provided which detects the displacement of the bumper 139 and detects that an obstacle has collided.
  • In addition to the positional relationship detection unit 136, an obstacle sensor such as an ultrasonic sensor that detects obstacles in front of the body 131 may also be provided.
  • the terminal device 160 (see FIG. 1) is composed of a computer or the like that can realize various functions by executing a program.
  • the terminal device 160 is, for example, a device called a smartphone or a tablet terminal.
  • the terminal device 160 can exchange various information with the mobile robot 130 by communication.
  • the terminal device 160 includes a display device 161 (see FIG. 5) and an input device 162 (see FIG. 4).
  • the input device 162 is arranged (displayed) on the surface of the display device 161 and is composed of a touch sensor or the like that two-dimensionally detects contact with fingers or the like.
  • FIG. 4 is a block diagram showing each functional unit of the object image display system 100 according to the present embodiment.
  • the object image display system 100 of the present embodiment includes a drawing information acquisition unit 171, an object separation unit 173, a display unit 174, and the like as functional units of the object image display device 120.
  • the object image display system 100 includes a positional relationship acquisition unit 172 and a drawing information creation unit 175 as functional units of the mobile robot 130.
  • Further, the object image display system 100 includes, as functional units of the object image display device 120, an object image moving unit 177, a distance display unit 178, an object information integration unit 179, an object information acquisition unit 180, and an input unit 181.
  • the positional relationship acquisition unit 172 of the mobile robot 130 acquires the positional relationship between the mobile robot 130 and an object such as furniture or a wall on the floor 201 from the positional relationship detection unit 136.
  • In the present embodiment, the positional relationship acquisition unit 172 computes the positional relationship between the mobile robot 130's own position and objects from the measurement data of the LiDAR and/or the camera serving as the positional relationship detection unit 136.
  • the drawing information creation unit 175 of the mobile robot 130 creates drawing information based on the positional relationship between the mobile robot 130 acquired by the positional relationship acquisition unit 172 and an object or a wall.
  • The method of creating the drawing information is not particularly limited; in the present embodiment, the drawing information is created by SLAM (Simultaneous Localization and Mapping) techniques.
  • the created drawing information is stored in the storage unit 176 of the object image display device 120.
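The patent names SLAM but does not specify a map representation. A common choice, shown in the minimal sketch below under the assumption that robot poses are already estimated, is an occupancy grid; something of this kind could serve as the drawing information held in the storage unit 176. The resolution and map size are placeholder values.

```python
import numpy as np

# Minimal sketch (assumption: poses already estimated, e.g. by SLAM):
# accumulate detected obstacle points into an occupancy grid serving
# as "drawing information".
RES = 0.05  # grid resolution [m per cell], assumed value

def build_grid(points, size_m=10.0):
    n = int(size_m / RES)
    grid = np.zeros((n, n), dtype=np.uint8)  # 0 = free/unknown, 1 = occupied
    for x, y in points:                      # obstacle points in meters
        i, j = int(y / RES), int(x / RES)
        if 0 <= i < n and 0 <= j < n:
            grid[i, j] = 1
    return grid

grid = build_grid([(2.5, 2.0), (1.0, 2.8)])
print(grid.sum(), "occupied cells")  # 2 occupied cells
```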
  • the drawing information acquisition unit 171 of the object image display device 120 acquires drawing information which is information indicating the shape of a predetermined space or the like.
  • the drawing information acquisition unit 171 acquires at least two types of drawing information.
  • the first drawing information which is one of the drawing information, is drawing information created based on the measurement by the mobile robot 130 that autonomously moves in the house 200.
  • the second drawing information is drawing information regarding the residence 200 created before the creation of the first drawing information.
  • the location where the first drawing information is created is not particularly limited.
  • For example, a server may obtain the data detected by the positional relationship detection unit 136 of the mobile robot 130 via the network 250, and the first drawing information may be created by the server or the like.
  • the first drawing information is created by the drawing information creation unit 175 of the mobile robot 130 based on the data detected by the positional relationship detection unit 136 of the mobile robot 130. Then, the drawing information acquisition unit 171 of the object image display device 120 acquires the first drawing information from the mobile robot 130.
  • the location where the second drawing information is created is not particularly limited.
  • the second drawing information may be created by a server or the like based on the data created at the time of designing the residence 200.
  • the mobile robot 130 may be run on the floor 201 of the residence 200 in which furniture or the like is not installed, and the second drawing information may be created based on the data detected by the positional relationship detection unit 136.
  • the second drawing information is preferably composed of information on the wall of the house 200 in which an object such as furniture is not installed.
  • In the present embodiment, the drawing information acquisition unit 171 acquires a so-called floor plan as the second drawing information via the network 250.
  • The object separation unit 173 individually separates object information indicating, for example, each of the first object 211 to the fifth object 215 (see FIG. 1) arranged in the residence 200, based on the difference between the first drawing information and the second drawing information acquired by the drawing information acquisition unit 171.
  • the method of separating the object information individually is not particularly limited.
  • For example, in the case of the third object 213, the elliptical sofa shown in FIG. 1, the mobile robot 130 first travels around it. Information that can be recognized as an independent individual and that has no counterpart in the second drawing information is then separated from the first drawing information as one piece of object information.
  • In the case of the fourth object 214, the table shown in FIG. 1, the mobile robot 130 passes underneath it; based on the measurement data of its four legs, each leg may be recognized as a separate individual.
  • Further, an object may be placed in close contact with the wall 202 or another object, so that measurement data for its entire perimeter cannot be obtained from the mobile robot 130's movement path. In this case, the information corresponding to the second drawing information is first removed from the first drawing information, and corners and the like are extracted from the remaining information. The object separation unit 173 then extrapolates the extracted corners into a rectangle or the like to generate the object information, as in the sketch below.
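The separation step can be pictured as a map difference followed by connected-component analysis. The sketch below is one possible implementation assumed for illustration, not the patent's stated method: cells occupied in the first drawing information but free in the second become object candidates, and each connected component is extrapolated to its bounding rectangle.

```python
import numpy as np
from scipy import ndimage

# Assumed sketch of object separation: diff the furnished map (first
# drawing information) against the empty-room map (second drawing
# information), then treat each connected component as one object,
# extrapolated to its bounding rectangle (x0, y0, x1, y1).
def separate_objects(first_grid, second_grid):
    diff = (first_grid == 1) & (second_grid == 0)
    labels, count = ndimage.label(diff)   # label connected components
    objects = []
    for k in range(1, count + 1):
        ys, xs = np.nonzero(labels == k)
        objects.append((int(xs.min()), int(ys.min()),
                        int(xs.max()), int(ys.max())))
    return objects

first = np.zeros((8, 8), dtype=int)
first[2:4, 2:5] = 1                       # a "sofa" in the furnished map
second = np.zeros((8, 8), dtype=int)      # empty floor plan
print(separate_objects(first, second))    # [(2, 2, 4, 3)]
```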
  • Hereinafter, the display device 161 of the object image display device 120 will be described with reference to FIG. 5.
  • FIG. 5 is a diagram showing a display device 161 on which a spatial image, an object image, and the like are displayed.
  • That is, the display unit 174 of the object image display device 120 shown in FIG. 4 displays on the display device 161 a superimposed image 300 in which the object image 310 corresponding to the object information separated by the object separation unit 173 is superimposed on a spatial image 301 corresponding to at least one of the first drawing information and the second drawing information acquired by the drawing information acquisition unit 171.
  • the input unit 181 of the object image display device 120 processes the signal input from the input device 162 included in the terminal device 160.
  • The input device 162 includes a touch panel as described above. Specifically, the input unit 181 accepts user operations on the touch panel such as "tap", "drag", and "pinch out".
  • the object image moving unit 177 shown in FIG. 4 is a processing unit that moves the object image 310 in the superimposed image 300 displayed on the display device 161.
  • Specifically, in the present embodiment, as shown in FIG. 6, the user of the terminal device 160 first taps a button labeled "Move an object" on the input device 162.
  • As a result, the object image moving unit 177 enters the object movement mode.
  • Next, the user taps the object image 310 to be moved on the display device 161 and drags it.
  • The tapped object image 310 can thus be moved to a position in the superimposed image 300 where it does not overlap other object images 310 (one possible overlap rule is sketched below).
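A minimal sketch of such a non-overlap rule, assuming for illustration that object images are axis-aligned rectangles:

```python
# Hypothetical sketch of the rule applied when dragging an object image:
# the move is accepted only if the rectangle at the new position
# intersects no other object rectangle. Rectangles are (x0, y0, x1, y1).
def overlaps(a, b):
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def try_move(rects, index, dx, dy):
    x0, y0, x1, y1 = rects[index]
    moved = (x0 + dx, y0 + dy, x1 + dx, y1 + dy)
    if any(overlaps(moved, r) for i, r in enumerate(rects) if i != index):
        return False          # reject: would overlap another object image
    rects[index] = moved      # accept the drag
    return True

rects = [(0, 0, 2, 2), (5, 5, 7, 7)]
print(try_move(rects, 0, 4, 4))  # False: would collide with the second rect
print(try_move(rects, 0, 1, 0))  # True: moved to (1, 0, 3, 2)
```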
  • the distance display unit 178 shown in FIG. 4 displays the actual distance corresponding to two points in the superimposed image 300.
  • the user of the terminal device 160 first taps the button on which the characters "measure the distance” are described based on the input device 162. .. As a result, the distance display unit 178 enters the distance measurement mode. Next, the user taps two places in the superimposed image 300 displayed on the display device 161. As a result, the distance display unit 178 displays the actual distance corresponding to the tapped object image 310 on the display device 161 via the display unit 174. At this time, if the user performs a pinch-out operation, the superimposed image 300 can be enlarged and displayed.
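Converting two tapped screen points into an actual distance only requires the scale that ties the displayed image to the drawing information. The sketch below assumes a simple uniform meters-per-pixel scale; the parameter name and values are illustrative, not from the patent.

```python
import math

# Illustrative sketch: two tapped screen points -> real-world distance.
# 'meters_per_pixel' is an assumed scale linking the superimposed image
# to the drawing information; zooming changes pixels, not this result.
def real_distance(p1, p2, meters_per_pixel):
    dx = (p2[0] - p1[0]) * meters_per_pixel
    dy = (p2[1] - p1[1]) * meters_per_pixel
    return math.hypot(dx, dy)

# Two taps 300 px apart, at an assumed 0.5 cm of floor per pixel:
print(f"{real_distance((100, 200), (400, 200), 0.005):.2f} m")  # 1.50 m
```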
  • The object information integration unit 179 shown in FIG. 4 performs a process of integrating object information corresponding to a plurality of object images 310. Specifically, in the present embodiment, as shown in FIG. 8, the user of the terminal device 160 first taps a button labeled "Combine a plurality of objects" on the input device 162. As a result, the object information integration unit 179 enters the object integration mode. Next, using the input device 162, the user encloses a plurality of object images 310 in the superimposed image 300 displayed on the display device 161 with one figure such as a rectangle. The object information integration unit 179 then integrates the enclosed object images 310, which are thereafter treated as one object image 310: when moved, the enclosed images move together.
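The integration step can be pictured as grouping every object rectangle that lies inside the user's enclosing figure into one combined rectangle, as in the following sketch (an assumed implementation for illustration, not taken from the patent):

```python
# Assumed sketch of the integration rule: every object rectangle fully
# inside the user's enclosing figure is merged into one group that is
# subsequently moved as a single object image.
def inside(inner, outer):
    return (outer[0] <= inner[0] and outer[1] <= inner[1] and
            inner[2] <= outer[2] and inner[3] <= outer[3])

def integrate(rects, enclosure):
    group = [r for r in rects if inside(r, enclosure)]
    rest = [r for r in rects if not inside(r, enclosure)]
    if group:  # bounding box of the group becomes the combined object
        xs0, ys0, xs1, ys1 = zip(*group)
        rest.append((min(xs0), min(ys0), max(xs1), max(ys1)))
    return rest

rects = [(1, 1, 2, 2), (3, 1, 4, 2), (8, 8, 9, 9)]
print(integrate(rects, (0, 0, 5, 5)))  # [(8, 8, 9, 9), (1, 1, 4, 2)]
```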
  • the object information acquisition unit 180 shown in FIG. 4 acquires newly added object information.
  • the method of acquiring the object information is not particularly limited.
  • For example, object information for furniture, home appliances, and the like may be acquired from the Web via the network 250.
  • In the present embodiment, as shown in FIG. 9, the user of the terminal device 160 first taps a button labeled "Add a new object" on the input device 162.
  • As a result, the object information acquisition unit 180 enters the addition mode.
  • The display device 161 then transitions to the object creation screen shown in FIG. 10.
  • the user can create an object having a desired shape by using the object creation screen.
  • The object information acquisition unit 180 acquires the shape created by the user as object information. The display unit 174 then superimposes the object image 310 corresponding to the acquired object information on the superimposed image 300, like the highlighted object image 310 in FIG. 11.
  • the newly added object image 310 can also be moved like the previously displayed object image 310, and is subject to distance measurement.
  • In the object image display system 100, after objects are installed in the residence 200, a mobile robot 130 such as a robot vacuum cleaner is first run to obtain the first drawing information. In addition, second drawing information showing the living space before the objects were installed is acquired. As a result, the object image display system 100 can present the layout of movable objects together with the living space on the screen of the terminal device 160.
  • The object image 310 can be recognized in one-to-one correspondence with the actual object. Therefore, the user can check on the screen the distance between objects, the actual distance from an object to the wall 202, and the like.
  • The layout of the objects can be changed by moving the object images 310 as desired. This can contribute, for example, to planning future remodeling of the residence 200.
  • new object information can be added and displayed as an object image 310 in the superimposed image 300. Therefore, the user can confirm in advance whether or not new furniture or the like can be installed in the residence 200.
  • The layout of each object image 310, including a newly added one, can be easily changed. Therefore, the user can check in advance whether new furniture or the like fits in the residence 200 under a modified layout.
  • the present disclosure is not limited to the above embodiment.
  • Other embodiments realized by arbitrarily combining the components described in this specification, or by excluding some of them, are also embodiments of the present disclosure.
  • The present disclosure also includes modifications obtained by making various changes conceivable by those skilled in the art to the above embodiment, within the scope of the gist of the present disclosure, that is, the meaning indicated by the wording of the claims.
  • the object image display device 120 may be configured so that the object image 310 in the superimposed image 300 can be erased.
  • the configuration in which the object image display system 100 includes the terminal device 160 has been described as an example, but the present invention is not limited to this.
  • the object image display system 100 may not include the terminal device 160, and each functional unit of the object image display device 120 may be realized by the mobile robot 130.
  • the mobile robot 130 has been described as an example of a cleaning robot, but the present invention is not limited to this.
  • the mobile robot 130 may be configured to have other functions such as a pet robot, a monitoring robot, and a transfer robot.
  • the dwelling 200 has been described as an example as a space, but the present invention is not limited to this.
  • the space may be a relatively large space such as a hotel lobby, an airport, a mass retailer, or a factory.
  • A part or all of the processing units realized by executing the program may be provided in either the mobile robot 130 or the terminal device 160, or on a server connected via a network.
  • Further, the creation of the first drawing information is not limited to a single mobile robot 130.
  • A plurality of mobile robots 130 may be run, with each mobile robot creating a part of the first drawing information.
  • The terminal device 160 may also be configured to communicate with the plurality of mobile robots 130 and consolidate their information to create the first drawing information.
  • the object image display device, the object image display system, and the object image display program of the present disclosure can be used for layout display of objects arranged in space.
  • 100 Object image display system, 120 Object image display device, 130 Mobile robot, 131 Body, 132 Drive unit, 133 Suction unit, 134 Cleaning unit, 135 Control unit, 136 Positional relationship detection unit, 138 Suction port, 139 Bumper, 140 Wheel, 141 Housing, 142 Caster, 143 Dust holding unit, 160 Terminal device, 161 Display device, 162 Input device, 171 Drawing information acquisition unit, 172 Positional relationship acquisition unit, 173 Object separation unit, 174 Display unit, 175 Drawing information creation unit, 176 Storage unit, 177 Object image moving unit, 178 Distance display unit, 179 Object information integration unit, 180 Object information acquisition unit, 181 Input unit, 200 Residence, 201 Floor, 202 Wall, 203 Protruding part, 204 Door part, 211 First object, 212 Second object, 213 Third object, 214 Fourth object, 215 Fifth object, 250 Network, 300 Superimposed image, 301 Spatial image, 310 Object image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Mathematics (AREA)
  • Architecture (AREA)
  • Human Computer Interaction (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Processing Or Creating Images (AREA)

Abstract

This object image display device has a drawing information acquisition unit that acquires first drawing information created on the basis of measurement by a mobile robot that autonomously moves in a predetermined house, and second drawing information regarding the inside of the house created before the creation of the first drawing information. The object image display device also has an object separation unit that individually separates object information indicating objects disposed in the house on the basis of a difference between the first drawing information and the second drawing information. The object image display device further has a display unit that displays a superimposed image (300) in which an object image (310) corresponding to the object information is superimposed on a space image (301) corresponding to at least one of the first drawing information and the second drawing information. Thus, it is possible to display the layout of objects in a space.

Description

Object image display device, object image display system, and object image display program

The present disclosure relates to an object image display device, an object image display system, and an object image display program that display the layout of objects in a predetermined space.

Conventionally, a technique has been disclosed in which wireless tags are attached to home appliances, furniture, and the like placed in a house, and a robot moving in the house reads the wireless tag information, thereby displaying the arrangement of each object in association with its object information (see, for example, Patent Document 1).

However, with only the viewpoint of a robot moving in the house and information about furniture and the like, a user purchasing new furniture or home appliances had to measure the distance to walls and the like by hand to confirm in advance whether the items would fit in the house. In other words, there is a demand for determining whether furniture can be installed in a house before purchase.

Japanese Unexamined Patent Application Publication No. 2004-345053
The present disclosure provides an object image display device, an object image display system, and an object image display program capable of displaying the layout of objects in a space.

The object image display device of the present disclosure has a drawing information acquisition unit that acquires first drawing information, created based on measurement by a mobile robot autonomously moving in a predetermined space, and second drawing information regarding the interior of the space, created before the creation of the first drawing information. It also has an object separation unit that individually separates object information indicating objects arranged in the space based on the difference between the first drawing information and the second drawing information. Further, the object image display device has a display unit that displays a superimposed image in which an object image corresponding to the object information is superimposed on a spatial image corresponding to at least one of the first drawing information and the second drawing information.

The object image display system of the present disclosure includes a mobile robot that autonomously moves in a predetermined space, a terminal device that can communicate with the mobile robot, and a drawing information acquisition unit that acquires first drawing information, created based on measurement by the mobile robot, and second drawing information regarding the interior of the space, created before the creation of the first drawing information. The system further includes an object separation unit that individually separates object information indicating objects arranged in the space based on the difference between the first drawing information and the second drawing information, and a display unit that displays a superimposed image in which an object image corresponding to the object information is superimposed on a spatial image corresponding to at least one of the first drawing information and the second drawing information.

The object image display program of the present disclosure causes a computer to function as a drawing information acquisition unit that acquires first drawing information, created based on measurement by a mobile robot autonomously moving in a predetermined space, and second drawing information regarding the interior of the space, created before the creation of the first drawing information; an object separation unit that individually separates object information indicating objects arranged in the space based on the difference between the first drawing information and the second drawing information; and a display unit that displays a superimposed image in which an object image corresponding to the object information is superimposed on a spatial image corresponding to at least one of the first drawing information and the second drawing information.

According to the present disclosure, it is possible to provide an object image display device, an object image display system, and an object image display program that can output and display on a screen a layout showing the size and arrangement of objects installed in a space.
FIG. 1 is a diagram showing an object image display system according to an embodiment together with a space.
FIG. 2 is a side view showing the appearance of the mobile robot according to the embodiment.
FIG. 3 is a bottom view showing the appearance of the mobile robot according to the embodiment.
FIG. 4 is a block diagram showing each functional unit of the object image display system according to the embodiment.
FIG. 5 is a diagram showing a display device on which a spatial image, an object image, and the like are displayed.
FIG. 6 is a diagram showing a display image in the object movement mode.
FIG. 7 is a diagram showing a display image in the distance measurement mode.
FIG. 8 is a diagram showing a display image in the object integration mode.
FIG. 9 is a diagram showing a display image in the addition mode.
FIG. 10 is a diagram showing an object creation screen.
FIG. 11 is a diagram showing a state in which a new object image has been added to the superimposed image.
Hereinafter, embodiments of the object image display device, object image display system, and object image display program of the present disclosure will be described with reference to the drawings. The following embodiments merely show examples of the object image display device, object image display system, and object image display program of the present disclosure. Accordingly, the scope of the present disclosure is defined by the wording of the claims with reference to the following embodiments, and is not limited to the embodiments alone. Among the components in the following embodiments, components not described in the independent claims indicating the highest-level concept of the present disclosure are not necessarily required to achieve the object of the present disclosure, but are described as constituting more preferable forms.

The drawings are schematic views in which emphasis, omission, and adjustment of ratios have been made as appropriate to illustrate the present disclosure, and may differ from the actual shapes, positional relationships, and ratios.
(Embodiment)

First, the schematic configuration of the object image display system 100 according to the present embodiment will be described with reference to FIG. 1.

FIG. 1 is a plan view showing, together with the mobile robot 130, the space in which the mobile robot 130 included in the object image display system 100 according to the embodiment moves.
In the present embodiment, the space in which the mobile robot 130 moves is the residence 200. The mobile robot 130 autonomously moves on the floor 201 of the residence 200. The residence 200 is a space surrounded by the wall 202 and the like, and has a protruding portion 203, a door portion 204, and the like in parts. Further, on the floor 201 on which the mobile robot 130 moves, objects such as furniture and home appliances (exemplified by the first object 211, second object 212, third object 213, fourth object 214, and fifth object 215) are placed.
As shown in FIG. 1, the object image display system 100 of the present embodiment includes the mobile robot 130, the terminal device 160, and the like. In the present embodiment, the object image display device 120 is realized by executing the object image display program on the terminal device 160 (see FIG. 4).
Next, the mobile robot 130 according to the present embodiment will be described with reference to FIGS. 2 and 3.
FIG. 2 is a side view showing the appearance of the mobile robot 130 according to the present embodiment. FIG. 3 is a bottom view showing the appearance of the mobile robot 130 according to the present embodiment.

As shown in FIGS. 2 and 3, the mobile robot 130 of the present embodiment is a cleaning robot that performs cleaning while autonomously traveling on the floor 201. That is, the mobile robot 130 autonomously moves in the residence 200 while sucking up dust and the like on the floor 201.
Specifically, the mobile robot 130 of the present embodiment includes a body 131 on which various components are mounted, a drive unit 132, a cleaning unit 134, a suction unit 133, a control unit 135, a positional relationship detection unit 136, and the like. The drive unit 132 moves the body 131. The cleaning unit 134 collects dust and the like on the floor 201. The suction unit 133 sucks dust into the body 131. The control unit 135 controls the drive unit 132, the cleaning unit 134, the suction unit 133, and the like.

The body 131 constitutes a housing that accommodates the drive unit 132, the control unit 135, and the like. The upper part of the body 131 is removable from the lower part. The body 131 also includes a bumper 139 attached to its outer periphery and displaceable with respect to the body 131. Further, as shown in FIG. 3, the body 131 has a suction port 138 formed in its lower portion for sucking dust into the body 131.
The drive unit 132 is a device that makes the mobile robot 130 travel based on instructions from the control unit 135. The drive unit 132 includes wheels 140 that travel on the cleaning surface of the floor 201, traveling motors (not shown) that apply torque to the wheels 140, a housing 141 that accommodates the traveling motors, and the like.

The body 131 also has casters 142 provided on its bottom surface that function as auxiliary wheels. The mobile robot 130 controls the rotation of the two wheels 140 independently. As a result, the mobile robot 130 can travel freely: straight ahead, backward, turning left, turning right, and so on.
The cleaning unit 134 constitutes a unit for sweeping up dust on the floor 201 and feeding it to the suction port 138. The cleaning unit 134 includes a rotating brush arranged near the suction port 138, a brush drive motor that rotates the rotating brush, and the like.

The suction unit 133 constitutes a unit that sucks dust in through the suction port 138 and holds the sucked-in dust inside the body 131. The suction unit 133 includes an electric fan (not shown), a dust holding portion 143, and the like. The electric fan draws air from inside the dust holding portion 143 and discharges it outside the body 131. As a result, dust is sucked in through the suction port 138 and collected in the dust holding portion 143.
The positional relationship detection unit 136 is a device that detects the positional relationship between the mobile robot 130 and the wall 202, objects, and the like on the floor 201. The positional relationship detection unit 136 detects the direction, distance, and the like of objects such as the wall 202 and furniture around the body 131, and acquires 2.5-dimensional information. The mobile robot 130 of the present embodiment can also determine its own position from the direction and distance information detected by the positional relationship detection unit 136. The type of the positional relationship detection unit 136 is not particularly limited; examples include a LiDAR (Light Detection and Ranging) sensor, which emits light and detects position and distance from the light reflected back by obstacles, and a ToF (Time of Flight) camera. A compound-eye (stereo) camera that captures illumination light or natural light reflected by obstacles as images and obtains position and distance from parallax can also serve as the positional relationship detection unit 136. In the present embodiment, the mobile robot 130 is provided with a LiDAR on the upper portion of the body 131 and a camera on its side as the positional relationship detection unit 136.

The mobile robot 130 may be provided with other sensors in addition to the positional relationship detection unit 136. For example, floor surface sensors arranged at a plurality of locations on the bottom surface of the body 131 to detect whether the floor 201 is present may be provided. The drive unit 132 may be provided with encoders that detect the rotation angles of the pair of wheels 140 rotated by the traveling motors. An acceleration sensor that detects acceleration when the mobile robot 130 travels and an angular velocity sensor that detects angular velocity when it turns may be provided. A dust amount sensor that measures the amount of dust accumulated on the floor surface may be provided. Further, a contact sensor that detects displacement of the bumper 139 to detect collision with an obstacle may be provided. In addition to the positional relationship detection unit 136, an obstacle sensor such as an ultrasonic sensor that detects obstacles in front of the body 131 may also be provided.
The terminal device 160 (see FIG. 1) is composed of a computer or the like that can realize various functions by executing programs. In the present embodiment, the terminal device 160 is, for example, a device called a smartphone or tablet terminal. The terminal device 160 can thus exchange various information with the mobile robot 130 by communication. In the present embodiment, the terminal device 160 includes a display device 161 (see FIG. 5) and an input device 162 (see FIG. 4). The input device 162 is arranged (displayed) on the surface of the display device 161 and is composed of a touch sensor or the like that two-dimensionally detects contact by a finger or the like.
Next, each functional unit of the object image display system 100 according to the present embodiment will be described with reference to FIG. 4.

FIG. 4 is a block diagram showing each functional unit of the object image display system 100 according to the present embodiment.

As shown in FIG. 4, the object image display system 100 of the present embodiment includes a drawing information acquisition unit 171, an object separation unit 173, a display unit 174, and the like as functional units of the object image display device 120. In the present embodiment, the object image display system 100 includes a positional relationship acquisition unit 172 and a drawing information creation unit 175 as functional units of the mobile robot 130. Further, the object image display system 100 includes, as functional units of the object image display device 120, an object image moving unit 177, a distance display unit 178, an object information integration unit 179, an object information acquisition unit 180, and an input unit 181.
The positional relationship acquisition unit 172 of the mobile robot 130 acquires, from the positional relationship detection unit 136, the positional relationship between the mobile robot 130 and objects such as furniture or walls on the floor 201. In the present embodiment, the positional relationship acquisition unit 172 computes the positional relationship between the mobile robot 130's own position and objects from the measurement data of the LiDAR and/or the camera serving as the positional relationship detection unit 136.

The drawing information creation unit 175 of the mobile robot 130 creates drawing information based on the positional relationship, acquired by the positional relationship acquisition unit 172, between the mobile robot 130 and objects or walls. The method of creating the drawing information is not particularly limited; in the present embodiment, the drawing information is created by SLAM (Simultaneous Localization and Mapping) techniques. The created drawing information is held in the storage unit 176 of the object image display device 120.
The drawing information acquisition unit 171 of the object image display device 120 acquires drawing information, that is, information indicating the shape and the like of a predetermined space. The drawing information acquisition unit 171 acquires at least two types of drawing information. The first drawing information is drawing information created based on measurement by the mobile robot 130 autonomously moving in the residence 200. The second drawing information is drawing information regarding the residence 200 created before the creation of the first drawing information.

The location where the first drawing information is created is not particularly limited. For example, a server may obtain the data detected by the positional relationship detection unit 136 of the mobile robot 130 via the network 250, and the first drawing information may be created by the server or the like. In the present embodiment, the first drawing information is created by the drawing information creation unit 175 of the mobile robot 130 based on the data detected by the positional relationship detection unit 136. The drawing information acquisition unit 171 of the object image display device 120 then acquires the first drawing information from the mobile robot 130.

The location where the second drawing information is created is likewise not particularly limited. For example, the second drawing information may be created by a server or the like based on data produced when the residence 200 was designed. Alternatively, the mobile robot 130 may be run on the floor 201 of the residence 200 before furniture or the like is installed, and the second drawing information may be created based on the data detected by the positional relationship detection unit 136. The second drawing information is preferably composed of information on the walls of the residence 200 with no objects such as furniture installed. In the present embodiment, the drawing information acquisition unit 171 acquires a so-called floor plan as the second drawing information via the network 250.
The object separation unit 173 individually separates object information indicating, for example, each of the first object 211 to the fifth object 215 (see FIG. 1) arranged in the residence 200, based on the difference between the first drawing information and the second drawing information acquired by the drawing information acquisition unit 171. The method of individually separating the object information is not particularly limited. For example, in the case of the third object 213, the elliptical sofa shown in FIG. 1, the mobile robot 130 first travels around it. Information that can be recognized as an independent individual and that has no counterpart in the second drawing information is then separated from the first drawing information as one piece of object information.

In the case of the fourth object 214, the table shown in FIG. 1, the mobile robot 130 passes underneath it. Based on the measurement data of the four legs of the fourth object 214, each leg may be recognized as a separate individual.

Further, an object may be placed in close contact with the wall 202 or another object, so that measurement data for its entire perimeter cannot be obtained from the mobile robot 130's movement path. In this case, the information corresponding to the second drawing information is first removed from the first drawing information, and corners and the like are extracted from the remaining information. The object separation unit 173 then extrapolates the extracted corners into a rectangle or the like to generate the object information.
 Hereinafter, the display device 161 of the object image display device 120 will be described with reference to FIG. 5.
 FIG. 5 is a diagram showing the display device 161, on which a spatial image, object images, and the like are displayed.
 Specifically, the display unit 174 of the object image display device 120 shown in FIG. 4 displays, on the display device 161, a superimposed image 300 in which the object images 310 corresponding to the object information separated by the object separation unit 173 are superimposed on a spatial image 301 corresponding to at least one of the first drawing information and the second drawing information acquired by the drawing information acquisition unit 171.
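 A minimal sketch of this superimposition, assuming the object dictionaries from the separation sketch above and an RGB spatial image, might look as follows; the highlight color and the dictionary layout are illustrative assumptions.

```python
import numpy as np

def compose_superimposed_image(space_image: np.ndarray,
                               objects: list[dict]) -> np.ndarray:
    """Overlay each separated object mask on the spatial image to
    produce the superimposed image."""
    out = space_image.copy()
    for obj in objects:
        # Paint the cells of each object mask in a highlight color so
        # the object image stands out against the spatial image.
        out[obj["mask"]] = (255, 160, 0)
    return out
```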
 The input unit 181 of the object image display device 120 processes signals input from the input device 162 of the terminal device 160. In the present embodiment, the input device 162 includes a touch panel, as described above. Specifically, the input unit 181 therefore accepts user operations on the touch panel, such as "tap", "drag", and "pinch out".
 The object image moving unit 177 shown in FIG. 4 is a processing unit that moves an object image 310 within the superimposed image 300 displayed on the display device 161. Specifically, in the present embodiment, as shown in FIG. 6, the user of the terminal device 160 first taps a button labeled "move an object" on the input device 162. The object image moving unit 177 thereby enters the object moving mode. The user next taps the portion of the object image 310 to be moved, displayed on the display device 161, and then drags the tapped object image 310. The tapped object image 310 can thus be moved to a position within the superimposed image 300 where it does not overlap any other object image 310.
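 Under the same assumed grid representation, the overlap check for such a move could be sketched as follows; `try_move_object` is a hypothetical helper, and for simplicity the sketch assumes the move keeps the object inside the map (np.roll would otherwise wrap around the grid edges).

```python
import numpy as np

def try_move_object(objects: list[dict], index: int,
                    dy: int, dx: int) -> bool:
    """Translate one object's mask by (dy, dx) grid cells; reject the
    move if the destination overlaps any other object image."""
    moved = np.roll(objects[index]["mask"], shift=(dy, dx), axis=(0, 1))
    # Union of all other object masks.
    others = np.zeros_like(moved)
    for i, obj in enumerate(objects):
        if i != index:
            others |= obj["mask"]
    if (moved & others).any():
        return False  # destination overlaps another object image
    objects[index]["mask"] = moved
    return True
```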
 The distance display unit 178 shown in FIG. 4 displays the actual distance corresponding to two points in the superimposed image 300. Specifically, in the present embodiment, as shown in FIG. 7, the user of the terminal device 160 first taps a button labeled "measure the distance" on the input device 162. The distance display unit 178 thereby enters the distance measurement mode. The user next taps two points in the superimposed image 300 displayed on the display device 161. The distance display unit 178 then displays the actual distance corresponding to the two tapped points on the display device 161 via the display unit 174. If the user performs a pinch-out operation at this time, the superimposed image 300 can also be enlarged.
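 Converting two tapped points into an actual distance requires only the map resolution. A minimal sketch, assuming a known number of grid cells per meter (a value a real system would take from the robot's map metadata):

```python
import math

def real_distance(p1: tuple[int, int], p2: tuple[int, int],
                  cells_per_meter: float) -> float:
    """Return the actual distance in meters between two tapped grid
    points, given the map resolution."""
    (r1, c1), (r2, c2) = p1, p2
    return math.hypot(r2 - r1, c2 - c1) / cells_per_meter

# Example: two taps 40 cells apart on a 20-cells-per-meter map -> 2.0 m
print(real_distance((10, 10), (10, 50), cells_per_meter=20.0))
```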
 The object information integration unit 179 shown in FIG. 4 performs a process of integrating the object information corresponding to a plurality of object images 310. Specifically, in the present embodiment, as shown in FIG. 8, the user of the terminal device 160 first taps a button labeled "combine a plurality of objects" on the input device 162. The object information integration unit 179 thereby enters the object integration mode. The user next uses the input device 162 to enclose a plurality of object images 310 in the superimposed image 300 displayed on the display device 161 with a single figure such as a rectangle. The object information integration unit 179 thereby integrates the enclosed object images 310. As a result, the enclosed object images 310 are thereafter treated as one object image 310; that is, when moved, they move together as a single unit.
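 Assuming the same dictionary layout as the earlier sketches, integrating every object enclosed by the user's selection rectangle could look roughly like this:

```python
import numpy as np

def integrate_objects(objects: list[dict],
                      selection_rect: tuple[int, int, int, int]) -> list[dict]:
    """Merge every object whose mask lies entirely inside the selection
    rectangle (r0, c0, r1, c1) into a single object."""
    r0, c0, r1, c1 = selection_rect
    merged_mask = None
    kept = []
    for obj in objects:
        rows, cols = np.nonzero(obj["mask"])
        inside = (rows.min() >= r0 and rows.max() <= r1 and
                  cols.min() >= c0 and cols.max() <= c1)
        if inside:
            # Union the enclosed masks into one combined object mask.
            merged_mask = obj["mask"] if merged_mask is None \
                else (merged_mask | obj["mask"])
        else:
            kept.append(obj)
    if merged_mask is not None:
        kept.append({"mask": merged_mask})
    return kept
```

 Because the enclosed masks become a single entry, a later call such as `try_move_object` above moves them as one unit, matching the behavior described here.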
 The object information acquisition unit 180 shown in FIG. 4 acquires object information to be newly added. The method of acquiring the object information is not particularly limited. For example, object information on furniture, home appliances, and the like may be acquired from the Web via the network 250. Specifically, in the present embodiment, as shown in FIG. 9, the user of the terminal device 160 first taps a button labeled "add a new object" on the input device 162. The object information acquisition unit 180 thereby enters the addition mode, and the display device 161 transitions to an object creation screen as shown in FIG. 10. The user can then create an object of a desired shape on the object creation screen. When the user taps the "Next" button on the object creation screen, the object information acquisition unit 180 acquires the shape created by the user as object information. The display unit 174 then superimposes the object image 310 corresponding to the object information acquired by the object information acquisition unit 180 on the superimposed image 300, like the highlighted object image 310 in FIG. 11.
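 For illustration, a newly created rectangular shape, say a piece of furniture whose catalog dimensions have been converted to grid cells, could be registered in the same representation as follows; the rectangular shape and the `new` flag are assumptions of this sketch:

```python
import numpy as np

def make_rect_object(map_shape: tuple[int, int], top: int, left: int,
                     height_cells: int, width_cells: int) -> dict:
    """Create a rectangular object mask at the chosen position, e.g. a
    new piece of furniture sized in grid cells."""
    mask = np.zeros(map_shape, dtype=bool)
    mask[top:top + height_cells, left:left + width_cells] = True
    return {"mask": mask, "new": True}  # flagged for highlighted display
```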
 The newly added object image 310 can also be moved and used as a target of distance measurement, in the same manner as the previously displayed object images 310.
 That is, as described above, with the object image display system 100 according to the present embodiment, objects are first installed in the residence 200, and a mobile robot 130 such as a robot vacuum cleaner is then moved to obtain the first drawing information. In addition, the second drawing information, which shows the living space before the objects were installed, is acquired. The object image display system 100 can thereby present the layout of the movable objects together with the living space on the screen of the terminal device 160.
 Moreover, because each object image 310 corresponds to an actual object, the images can be recognized in one-to-one correspondence with the actual objects. The user can therefore confirm on the screen the distance between objects, the actual distance from an object to the wall 202, and the like.
 The object images 310 can also be moved freely to change the layout of the objects. This can contribute, for example, to a remodeling plan for the residence 200 that the user will carry out in the future.
 New object information can also be added and displayed as an object image 310 in the superimposed image 300. The user can therefore confirm in advance whether new furniture or the like can be installed in the residence 200.
 Furthermore, the position of each object image 310, including newly added ones, can be changed easily. The user can therefore confirm in advance whether new furniture or the like will fit in the residence 200 under a changed layout.
 Note that the present disclosure is not limited to the above embodiment. For example, another embodiment realized by arbitrarily combining the components described in the present specification, or by excluding some of them, may also be an embodiment of the present disclosure. The present disclosure also includes modifications obtained by applying various changes conceivable by those skilled in the art to the above embodiment, to the extent that they do not depart from the gist of the present disclosure, that is, the meaning indicated by the wording of the claims.
 For example, the object image display device 120 may be configured so that an object image 310 in the superimposed image 300 can be erased.
 In the above embodiment, a configuration in which the object image display system 100 includes the terminal device 160 was described as an example, but the system is not limited to this. For example, the object image display system 100 may omit the terminal device 160, and each functional unit of the object image display device 120 may be realized by the mobile robot 130.
 In the above embodiment, the mobile robot 130 was described as a cleaning robot, but it is not limited to this. For example, the mobile robot 130 may have other functions, such as those of a pet robot, a monitoring robot, or a transport robot.
 In the above embodiment, the residence 200 was described as an example of the space, but the space is not limited to this. For example, the space may be a comparatively large one, such as a hotel lobby, an airport, a mass retailer, or a factory.
 In the above embodiment, some or all of the processing units realized by executing the program may be provided in either the mobile robot 130 or the terminal device 160, or may be provided on a server connected via a network.
 In the above embodiment, the case where the first drawing information is created using one mobile robot 130 was described as an example, but the system is not limited to this. For example, a plurality of mobile robots 130 may be run, and each mobile robot may create part of the first drawing information. In this case, the terminal device 160 may be configured to communicate with the plurality of mobile robots 130 and aggregate the information to create the first drawing information.
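 Such aggregation could amount to a union of the robots' partial occupancy grids, as in the following sketch; it assumes the partial maps have already been aligned to a common coordinate frame, which in practice would itself require map merging or shared localization:

```python
import numpy as np

def merge_partial_maps(partial_maps: list[np.ndarray]) -> np.ndarray:
    """Combine partial occupancy grids from several mobile robots into
    one first-drawing-information map."""
    merged = np.zeros_like(partial_maps[0], dtype=bool)
    for partial in partial_maps:
        merged |= partial  # a cell is occupied if any robot saw it occupied
    return merged
```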
 The object image display device, object image display system, and object image display program of the present disclosure can be used for the layout display of objects arranged in a space.
 100 Object image display system
 120 Object image display device
 130 Mobile robot
 131 Body
 132 Drive unit
 133 Suction unit
 134 Cleaning unit
 135 Control unit
 136 Positional relationship detection unit
 138 Suction port
 139 Bumper
 140 Wheel
 141 Housing
 142 Caster
 143 Dust holding unit
 160 Terminal device
 161 Display device
 162 Input device
 171 Drawing information acquisition unit
 172 Positional relationship acquisition unit
 173 Object separation unit
 174 Display unit
 175 Drawing information creation unit
 176 Storage unit
 177 Object image moving unit
 178 Distance display unit
 179 Object information integration unit
 180 Object information acquisition unit
 181 Input unit
 200 Residence
 201 Floor
 202 Wall
 203 Protruding part
 204 Door part
 211 First object
 212 Second object
 213 Third object
 214 Fourth object
 215 Fifth object
 250 Network
 300 Superimposed image
 301 Spatial image
 310 Object image

Claims (7)

  1. An object image display device comprising:
    a drawing information acquisition unit that acquires first drawing information created based on measurement by a mobile robot that autonomously moves in a predetermined space, and second drawing information regarding the inside of the space created before the creation of the first drawing information;
    an object separation unit that individually separates, based on a difference between the first drawing information and the second drawing information, object information indicating objects arranged in the space; and
    a display unit that displays a superimposed image in which an object image corresponding to the object information is superimposed on a spatial image corresponding to at least one of the first drawing information and the second drawing information.
  2. The object image display device according to claim 1, further comprising an object image moving unit that moves the object image within the superimposed image.
  3. The object image display device according to claim 1 or 2, further comprising a distance display unit that displays an actual distance corresponding to two points in the superimposed image.
  4. The object image display device according to any one of claims 1 to 3, further comprising an object information integration unit that integrates the corresponding object information based on a plurality of the object images.
  5. The object image display device according to any one of claims 1 to 4, further comprising an object information acquisition unit that acquires the object information,
    wherein the display unit superimposes and displays an object image corresponding to the object information acquired by the object information acquisition unit on the superimposed image.
  6. An object image display system comprising:
    a mobile robot that autonomously moves in a predetermined space;
    a terminal device capable of communicating with the mobile robot;
    a drawing information acquisition unit that acquires first drawing information created based on measurement by the mobile robot, and second drawing information regarding the inside of the space created before the creation of the first drawing information;
    an object separation unit that individually separates, based on a difference between the first drawing information and the second drawing information, object information indicating objects arranged in the space; and
    a display unit that displays a superimposed image in which an object image corresponding to the object information is superimposed on a spatial image corresponding to at least one of the first drawing information and the second drawing information.
  7. An object image display program that causes a computer to function as:
    a drawing information acquisition unit that acquires first drawing information created based on measurement by a mobile robot that autonomously moves in a predetermined space, and second drawing information regarding the inside of the space created before the creation of the first drawing information;
    an object separation unit that individually separates, based on a difference between the first drawing information and the second drawing information, object information indicating objects arranged in the space; and
    a display unit that displays a superimposed image in which an object image corresponding to the object information is superimposed on a spatial image corresponding to at least one of the first drawing information and the second drawing information.
PCT/JP2021/001559 2020-03-06 2021-01-19 Object image display device, object image display system, and object image display program WO2021176863A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020039109A JP7281716B2 (en) 2020-03-06 2020-03-06 Object image display device, object image display system, and object image display program
JP2020-039109 2020-03-06

Publications (1)

Publication Number Publication Date
WO2021176863A1 true WO2021176863A1 (en) 2021-09-10

Family

ID=77614496

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/001559 WO2021176863A1 (en) 2020-03-06 2021-01-19 Object image display device, object image display system, and object image display program

Country Status (2)

Country Link
JP (1) JP7281716B2 (en)
WO (1) WO2021176863A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003337900A (en) * 2002-05-20 2003-11-28 Nippon Telegr & Teleph Corp <Ntt> METHOD AND SYSTEM FOR SELLING PRODUCT Web-LINKED WITH THREE-DIMENSIONAL FLOOR PLAN OF HOUSE
JP2004033340A (en) * 2002-07-01 2004-02-05 Hitachi Home & Life Solutions Inc Robot vacuum cleaner and robot vacuum cleaner control program
JP2009169845A (en) * 2008-01-18 2009-07-30 Toyota Motor Corp Autonomous mobile robot and map update method
JP2014071847A (en) * 2012-10-01 2014-04-21 Sharp Corp Self-propelled electronic apparatus, electronic apparatus control system, and electronic apparatus control method
JP2019528487A (en) * 2017-04-25 2019-10-10 北京小米移動軟件有限公司Beijing Xiaomi Mobile Software Co.,Ltd. Method and apparatus for drawing room layout diagram

Also Published As

Publication number Publication date
JP2021140583A (en) 2021-09-16
JP7281716B2 (en) 2023-05-26


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21764715; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21764715; Country of ref document: EP; Kind code of ref document: A1)