CN108290294B - Mobile robot and control method thereof - Google Patents
- Publication number
- CN108290294B CN108290294B CN201680069236.8A CN201680069236A CN108290294B CN 108290294 B CN108290294 B CN 108290294B CN 201680069236 A CN201680069236 A CN 201680069236A CN 108290294 B CN108290294 B CN 108290294B
- Authority
- CN
- China
- Prior art keywords
- mobile robot
- obstacle
- information
- map
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
- G05D1/0248—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means in combination with a laser
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
- B25J19/02—Sensing devices
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0268—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
- G05D1/0274—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/35—Categorising the entire scene, e.g. birthday party or wedding scene
- G06V20/36—Indoor scenes
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Multimedia (AREA)
- Aviation & Aerospace Engineering (AREA)
- Automation & Control Theory (AREA)
- Remote Sensing (AREA)
- Theoretical Computer Science (AREA)
- Electromagnetism (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Optics & Photonics (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
According to an aspect of the present disclosure, a mobile robot includes: a capturing unit configured to capture a three-dimensional (3D) image of the surroundings of the mobile robot and extract depth image information from the captured 3D image; an obstacle sensor configured to sense an obstacle using the 3D image captured by the capturing unit; a position estimator configured to estimate first position information of the mobile robot, using an inertial measurement unit (IMU) and ranging (odometry), within a region excluding the obstacle region sensed by the obstacle sensor; and a controller configured to calculate second position information of the mobile robot using the estimated first position information and the extracted depth image information, and to create a map that excludes the obstacle region sensed by the obstacle sensor.
Description
Technical Field
The present invention relates to a mobile robot and a control method thereof, and more particularly, to a technique for sensing an obstacle in front of the mobile robot using a three-dimensional (3D) space recognition sensor, and recognizing a position of the mobile robot and creating a map using information on the sensed obstacle.
Background
In the past, robots were generally developed for industrial use and served as components of factory automation. Recently, however, as the application fields of robots have expanded, robots for medical and aerospace purposes are being developed, and mobile robots for ordinary homes are being developed as well.
A mobile robot refers to a robot capable of operating according to a command from a user while autonomously moving through a desired area without manipulation by the user. Examples of mobile robots include cleaning robots, telepresence robots, safety robots, and the like.
Recently, application techniques using a mobile robot are being developed. For example, with the development of mobile robots having network functions, a function of issuing control commands to the mobile robot even when the mobile robot is away from a user, a function of monitoring environmental conditions, and the like are being developed. In addition, a technology of recognizing the position of the mobile robot and planning a moving path of the mobile robot by employing a camera or various types of sensors (such as an obstacle sensing sensor) in the mobile robot is being developed.
Therefore, for free movement of the mobile robot, it is necessary to accurately track the position of the mobile robot and create a map about an area through which the mobile robot is moving. This is because the moving path of the mobile robot can be planned when creating an accurate map, and work such as communication with a person can be performed.
Typically, inertial measurement units (IMUs) and ranging (odometry) are used to estimate the position of a mobile robot and create a map. Currently, simultaneous localization and mapping (SLAM) technology is mainly used to estimate the position of a mobile robot and create a map in real time.
Disclosure of Invention
[Technical Problem]
However, in the related art, implementing simultaneous localization and mapping (SLAM) additionally requires an image processing algorithm and a three-dimensional (3D) coordinate extraction algorithm, an additional obstacle sensor is required to sense obstacles, and the amount of calculation needed to manage obstacles, geographical features, and the like is large. In addition, a separate map must be created to plan the moving path of the mobile robot, and obstacle sensing, position estimation, and path planning are performed independently of one another, which increases and complicates the calculation.
[Technical Solution]
In order to solve the above problems, the present invention is directed to estimating the position of a mobile robot and creating a map more efficiently by sensing obstacles, estimating the position of the mobile robot, and creating the map using a three-dimensional (3D) space recognition sensor.
[Advantageous Effects]
The present invention has the advantage that obstacle sensing, position recognition, map creation, and path planning for the mobile robot can be performed simultaneously: no additional map is required to create the map, and obstacles can be identified without an additional obstacle sensor. Because the area recognized as an obstacle is excluded when the position of the mobile robot is recognized and the map is created, both are performed more quickly. Further, compared with an existing two-dimensional (2D) system, the accuracy of the map may be improved by using three-dimensional (3D) images, and a map-based robot service may therefore be provided using the 3D images.
Drawings
Fig. 1 is a diagram illustrating an overall structure of a mobile robot system according to an embodiment of the present invention.
Fig. 2 is a diagram schematically illustrating an appearance of a mobile robot according to an embodiment of the present invention.
Fig. 3 is a diagram schematically illustrating a structure of a robot capable of positioning and mapping according to the related art.
Fig. 4 is a control block diagram of a mobile robot capable of positioning and mapping according to an embodiment of the present invention.
Fig. 5 is a diagram illustrating a general structure of a general 3D space recognition sensor (3D laser range detection sensor) corresponding to a component of the capturing unit.
Fig. 6 is a diagram illustrating a general structure of a 3D depth sensor corresponding to components of a capturing unit.
Fig. 7 is a flowchart of a method of estimating the current position of a mobile robot and creating a map according to an embodiment of the present invention.
Fig. 8 is a diagram illustrating a mobile robot and an obstacle located in an area that the mobile robot can sense.
Fig. 9 is a flowchart of an algorithm for a method for sensing an obstacle in the space of fig. 8, which is performed by a mobile robot.
Fig. 10 illustrates a method of determining a reference plane by a geometric method.
Fig. 11 is a graph illustrating data actually measured.
Fig. 12 is a diagram illustrating an approximation result obtained by the least square method.
Fig. 13 is a diagram illustrating an approximation result obtained by the RANSAC method.
Fig. 14 is a diagram illustrating a plane determined by the RANSAC method.
Fig. 15 is a diagram illustrating a method of sensing a moving obstacle according to another embodiment.
Fig. 16 is a diagram illustrating a case where the mobile robot cannot obtain a 3D image.
Fig. 17 is a diagram illustrating calculation of second position information of a mobile robot using a particle method.
Fig. 18 is a diagram illustrating calculation of second position information of a mobile robot based on first position information of the mobile robot and using a particle filter method.
Fig. 19 is a flowchart of a method of correcting a map according to an embodiment of the present invention.
Fig. 20 is a diagram illustrating loop closure, in which an actual movement path of the mobile robot and an estimated movement path of the mobile robot are illustrated.
Fig. 21 is a diagram illustrating a final map obtained according to the present invention.
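Figs. 11 through 14 contrast least-squares fitting with the RANSAC method for determining a reference plane. As a rough illustration only (the threshold, iteration count, and plane parameterization below are assumptions, not the patent's values), a RANSAC plane fit can be sketched as:

```python
import random

# Rough RANSAC plane-fit sketch (not the patent's implementation): fit a
# candidate plane z = a*x + b*y + c through three randomly sampled points
# and keep the candidate with the most inliers.
def ransac_plane(points, iters=200, threshold=0.05, seed=0):
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = rng.sample(points, 3)
        # Solve z = a*x + b*y + c exactly through the three sampled points.
        det = (x1 - x3) * (y2 - y3) - (x2 - x3) * (y1 - y3)
        if abs(det) < 1e-9:
            continue  # degenerate (collinear) sample
        a = ((z1 - z3) * (y2 - y3) - (z2 - z3) * (y1 - y3)) / det
        b = ((x1 - x3) * (z2 - z3) - (x2 - x3) * (z1 - z3)) / det
        c = z3 - a * x3 - b * y3
        inliers = sum(1 for (x, y, z) in points
                      if abs(a * x + b * y + c - z) <= threshold)
        if inliers > best_inliers:
            best, best_inliers = (a, b, c), inliers
    return best

# Mostly flat floor points (z = 0) plus a few tall outliers: the fitted
# plane should be the floor, not a least-squares compromise pulled upward.
floor = [(i * 0.1, j * 0.1, 0.0) for i in range(5) for j in range(5)]
outliers = [(0.2, 0.2, 1.0), (0.4, 0.1, 0.8), (0.1, 0.3, 1.2)]
plane = ransac_plane(floor + outliers)
```

Unlike least squares, which averages the outliers into the fit, RANSAC discards them, which is why Figs. 13 and 14 show a plane that hugs the floor points.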
Detailed Description
The embodiments set forth herein and the structures illustrated in the drawings are merely examples of the present invention, and various modifications that could replace the embodiments and drawings of the present specification were possible at the time this application was filed.
The same reference numerals or symbols used in the drawings of the present specification denote parts or components that substantially achieve the same functions.
The terminology used herein is for the purpose of describing embodiments only and is not intended to limit the scope of the present invention disclosed herein. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will be understood that the terms "comprises," "comprising," "includes," "including," "has," "having," and the like, when used herein, specify the presence of stated features, integers, steps, operations, elements, components, and/or groups thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It will be further understood that, although the terms first, second, third, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element without departing from the scope of the present invention. Similarly, a second element may be termed a first element. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Fig. 1 is a diagram illustrating an overall structure of a mobile robot system according to an embodiment of the present invention.
In fig. 1, a mobile robot system 1 according to an embodiment of the present invention may include a mobile robot 100 configured to operate while autonomously moving within an area, a device 200 separate from the mobile robot 100 and configured to remotely control the mobile robot 100, and a charging station 300 separate from the mobile robot 100 and configured to charge a battery of the mobile robot 100.
The mobile robot 100 is a device configured to receive a control command for controlling the device 200 from a user and perform an operation corresponding to the control command, contains a rechargeable battery and an obstacle sensor for avoiding an obstacle during movement, and thus can operate while autonomously moving within a work area.
The mobile robot 100 may also include a camera or various types of sensors to recognize its surroundings. Therefore, the mobile robot 100 can recognize its position even when information on the surroundings of the mobile robot 100 is not obtained in advance, and perform positioning and map construction based on the information on the surroundings to create a map.
The device 200 is a remote control device configured to wirelessly transmit a control command to control the movement of the mobile robot 100 or perform an operation of the mobile robot 100. Accordingly, the device 200 may be a cellular phone or a Personal Communication Service (PCS) phone, a smart phone, a Personal Digital Assistant (PDA), a Portable Multimedia Player (PMP), a laptop computer, a digital broadcasting terminal, a netbook, a tablet PC, a navigation system, etc., but is not limited thereto, and may be any device capable of wirelessly transmitting a control command to perform an operation of the mobile robot 100.
In addition, the device 200 may include various types of devices capable of implementing various functions using various application programs, such as a digital still camera or a video camera having a wired/wireless communication function.
Alternatively, the apparatus 200 may be a general remote controller having a simple structure. Generally, the remote controller transmits a signal to the mobile robot 100 or receives a signal from the mobile robot 100 through infrared data association (IrDA).
Further, the device 200 may transmit and receive a wireless communication signal to and from the mobile robot 100 according to various methods such as Radio Frequency (RF), wireless fidelity (Wi-Fi), bluetooth, ZigBee, Near Field Communication (NFC), and Ultra Wideband (UWB) communication. Any method by which wireless communication signals can be exchanged between the apparatus 200 and the mobile robot 100 may be used.
The apparatus 200 may include a power button for turning on or off the mobile robot 100, a charge return button for instructing the mobile robot 100 to return to the charging station 300 in order to charge the battery of the mobile robot 100, a mode button for changing a control mode of the mobile robot 100, a start/stop button for starting, canceling, and confirming a control command to start or stop the operation of the mobile robot 100, a dial, and the like.
The charging station 300 is configured to charge a battery of the mobile robot 100, and may include a guide member (not shown) to guide the docking of the mobile robot 100. The guide member may include a connection terminal (not shown) to charge the power unit 130 shown in fig. 2 included in the mobile robot 100.
Fig. 2 is a diagram schematically showing an appearance of a mobile robot according to an embodiment of the present invention.
Referring to fig. 2, the mobile robot 100 may include a main body 110 forming an appearance of the mobile robot 100, a cover 120 configured to cover an upper portion of the main body 110, a power unit 130 configured to supply driving power for driving the main body 110, and a driver 140 configured to move the main body 110.
The main body 110 forms an external appearance of the mobile robot 100 and supports various components mounted in the mobile robot 100.
The power unit 130 may include a battery electrically connected to the driver 140 and various types of loads for driving the main body 110 to supply driving power thereto. The battery may be a rechargeable secondary battery, and is supplied with power and charged from the charging station 300 when the main body 110 completes the operation and is then coupled to the charging station 300.
The power unit 130 checks the remaining power of the battery, and is supplied with power and charged from the charging station 300 when it is determined that the remaining power of the battery is insufficient.
A caster wheel whose rotation angle changes according to the state of the floor surface on which the mobile robot 100 is moving may be installed at the front of the main body 110 of the mobile robot 100. The caster wheel supports the mobile robot 100, stabilizing its posture and preventing it from falling over during movement. The caster wheel may be in the form of a roller or a wheel.
The drivers 140 may be provided at opposite sides of the central part of the body 110 to allow movement, e.g., forward movement, backward movement, rotational movement, etc., of the body 110 when performing work.
The two drivers 140 rotate forward or backward so that the mobile robot 100 moves forward or backward or rotates, either on a specific command from the user or while moving autonomously. For example, when both drivers 140 rotate in the forward or backward direction, the mobile robot 100 moves forward or backward. When the left driver 140 rotates backward while the right driver 140 rotates forward, the mobile robot 100 turns forward and to the left; conversely, when the right driver 140 rotates backward while the left driver 140 rotates forward, the mobile robot 100 turns forward and to the right.
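The two-wheel behaviour described above is standard differential-drive kinematics. A minimal sketch (the wheel separation value and the function name are illustrative assumptions, not taken from the patent):

```python
# Minimal differential-drive kinematics: equal wheel speeds move the body
# straight; opposite-signed speeds rotate it in place.
def body_velocity(v_left, v_right, wheel_separation=0.3):
    """Return (linear speed, angular speed) of the robot body in SI units."""
    v = (v_left + v_right) / 2.0                   # forward speed (m/s)
    omega = (v_right - v_left) / wheel_separation  # turn rate (rad/s, CCW positive)
    return v, omega

straight = body_velocity(0.2, 0.2)   # equal speeds: moves straight ahead
turning = body_velocity(-0.1, 0.1)   # opposite speeds: rotates in place (left)
```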
The general structure and operation principle of the mobile robot 100 have been described above. The localization and mapping performed by the mobile robot 100 according to the related art will be briefly described below, and the features of the present invention that solve the related art problems will be described in detail below.
Fig. 3 is a diagram schematically illustrating a structure of a robot capable of positioning and mapping according to the related art.
Currently, the position of the mobile robot 100 is generally identified and a map is created by the SLAM method. SLAM stands for simultaneous localization and mapping, and is a technique for creating a map in real time while estimating the position of a mobile robot.
Referring to fig. 3, in the case of a mobile robot according to the related art, information on the position of the mobile robot and information on an obstacle are obtained using a vision sensor and a dead reckoning sensor, and SLAM is performed using the information.
However, in the related art, an image processing algorithm and a three-dimensional (3D) coordinate extraction algorithm are additionally required to perform SLAM, an obstacle sensor is additionally required to sense an obstacle, and the amount of calculation for managing an obstacle, geographical features, and the like is large. In addition, an additional map should be constructed to plan the moving path of the mobile robot.
In other words, sensing an obstacle, estimating the position of the mobile robot, and planning the movement path of the mobile robot are performed independently, and thus the calculation is large and complicated.
Accordingly, the present invention has been proposed to solve this problem, and is characterized in that these processes are not independently performed but simultaneously performed, thereby conveniently performing SLAM.
In other words, the present invention employs a 3D sensor instead of a vision sensor, and thus an obstacle can be directly sensed, and SLAM can be performed without an additional obstacle sensor. The identified obstacle area is not considered when estimating the position of the mobile robot and creating a map, thereby performing SLAM more efficiently.
Furthermore, since SLAM can be performed using a 3D depth sensor, the present invention proposes a system that uses 3D navigation and does not additionally require a separate map for path planning. Thus, the accuracy of the map may be higher than that of existing 2D systems.
Fig. 4 is a control block diagram of the mobile robot 100 capable of positioning and mapping according to an embodiment of the present invention.
Referring to fig. 4, the mobile robot 100 according to the present invention may include a capturing unit 200, an obstacle sensor 300, a position estimator 400, and a controller 500. The capturing unit 200 may include a 3D space recognition sensor 210 and a 3D depth sensor 220. The position estimator 400 may include an inertial sensor unit (IMU) 410 and a ranging unit 420. The controller 500 may include a location corrector 510, a map creator 520, a path planner 530, and a storage unit 540.
The capturing unit 200 is installed on the front surface of the mobile robot 100 to capture an image of the surroundings of the mobile robot 100.
Accordingly, a camera and a sensor for capturing an image of the surroundings of the mobile robot 100 may be provided. An omnidirectional image of a space through which the mobile robot 100 moves is captured in real time and provided to the obstacle sensor 300 and the position estimator 400.
The 3D space recognition sensor 210 may be a KINECT (RGB-D sensor), a time of flight (TOF) sensor (structured light sensor), a stereo camera, etc., but is not limited thereto, and may be any device having substantially the same function as the 3D space recognition sensor 210.
Fig. 5 is a diagram illustrating a general structure of a general 3D space recognition sensor (3D laser range detection sensor) corresponding to components of the capturing unit 200.
The 3D space recognition sensor 210 is a sensor that senses the signal obtained when light emitted from a light source hits an object and returns, and performs a series of numerical calculations to determine the distance. In general, the 3D space recognition sensor 210 can measure distance three-dimensionally through the rotation and the vertical and pitch motion of a reflector installed in the outgoing and incoming light paths.
Referring to fig. 5, the 3D space recognition sensor 210 may include a laser range finder (LRF) structure 212 having a light source, a sensor, and the like; a reflector 211 configured to reflect outgoing and incoming light; a rotating member (not shown) configured to rotate the reflector 211; a vertical moving member (not shown) configured to control the inclination of the reflector 211; and a body 214 configured to perform scanning by reflecting light through the reflector 211 to emit the light used for measuring distance and to receive the light returning from an object through the reflector 211. The 3D space recognition sensor 210 further includes a member for rotating the reflector 211 and an additional driver 213 configured to control the tilt of the reflector 211.
The plurality of 3D space recognition sensors 210 as described above may be installed on the outer surface of the mobile robot 100. A plurality of light sources, a plurality of sensors, and a plurality of reflectors 211 may be installed in each of the 3D space recognition sensors 210.
Fig. 6 is a diagram illustrating a general structure of a 3D depth sensor corresponding to components of the capturing unit 200.
The 3D depth sensor 220 is a sensor that detects a depth image using infrared light. Infrared light emitted from a light emitting unit toward an object is reflected from the object and returns to a light receiving unit. The distance between the sensor and the object, i.e., depth information (a depth image), can then be obtained from the time difference between emission of the infrared light and reception of its reflection. This calculation is commonly referred to as the time-of-flight (TOF) method.
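The TOF calculation above can be sketched directly: distance is the speed of light times the emit-to-receive time difference, divided by two because the light travels to the object and back (the 20 ns example value is purely illustrative):

```python
# Sketch of the time-of-flight depth calculation described above.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_depth(round_trip_time_s):
    """Depth (m) from the time difference between emission and reception."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

depth = tof_depth(20e-9)  # a 20 ns round trip corresponds to about 3 m
```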
Referring to fig. 6, the 3D depth sensor 220 includes a light emitting part 221 that emits infrared light toward an object, a light receiving part 226 that receives infrared light reflected from the object, a pixel array 228 in which a plurality of depth pixels (detectors or sensors) 227 are arranged, and the like, and may further include a row decoder 222, a timing controller (T/C)223 that controls the time at which infrared light is emitted, a photogate controller (PG CON)224, a storage unit 225 that stores information about a captured depth image, and the like.
Accordingly, the 3D depth sensor 220 may recognize a distance between the object and the sensor based on such information, and thus may more accurately correct the position and the map of the robot using the recognized distance. The correction of the position of the robot and the map will be described below.
The obstacle sensor 300 senses objects near the mobile robot 100 by analyzing the image information obtained from the 3D space recognition sensor 210. Here, an obstacle may be understood to include everything located above a specific plane within the moving range of the mobile robot 100, such as a doorsill, furniture, a human, or an animal. The method of identifying an obstacle will be described in detail below.
The obstacle sensor 300 senses whether an obstacle is present by receiving the signal that returns when light emitted from the 3D space recognition sensor 210 hits the obstacle, and determines whether the obstacle is nearby using the distance measurement signal from the 3D space recognition sensor 210. In addition, the obstacle sensor 300 may transmit information about the sensed obstacle to the position estimator 400 and the controller 500.
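As a simplified illustration of treating everything above a reference plane as an obstacle (the flat-floor assumption, the 2 cm threshold, and all names here are hypothetical, not the patent's algorithm):

```python
# Illustrative sketch: 3D points higher than the assumed flat floor by more
# than a small threshold are flagged as obstacle points.
def sense_obstacles(points, floor_z=0.0, height_threshold=0.02):
    """points: iterable of (x, y, z) in metres; return the obstacle points."""
    return [p for p in points if p[2] - floor_z > height_threshold]

scan = [(1.0, 0.0, 0.005),   # floor-level return: ignored
        (1.2, 0.1, 0.30),    # box-height return: obstacle
        (2.0, -0.3, 0.15)]   # doorsill-height return: obstacle
obstacle_points = sense_obstacles(scan)
```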
The position estimator 400 includes an inertial sensor unit (IMU) 410 and a ranging unit 420, and estimates the current position (hereinafter referred to as "first position information") of the mobile robot 100 using information received from these units, based on the obstacle information received from the obstacle sensor 300.
The inertial sensor unit IMU410 may use the IMU to estimate the position of the mobile robot 100.
The IMU is a sensor that senses the inertial force of a moving object and measures various types of information about its motion, such as acceleration, velocity, direction, and distance. It operates on the principle of detecting the inertial force that an applied acceleration exerts on an inertial body.
IMUs may be classified into accelerometers and gyroscopes, and operate by various methods such as a method using a laser, a non-mechanical method, and the like.
Accordingly, the inertial sensor unit IMU410 may include inertial sensors using inertial inputs, such as acceleration sensors, inertial sensors, geomagnetic sensors, and the like. The acceleration sensor may include at least one of a piezoelectric acceleration sensor, a capacitive acceleration sensor, a strain gauge acceleration sensor, and the like.
The ranging unit 420 may estimate the current position and orientation of the mobile robot 100 using the ranging information.
Ranging (odometry) refers to a method of identifying the position and orientation of the mobile robot 100 from its own motion measurements, and is also called dead reckoning.
The position and orientation of the mobile robot 100 to which ranging is applied may be determined by acquiring information on the speed of the mobile robot 100 from an odometer or a wheel sensor, acquiring information on its orientation from a magnetic sensor or the like, and then calculating the moving distance and direction from the initial position of the mobile robot 100 to its next position.
When ranging is used, the position of the mobile robot 100 may be determined using only information generated through ranging without having to receive additional information from the outside, and thus the structure of the system is relatively simple. Further, by ranging, the position information of the mobile robot 100 can be obtained at a very high sampling speed and thus updated rapidly. In addition, the accuracy of ranging over a relatively short distance is very high, and its cost is low.
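The dead-reckoning calculation described above can be sketched as follows. This is an illustrative example, not part of the patent disclosure: the function name, the differential-drive geometry, and the encoder inputs are all assumptions.

```python
import math

def odometry_update(x, y, theta, d_left, d_right, wheel_base):
    """Dead-reckoning pose update for a differential-drive robot.

    d_left / d_right: distances travelled by each wheel since the
    last update (e.g. from wheel encoders or an odometer)."""
    d_center = (d_left + d_right) / 2.0          # forward travel
    d_theta = (d_right - d_left) / wheel_base    # heading change
    # Integrate the pose. Encoder errors accumulate through this
    # integration, which is why the estimate must later be corrected.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2.0 * math.pi)
    return x, y, theta

# Straight line: both wheels move 1 m, so heading is unchanged.
pose = odometry_update(0.0, 0.0, 0.0, 1.0, 1.0, 0.3)
```

Because each update depends only on wheel measurements, no external signal is needed, which matches the simplicity and high update rate noted above.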
The position estimator 400 estimates first position information of the mobile robot 100 while excluding information on an obstacle area received from the obstacle sensor 300 based on information on the mobile robot 100 obtained from the inertial sensor unit IMU410 and the ranging unit 420. The reason why the obstacle region is excluded is because the mobile robot 100 cannot pass through the obstacle region, and thus the obstacle region can be excluded from the space that is the target of the position estimation. When the obstacle region is excluded, the range in which the position of the mobile robot 100 is estimated is reduced, and therefore the position of the mobile robot 100 can be estimated more efficiently.
The position estimator 400 may transmit the first position information of the mobile robot 100 estimated as described above to the controller 500.
The controller 500 may receive information about an obstacle from the obstacle sensor 300 and first position information of the mobile robot 100 from the position estimator 400, and calculate more accurate current position information (hereinafter, referred to as "second position information") of the mobile robot 100 by correcting the position of the mobile robot 100 based on the received information and the first position information.
The controller 500 may create a map using the second position information of the mobile robot 100 and the obstacle region, and plan a moving path of the mobile robot 100 based on the estimated second position information of the mobile robot 100 and the created map.
In addition, the controller 500 may store the estimated position information of the mobile robot 100 and the created map.
Accordingly, the controller 500 may include a position corrector 510 to calculate second position information of the mobile robot 100, a map creator 520 to create a map based on the received information on the obstacle and the second position information of the mobile robot 100, a path planner 530 to plan a moving path of the mobile robot 100 based on the second position information of the mobile robot 100 and the created map, and a storage unit 540 to store the position information of the mobile robot 100 and the created map.
The position corrector 510 may more accurately correct the received first position information of the mobile robot 100.
The reason for correcting the estimated first position information of the mobile robot 100 is that ranging calculates the position and orientation of an object by integration, and thus an increase in the moving distance causes the measurement error to accumulate, greatly increasing the difference between the actual position of the mobile robot 100 and its estimated position.
Further, the information obtained via the inertial sensor unit IMU410 contains an error, and thus the first position information should be corrected to more accurately estimate the position of the mobile robot 100.
Accordingly, the position corrector 510 may calculate the second position information of the mobile robot 100 by comparing the information on the depth image detected by the 3D depth sensor 220 with the previous map stored in the storage unit 540 and correcting the first position information of the mobile robot 100.
In the method of correcting the position of the mobile robot 100, the second position information of the mobile robot 100 is calculated by distributing particles around the first position information of the mobile robot 100, calculating a matching score corresponding to each particle, and correcting the position of the mobile robot 100 to the most probable position.
The position corrector 510 corrects the position of the mobile robot 100 according to a probability-based filtering method, and thus may include a particle filter. The position corrector 510 may include at least one of a kalman filter, an Extended Kalman Filter (EKF), an Unscented Kalman Filter (UKF), an information filter, a histogram filter, etc., which have substantially the same function as the particle filter. The method of correcting the position of the mobile robot 100 and the particle filter will be described in detail below with reference to fig. 17.
The map creator 520 may create a map using the second position information of the mobile robot 100 calculated by the position corrector 510, the information about the obstacle sensed by the obstacle sensor 300, and the 3D image obtained from the 3D space recognition sensor 210.
The map creator 520 may create a map while excluding the obstacle region sensed by the obstacle sensor 300. The area identified as an obstacle has been sensed as an obstacle and is therefore not used to create a map. Thus, the map can be created more efficiently.
To increase the accuracy of the map created by SLAM, the map creator 520 may correct distortion of the map, i.e., perform pose graph optimization. This is because, when particle filter-based SLAM is used, the map may be distorted when the mapped space is large.
Therefore, in order to prevent distortion of the map, the map creator 520 updates the moving path (pose graph) of the mobile robot 100 at certain time intervals while scanning the position of the mobile robot 100 and the 3D image of its surroundings. When the closing of the moving path of the mobile robot 100 (loop closure) is detected based on these images, the pose graph is optimized to minimize distortion of the map. This process will be described in detail below.
Map creator 520 may detect loop closure using at least one of Wi-Fi, the 3D image, visual data, and a docking station, and may include a Wi-Fi communication module and various types of docking-related devices for this purpose.
The storage unit 540 may store general information of the mobile robot 100, i.e., a map about an environment in which the mobile robot 100 operates, an operation program for operating the mobile robot 100, an operation mode of the mobile robot 100, position information of the mobile robot 100, information about obstacles obtained during movement of the mobile robot 100, and the like.
When the pose graph is optimized by map creator 520, the above information may be sent to map creator 520.
Accordingly, the storage unit 540 may include a non-volatile memory device such as a read only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), or a flash memory, a volatile memory such as a random access memory (RAM), or a storage medium such as a hard disk, a card type memory (e.g., SD or XD memory), or an optical disk, to store various types of information. However, the storage unit 540 is not limited thereto, and any other storage media that the designer can consider may be used.
The structure of the mobile robot 100 according to the present invention has been described above. An algorithm for recognizing the position of the mobile robot 100 and creating a map according to the present invention will be described below.
Fig. 7 is a flowchart of a method of estimating the current position of the mobile robot 100 and an order of creating a map according to an embodiment of the present invention.
When the mobile robot 100 moves, the mobile robot 100 obtains a 3D image by capturing an image of its surroundings using a 3D spatial recognition sensor, and simultaneously extracts a depth image of the 3D image (S100).
The 3D image may be obtained in real time, and images of the front, side, and rear views of the mobile robot 100 may be captured freely.
Meanwhile, the 3D depth sensor 220 of the capturing unit 200 may capture an image of the area photographed by the 3D space recognition sensor 210 and detect information on a depth image of that area.
The information on the depth image may be used to correct the estimated first position information of the mobile robot 100 and calculate second position information of the mobile robot 100.
When the 3D image is obtained, the 3D image is analyzed to sense an obstacle located near the mobile robot 100 (S200).
In the method of sensing an obstacle, an obstacle region may be identified by segmenting the bottom of the obtained 3D image and recognizing only objects located above a specific plane. The process of sensing and identifying an obstacle will be described in detail below with reference to figs. 8 to 16.
When an obstacle area is sensed, the current position of the mobile robot 100 (i.e., the first position information of the mobile robot 100) is estimated using the ranging and the IMU while excluding the sensed obstacle area (S300).
As described above, the reason why the obstacle area is excluded is because the mobile robot 100 cannot move through the area where the obstacle exists. Therefore, by designating the obstacle region as a region where the mobile robot 100 cannot be located at the initial stage, the position of the mobile robot 100 can be estimated more quickly.
When the first position information of the mobile robot 100 is estimated in S300, the current position of the mobile robot 100 is corrected using the 3D depth image information. In other words, the second position information of the mobile robot 100, which is more accurate than the first position information of the mobile robot 100, is calculated (S400).
As described above, the current position of the mobile robot 100 is corrected by distributing particles around the estimated first position information of the mobile robot 100, calculating a matching score corresponding to each of the particles, and correcting the position of the mobile robot 100 to the most probable position. The method will be described in detail below with reference to fig. 17.
When the second position information of the mobile robot 100 is calculated, a map of an area where the mobile robot 100 can move excluding the sensed obstacle area is created (S500).
In general, this process called SLAM may be performed based on the image obtained from the 3D space recognition sensor 210 and the second position information of the mobile robot 100.
When the map is created in S500, a map correction operation (pose graph optimization) may be performed to correct distortion of the map (S600).
As described above, in the particle filter-based SLAM, when the map construction space is large, the map may be distorted, and therefore it is necessary to perform the map correction work.
When the movement of the mobile robot 100 starts, the moving path (pose graph) of the mobile robot 100 is updated by scanning the position of the mobile robot 100 and the 3D image of its surroundings. When the closing of the moving path of the mobile robot 100 (loop closure) is detected based on this information, pose graph optimization is performed to minimize distortion of the map. This process will be described in detail below with reference to figs. 18 to 20.
When the distortion of the map is corrected and the map is completed, an optimized moving path of the mobile robot 100 is planned using the second position information of the mobile robot 100 and the completed map (S700).
The estimation of the position of the mobile robot 100 and the creation of the map according to the present invention are briefly described above with reference to fig. 7. The specific operation will be described below with reference to the drawings.
Fig. 8 to 15 are diagrams illustrating a method of sensing an obstacle (i.e., S200) in detail. Fig. 16 and 17 are diagrams illustrating the estimation of the position of the mobile robot 100 (S300 and S400). Fig. 18 to 21 are diagrams illustrating in detail the correction of the created map (S600).
Fig. 8 is a diagram illustrating the mobile robot 100 and an obstacle located in an area that the mobile robot 100 can sense.
As described above, the obstacle may be understood to mean everything located above a specific plane within the moving range of the mobile robot 100, such as a doorsill, furniture, a human being, an animal, and the like.
Only objects with a height above a specific plane are treated as obstacles because, if every small object that does not interrupt the movement of the mobile robot 100 were also sensed as an obstacle, the amount of data would become so large that moving the robot 100 and creating a map would take much more time.
Referring to fig. 8, five objects A, B, C, D and E are in front of the mobile robot 100 located in a specific area. All five objects are located within a range that can be recognized by the 3D space recognition sensor 210. The drawings to be described below will be described with reference to objects A, B, C, D and E of fig. 8.
Fig. 9 is a flowchart of an algorithm of a method for sensing an obstacle in the space of fig. 8, which is performed by the mobile robot 100.
The mobile robot 100 obtains a 3D image of a front view of the mobile robot 100 using the 3D space recognition sensor 210, and recognizes all objects included in the obtained 3D image (S210).
However, as described above, not all recognizable objects are obstacles, and thus the mobile robot 100 recognizes only objects corresponding to obstacles using the bottom division method (S220).
The bottom division method is a method of setting a specific plane as a reference point and determining only an object having a height higher than the specific plane among all objects recognized by the mobile robot 100 as an obstacle.
There are two available bottom division methods: a method using geometric information and a random sample consensus (RANSAC) method, as will be described in detail below.
When the reference plane is determined by the bottom division method, only the obstacle selected according to the reference plane is determined as the obstacle (S230).
Then, the objects determined as the obstacles are expressed using a point cloud (point cloud), and then information on the obstacles is obtained by performing 3D grouping and marking to recognize the objects as the same obstacles (S240 and S250).
The 3D grouping and labeling is performed to easily track moving obstacles, since not all obstacles are static; some may be dynamic. A tracker may be used to track dynamic obstacles. Dynamic obstacles will be described below with reference to fig. 15.
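The 3D grouping step described above can be illustrated as connected-component labeling over occupied voxels, so that all cells belonging to one physical obstacle share a label. This is a minimal sketch under assumed conventions (integer voxel coordinates, 26-connectivity); it is not the patent's implementation.

```python
from collections import deque

def label_voxels(occupied):
    """Group occupied voxels into connected components (26-connectivity)
    so the cells of one physical obstacle share a single label."""
    labels = {}
    next_label = 0
    for seed in occupied:
        if seed in labels:
            continue
        # Flood-fill one component with breadth-first search.
        queue = deque([seed])
        labels[seed] = next_label
        while queue:
            x, y, z = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        n = (x + dx, y + dy, z + dz)
                        if n in occupied and n not in labels:
                            labels[n] = next_label
                            queue.append(n)
        next_label += 1
    return labels

# Two adjacent voxels form one obstacle; the distant voxel is another.
voxels = {(0, 0, 0), (1, 0, 0), (5, 5, 0)}
labels = label_voxels(voxels)
```

Once labeled, a moving obstacle keeps its identity between frames, which is what makes tracking with a tracker straightforward.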
Fig. 10 to 14 are diagrams illustrating the bottom division method (i.e., S220). Fig. 10 illustrates a method of determining a reference plane by a geometric method. Fig. 11 to 14 are diagrams for explaining a reference plane determined by the RANSAC method.
Fig. 10 is a perspective view of the mobile robot 100 and the obstacles of fig. 8 when viewed from the side.
Referring to fig. 10, the mobile robot 100 may sense an object in front of it via the 3D space recognition sensor 210 installed at its front. Here, the geometric method designates a specific height with reference to the 3D space recognition sensor 210 and sets the plane corresponding to that height as the reference plane.
In fig. 10, given the horizontal line h corresponding to the position of the 3D space recognition sensor 210, a plane spaced downward from the horizontal line h by a certain distance d1 or d2 may serve as the reference plane for determining obstacles.
When the reference plane for determining an obstacle is set at a height spaced downward from the horizontal line h by the distance d1, the object A 310 lies below the reference plane and therefore is not recognized as an obstacle. However, the objects B320, C330, D340, and E350 extend above the reference plane and thus may be recognized as obstacles.
When the reference plane for determining an obstacle is set at a height spaced downward from the horizontal line h by the distance d2, the objects A 310 and B320 are located below the reference plane and thus are not recognized as obstacles. However, the objects C330, D340, and E350 extend above the reference plane and thus are identified as obstacles.
In general, the horizontal line h may be set at a height at which an object that does not interrupt the movement of the mobile robot 100 is not recognized as an obstacle, and may be set at a height desired by the user.
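The geometric method above amounts to a simple height threshold over the sensed point cloud. The following sketch is illustrative only (the coordinate convention, with z measured upward from the floor, and all names are assumptions, not the patent's):

```python
def filter_obstacles(points, sensor_height, d):
    """Keep only points above the reference plane, which lies a
    distance d below the sensor's horizontal line h."""
    reference_z = sensor_height - d
    return [p for p in points if p[2] > reference_z]

# Sensor at 0.5 m; reference plane 0.4 m below it, i.e. at 0.1 m.
# A 0.05 m-high point (like object A) is ignored; a 0.3 m point is kept.
pts = [(1.0, 0.0, 0.05), (2.0, 0.0, 0.3)]
obstacles = filter_obstacles(pts, sensor_height=0.5, d=0.4)
```

Raising d lowers the reference plane and makes the robot more conservative, recognizing smaller objects as obstacles, exactly as in the d1/d2 comparison above.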
Fig. 11 to 14 are diagrams illustrating the RANSAC method, another method of setting the plane for sensing an obstacle.
Fig. 11 is a graph illustrating data actually measured. Fig. 12 is a diagram illustrating an approximation result obtained by the least square method. Fig. 13 is a diagram illustrating an approximate result obtained by the RANSAC method. Fig. 14 is a diagram illustrating a plane determined by the RANSAC method.
The RANSAC method stands for random sample consensus, and is a method of randomly selecting sample data and choosing the model with the maximum consensus.
More specifically, a few pieces of sample data are selected at random, model parameters satisfying them are calculated, and the number of data points lying near the resulting model is counted. When this number is large, the corresponding model is stored. This process is repeated N times, and the model with the highest rate of agreement is returned as the final result.
Although the least squares method is more convenient to use than the RANSAC method, it yields the desired result only when the data contains few errors or little noise; when the data contains much noise, it yields an approximation that is distorted away from the actual data of fig. 11, as shown in fig. 12. When the RANSAC method is used instead, the desired result shown in fig. 13 can be obtained.
As shown in fig. 14, extracting the plane includes creating an initial model using the RANSAC method with reference to the point cloud corresponding to the segmented pixels. Here, the plane can be represented by two angle parameters α and β, which define the vector perpendicular to the plane, and the perpendicular distance d from the origin to the plane.
Further, to represent a plane, an error model (i.e., error variance) such as an angle or a distance may be used, and a gaussian distribution of 3D points corresponding to a set of pixels used to obtain the plane may be used.
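A minimal RANSAC plane fit over a 3D point cloud, in the spirit of the procedure above, might look like the following sketch. The iteration count, inlier tolerance, and all names are illustrative assumptions; the patent does not specify them.

```python
import random

def plane_from_points(p1, p2, p3):
    """Plane n·x = d through three points; normal via cross product."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:          # degenerate (collinear) sample
        return None
    n = [c / norm for c in n]
    return n, sum(n[i] * p1[i] for i in range(3))

def ransac_plane(points, iters=200, tol=0.02, seed=0):
    """Sample 3 points, count inliers within tol of the candidate
    plane, and keep the model with the largest consensus."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        model = plane_from_points(*rng.sample(points, 3))
        if model is None:
            continue
        n, d = model
        inliers = sum(
            1 for p in points
            if abs(sum(n[i] * p[i] for i in range(3)) - d) < tol)
        if inliers > best_inliers:
            best, best_inliers = model, inliers
    return best, best_inliers

# A flat floor at z = 0 plus a few outlier points above it.
pts = [(x * 0.1, y * 0.1, 0.0) for x in range(5) for y in range(5)]
pts += [(0.2, 0.2, 0.5), (0.3, 0.1, 0.7)]
(normal, d), inliers = ransac_plane(pts)
```

The recovered plane is the floor, and everything far from it (the outliers here) would be handed to the obstacle-determination step S230.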
Fig. 15 is a diagram illustrating a method of sensing a moving obstacle according to another embodiment.
Referring to fig. 15, the object C330 is a static object that has been moved by an external force, while the dog 360 represents the movement of a dynamic obstacle.
Obstacles are not always static, so dynamic obstacles should also be identified. However, it is inefficient to recognize an obstacle that has moved only slightly as a new obstacle and perform obstacle sensing again. Thus, as described above, a moving obstacle can be continuously tracked because its grouped and labeled cells are recognized as one obstacle. In other words, the movement of either a static or a dynamic object can then be attributed to a known obstacle, and obstacle recognition can be performed more conveniently and quickly.
Fig. 16 and 17 are diagrams illustrating estimation of the position of the mobile robot 100. Fig. 16 is a diagram illustrating a case where the mobile robot cannot obtain a 3D image. Fig. 17 is a diagram illustrating calculation of second position information of the mobile robot 100 using a particle method.
Referring to fig. 8, the mobile robot 100 estimates its first position information using the IMU and ranging, and calculates its second position information using depth image information obtained through the 3D depth sensor 220. Accordingly, depth image information may be obtained from the 3D depth sensor 220 to calculate the second position information of the mobile robot 100.
However, as shown in fig. 16, the 3D depth sensor 220 may not always be able to obtain a 3D image, because its sensing range is limited. In the case of fig. 16, therefore, 3D image information of the obstacles 330, 340, and 350, which are far from the mobile robot 100, cannot be obtained.
In this case, the first position information of the mobile robot 100 cannot be corrected using depth image information, and thus it may be output as the second position information of the mobile robot 100.
Fig. 17 is a diagram illustrating calculation of the second position information of the mobile robot 100 using a particle filter method based on its first position information.
The second location information of the mobile robot 100 is calculated by distributing particles around the first location information, calculating a matching score corresponding to each particle, and correcting the location of the mobile robot 100 to the most likely location.
Since a probability-based filtering method is used for the correction, a Bayesian filter may generally be used. A Bayesian filter is a probability-based filter founded on Bayes' theorem, in which a likelihood function and a prior probability distribution are combined to obtain the posterior probability.
A representative example of the bayesian filter method is a particle filter method. The particle filter method is a simulation method based on trial and error, also known as the Sequential Monte Carlo (SMC) method.
The Monte Carlo method identifies the characteristics of a system by evaluating a function at a sufficiently large number of random inputs and aggregating the results.
Therefore, as shown in fig. 17, the mobile robot 100 can calculate its more accurate position, i.e., its second position information, by distributing many particles around the mobile robot 100 during movement.
The method of sensing an obstacle and estimating the position of the mobile robot 100 performed by the mobile robot 100 has been described above. A method of creating and correcting a map by the mobile robot 100 will be described below.
When the second position information of the mobile robot 100 is calculated, a map is created based on the calculated second position information, the obstacle region sensed by the obstacle sensor 300, and the 3D image information obtained from the 3D space recognition sensor 210.
As described above, the mobile robot 100 creates a map while excluding the area identified as the obstacle. Therefore, it is not necessary to obtain information about the obstacle region, and thus the map can be created more efficiently.
However, when the mobile robot 100 creates and updates the map in real time, an error between the created map and the actual map may occur. In other words, map distortion may arise from the odometer, measuring instruments, and the like of the mobile robot 100, causing a difference between the map created by the mobile robot 100 and the actual map. Such map distortion is illustrated in fig. 18.
Referring to fig. 18, reference numeral 700 denotes an actual map of an environment in which the mobile robot 100 is moving. However, as described above, the map created by the mobile robot 100 may be substantially the same as the map 710 or 720 due to errors occurring in the measurement data. Therefore, when the moving path of the mobile robot 100 is planned based on the created map, the moving path is not accurate. Therefore, such map distortion should be corrected.
Fig. 19 is a flowchart of a method of correcting a map according to an embodiment of the present invention.
Referring to fig. 19, the mobile robot 100 performs SLAM in real time during movement (S1000). The result of performing SLAM is scanned and stored in the scan data storage unit (S1100), and at the same time, the moving path (pose graph) of the mobile robot 100 is updated (S1200). Here, the term "scan data" may be understood to mean geometric information obtained by scanning the area through which the mobile robot 100 is moving.
The mobile robot 100 detects whether its moving path is closed (loop closure) while analyzing the periodically updated pose graph as described above (S1300). When a loop closure is detected, it may be used for map correction (pose graph optimization) (S1400).
Here, the term "loop closure" may be understood to mean identifying reliable initial position information of the mobile robot 100 when it travels a certain distance and returns to its initial position, and may also be referred to as a revisit or a closed loop.
Fig. 20 is a diagram illustrating a loop closure, in which an actual movement path of the mobile robot 100 and an estimated movement path of the mobile robot 100 are illustrated.
As shown in fig. 20, the mobile robot 100 starts moving at point X0, moves to point X1, and finally reaches point X10 while updating the pose graph. However, as shown in fig. 20, when the mobile robot 100 moves a long distance and returns, error accumulates in the SLAM estimate, and thus the estimated path does not return to the exact original position.
Therefore, a corrected map, created by reconciling the initial position information of the mobile robot 100 with its position information when it revisits the vicinity of the initial position, is required. This process is called pose graph optimization.
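The effect of a loop-closure constraint can be illustrated with a strongly simplified sketch: when the robot is known to have started and ended at the same physical place, the residual between the first and last estimated poses is spread back along the trajectory. Real pose graph optimization minimizes all constraint errors jointly (e.g. with nonlinear least squares); this linear distribution is only an illustrative stand-in.

```python
def distribute_loop_error(poses):
    """Simplified loop-closure correction: the robot started and ended
    at the same physical place, so the residual between the first and
    last estimated poses is spread linearly along the trajectory."""
    ex = poses[-1][0] - poses[0][0]   # accumulated drift in x
    ey = poses[-1][1] - poses[0][1]   # accumulated drift in y
    n = len(poses) - 1
    return [(x - ex * i / n, y - ey * i / n)
            for i, (x, y) in enumerate(poses)]

# A square path whose estimate drifts: it ends at (0.4, -0.2)
# instead of returning to the start (0, 0).
path = [(0, 0), (1, 0.1), (1.1, 1), (0.2, 1.1), (0.4, -0.2)]
corrected = distribute_loop_error(path)
```

After correction the estimated end pose coincides with the start, removing the kind of distortion shown in fig. 20.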
Loop closure may be detected by comparing information about feature points of an image captured by a camera with newly input data, or by comparing previously stored scan data with newly input scan data using geometric information obtained by a sensor such as a laser sensor or an ultrasonic sensor.
Alternatively, loop closure may be detected through communication such as Wi-Fi or using a docking station.
Fig. 21 is a diagram illustrating a final map obtained according to the present invention.
Fig. 21 illustrates a two-dimensional (2D) grid map, but a 3D grid map may also be created. A grid map divides the image of the surroundings of the mobile robot 100 into small grid cells and represents the probability that an object occupies each cell; it is also referred to as a probability grid map.
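The per-cell occupancy probability of such a grid map is commonly maintained in log-odds form so that repeated sensor evidence can be accumulated by addition. The sketch below is a generic illustration of that idea, not text from the patent; the sensor-model probabilities (0.7 for a hit, 0.3 for a pass-through) are assumed values.

```python
import math

def logodds(p):
    return math.log(p / (1.0 - p))

def update_cell(l, hit, l_occ=logodds(0.7), l_free=logodds(0.3)):
    """Bayesian log-odds update for one grid cell: add evidence when
    the scan hits the cell, subtract it when the beam passes through
    (logodds(0.3) is negative)."""
    return l + (l_occ if hit else l_free)

def prob(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# A cell hit by three consecutive scans becomes confidently occupied.
l = 0.0                     # prior: p = 0.5 (unknown)
for _ in range(3):
    l = update_cell(l, hit=True)
```

Starting every cell at log-odds 0 encodes the "unknown" prior of 0.5, and cells never observed stay there, which is how unexplored regions remain distinguishable from free space.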
Referring to fig. 21, the mobile robot 100 moves within a certain area while drawing a loop 610 and corrects its position in real time while distributing particles 600, and an obstacle 630 is detected by the obstacle sensor 300.
In fig. 21, reference numeral 620 denotes an area sensed as a wall by the 3D space recognition sensor 210, and the area 620 is actually a wall.
The technology for recognizing the position of the mobile robot and creating a map according to the present invention has been described above. According to the present invention, sensing an obstacle, recognizing the position of the mobile robot, creating a map, and planning a moving path of the mobile robot may all be performed using a 3D image. Thus, SLAM may proceed more quickly and accurately, since areas identified as obstacles are not used for SLAM. Furthermore, no additional obstacle sensor is required, so sensing various types of obstacles, performing SLAM, and planning a path can all be performed using the 3D image.
In addition, in the 3D navigation system as described above, the accuracy of the map is improved when compared to the existing 2D system, and thus a map-based robot service can be provided using the map.
Claims (14)
1. A mobile robot, comprising:
a capturing unit configured to capture a three-dimensional (3D) image of the surroundings of the mobile robot and extract depth image information of the captured 3D image;
a position estimator configured to estimate current position information of the mobile robot within an area;
a controller configured to calculate second position information of the mobile robot using the estimated current position information of the mobile robot and the extracted depth image information, and create a map;
the mobile robot further includes an obstacle sensor configured to receive the 3D image from the capturing unit and sense an object having a height higher than a certain plane as an obstacle by analyzing information on the 3D image captured by the capturing unit;
the position estimator is further configured to estimate the current position information by excluding an obstacle region sensed by the obstacle sensor using an inertial measurement unit and a ranging unit; and
the controller is further configured to create a map when excluding an obstacle region sensed by the obstacle sensor.
2. The mobile robot of claim 1, wherein the particular plane comprises one of a lowest portion of the mobile robot and a particular geometric plane set by a user.
3. The mobile robot as claimed in claim 1, wherein the specific plane is set by a random sample consensus (RANSAC) method.
4. The mobile robot of claim 1, wherein the obstacle sensor tracks the sensed obstacle using a tracker when there is a moving obstacle among the sensed obstacles.
5. The mobile robot of claim 1, wherein the controller sets current position information of the mobile robot as second position information of the mobile robot when depth image information of the 3D image is not extracted.
6. The mobile robot of claim 1, wherein the controller calculates the second position information of the mobile robot using depth image information of the 3D image and a previously stored map.
7. The mobile robot of claim 6, wherein the controller calculates the second location information of the mobile robot from the current location information of the mobile robot using a probability-based filtering method using a bayesian filter.
8. The mobile robot of claim 7, wherein the bayesian filter comprises at least one of a kalman filter, an Extended Kalman Filter (EKF), an Unscented Kalman Filter (UKF), an information filter, a histogram filter, and a particle filter.
9. The mobile robot of claim 1, wherein the controller corrects the map using a loop closure when the loop closure is detected from second position information of the mobile robot.
10. The mobile robot of claim 9, wherein the controller senses the loop closure using at least one of Wi-Fi, 3D information, visual data, and docking station.
11. The mobile robot of claim 1, wherein the controller comprises a path planner configured to plan a movement path of the mobile robot based on the second location information of the mobile robot and the created map.
12. A method of controlling a mobile robot, the method comprising:
capturing a three-dimensional (3D) image of the surroundings of the mobile robot, and extracting depth image information of the captured 3D image;
estimating current position information of the mobile robot within an area;
calculating second position information of the mobile robot using the estimated current position information of the mobile robot and the extracted depth image information;
creating a map;
wherein the method further comprises:
sensing, as an obstacle, an object having a height higher than a specific plane by analyzing information on the 3D image captured by the capturing unit;
estimating the current position information, using an inertial measurement unit and a ranging unit, while excluding an obstacle region sensed by an obstacle sensor; and
creating the map while excluding the sensed obstacle region.
13. The method of claim 12, wherein the calculating of the second position information of the mobile robot includes calculating the second position information of the mobile robot using depth image information of the 3D image and a previously stored map.
14. The method of claim 12, wherein the creating of the map includes, when a loop closure is detected from the second position information of the mobile robot, correcting the map using the loop closure.
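Claims 1 and 12 describe sensing, as an obstacle, any object higher than a particular plane extracted from the 3D image. A minimal sketch of that height test, assuming the reference plane has already been fitted and the depth image converted to camera-frame 3D points (the function name, margin value, and array shapes are illustrative, not from the patent):

```python
import numpy as np

def obstacle_mask(points, plane_normal, plane_d, height_margin=0.03):
    """Flag 3-D points lying above the reference plane as obstacle points.

    points       : Nx3 array of camera-frame points from the depth image
    plane_normal : unit normal of the reference plane, pointing "up"
    plane_d      : plane offset, so the plane satisfies  n.x + d = 0
    """
    heights = points @ plane_normal + plane_d   # signed distance to the plane
    return heights > height_margin              # True = treat as obstacle
```

Cells of the map corresponding to `True` entries would then be excluded from map creation, as the claims describe.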
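Claim 3 sets the particular plane by random sample consensus (RANSAC). The idea can be sketched as: repeatedly fit a candidate plane through three random points and keep the plane with the most inliers. The iteration count and inlier threshold below are illustrative assumptions, not values from the patent:

```python
import random
import numpy as np

def ransac_plane(points, iters=200, dist_thresh=0.02, rng=None):
    """Fit a plane n.x + d = 0 to an Nx3 point array with RANSAC."""
    rng = rng or random.Random(0)
    best_inliers, best_plane = 0, None
    n = len(points)
    for _ in range(iters):
        # Sample 3 distinct points and form a candidate plane from them.
        i, j, k = rng.sample(range(n), 3)
        p1, p2, p3 = points[i], points[j], points[k]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                 # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p1)
        # Count points within dist_thresh of the candidate plane.
        dists = np.abs(points @ normal + d)
        inliers = int((dists < dist_thresh).sum())
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers
```

The winning plane (typically the floor in an indoor scene) then serves as the reference against which obstacle heights are measured.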
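For the particle filter named among the Bayesian filters in claim 8, one predict/update/resample cycle might look like this one-dimensional sketch; the Gaussian motion and measurement models and all noise values are placeholder assumptions, not the patent's models:

```python
import math
import random

def particle_filter_step(particles, weights, control, measurement,
                         motion_noise=0.05, meas_noise=0.1, rng=None):
    """One predict/update/resample cycle of a 1-D particle filter.

    particles   : list of scalar pose hypotheses
    control     : odometry displacement since the last step
    measurement : observed absolute pose (e.g. from depth-image matching)
    """
    rng = rng or random.Random(0)
    # Predict: propagate each particle through the noisy motion model.
    predicted = [p + control + rng.gauss(0, motion_noise) for p in particles]
    # Update: reweight by the Gaussian measurement likelihood.
    new_w = [w * math.exp(-0.5 * ((measurement - p) / meas_noise) ** 2)
             for p, w in zip(predicted, weights)]
    total = sum(new_w) or 1e-12
    new_w = [w / total for w in new_w]
    # Resample proportionally to weight (multinomial resampling).
    resampled = rng.choices(predicted, weights=new_w, k=len(predicted))
    uniform = [1.0 / len(resampled)] * len(resampled)
    return resampled, uniform
```

The posterior pose estimate (the "second position information") would be the weighted mean of the surviving particles.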
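Claims 9 and 14 correct the map when a loop closure is detected. One simple correction, not necessarily the patent's method, distributes the residual between the loop's endpoints linearly along the trajectory:

```python
def correct_loop(poses, drift):
    """Spread the loop-closure error linearly along the trajectory.

    poses : list of (x, y) estimated poses; poses[-1] should coincide
            with poses[0] once the loop is closed
    drift : (dx, dy) residual between the loop's endpoints
    """
    n = len(poses)
    corrected = []
    for i, (x, y) in enumerate(poses):
        f = i / (n - 1)                 # 0 at the start, 1 at the loop end
        corrected.append((x - f * drift[0], y - f * drift[1]))
    return corrected
```

Full SLAM systems instead solve a pose-graph optimization, but this linear distribution conveys the effect: accumulated odometry drift is pushed back through earlier poses so the corrected map is self-consistent.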
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150166310A KR102403504B1 (en) | 2015-11-26 | 2015-11-26 | Mobile Robot And Method Thereof |
KR10-2015-0166310 | 2015-11-26 | ||
PCT/KR2016/013630 WO2017091008A1 (en) | 2015-11-26 | 2016-11-24 | Mobile robot and control method therefor |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108290294A CN108290294A (en) | 2018-07-17 |
CN108290294B true CN108290294B (en) | 2022-05-10 |
Family
ID=58763612
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201680069236.8A Active CN108290294B (en) | 2015-11-26 | 2016-11-24 | Mobile robot and control method thereof |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP3367199B1 (en) |
KR (1) | KR102403504B1 (en) |
CN (1) | CN108290294B (en) |
WO (1) | WO2017091008A1 (en) |
Families Citing this family (40)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106643692A (en) * | 2016-09-28 | 2017-05-10 | 深圳乐行天下科技有限公司 | Robot navigation and positioning method, system and robot |
KR102326077B1 (en) * | 2017-06-15 | 2021-11-12 | 엘지전자 주식회사 | Method of identifying movable obstacle in 3-dimensional space and robot implementing thereof |
CN109634267B (en) * | 2017-10-09 | 2024-05-03 | 北京瑞悟科技有限公司 | Intelligent goods-picking and delivery robot for shopping malls and supermarkets |
WO2019089018A1 (en) | 2017-10-31 | 2019-05-09 | Hewlett-Packard Development Company, L.P. | Mobile robots to generate reference maps for localization |
CN107909614B (en) * | 2017-11-13 | 2021-02-26 | 中国矿业大学 | Positioning method of inspection robot in GPS failure environment |
KR102051046B1 (en) * | 2017-11-21 | 2019-12-02 | 유진기술 주식회사 | Following system for solar panel cleaning robot of mobile robot and method thereof |
WO2019104693A1 (en) * | 2017-11-30 | 2019-06-06 | 深圳市沃特沃德股份有限公司 | Visual sweeping robot and method for constructing scene map |
CN108731736B (en) * | 2018-06-04 | 2019-06-14 | 山东大学 | Automatic wall-climbing radar-photoelectric robot system for non-destructive testing and diagnosis of bridge and tunnel structural defects |
KR102601141B1 (en) | 2018-06-22 | 2023-11-13 | 삼성전자주식회사 | Mobile robot and localization method using fusion image sensor and multiple magnetic sensors |
KR102629036B1 (en) * | 2018-08-30 | 2024-01-25 | 삼성전자주식회사 | Robot and the controlling method thereof |
CN110967703A (en) * | 2018-09-27 | 2020-04-07 | 广东美的生活电器制造有限公司 | Indoor navigation method and indoor navigation device using laser radar and camera |
KR101948728B1 (en) | 2018-09-28 | 2019-02-15 | 네이버랩스 주식회사 | Method and system for collecting data |
CN109489660A (en) * | 2018-10-09 | 2019-03-19 | 上海岚豹智能科技有限公司 | Robot localization method and apparatus |
CN109556616A (en) * | 2018-11-09 | 2019-04-02 | 同济大学 | Automatic map-building robot map correction method based on visual tags |
KR20200075140A (en) * | 2018-12-12 | 2020-06-26 | 엘지전자 주식회사 | Artificial intelligence lawn mover robot and controlling method for the same |
US10991117B2 (en) | 2018-12-23 | 2021-04-27 | Samsung Electronics Co., Ltd. | Performing a loop closure detection |
CN111399492A (en) * | 2018-12-28 | 2020-07-10 | 深圳市优必选科技有限公司 | Robot and obstacle sensing method and device thereof |
CN111376258B (en) * | 2018-12-29 | 2021-12-17 | 纳恩博(常州)科技有限公司 | Control method, device, equipment and storage medium |
CN110110245B (en) * | 2019-05-06 | 2021-03-16 | 山东大学 | Dynamic article searching method and device in home environment |
CN110207707B (en) * | 2019-05-30 | 2022-04-12 | 四川长虹电器股份有限公司 | Rapid initial positioning method based on particle filter and robot equipment |
US11363461B2 (en) | 2019-08-23 | 2022-06-14 | Electronics And Telecommunications Research Institute | Method for managing security key of mobile communication system, and apparatus therefor |
CN110928301B (en) * | 2019-11-19 | 2023-06-30 | 北京小米智能科技有限公司 | Method, device and medium for detecting tiny obstacle |
KR102295824B1 (en) | 2019-12-06 | 2021-08-31 | 엘지전자 주식회사 | Mapping method of Lawn Mower Robot. |
CN111399505B (en) * | 2020-03-13 | 2023-06-30 | 浙江工业大学 | Mobile robot obstacle avoidance method based on neural network |
CN113446971B (en) * | 2020-03-25 | 2023-08-08 | 扬智科技股份有限公司 | Space recognition method, electronic device and non-transitory computer readable storage medium |
CN111538329B (en) * | 2020-04-09 | 2023-02-28 | 北京石头创新科技有限公司 | Image viewing method, terminal and cleaning machine |
CN111595328B (en) * | 2020-06-01 | 2023-04-25 | 四川阿泰因机器人智能装备有限公司 | Real obstacle map construction and navigation method and system based on depth camera |
CN111966088B (en) * | 2020-07-14 | 2022-04-05 | 合肥工业大学 | Control system and control method for automatically-driven toy car for children |
TWI749656B (en) * | 2020-07-22 | 2021-12-11 | 英屬維爾京群島商飛思捷投資股份有限公司 | System for establishing positioning map and method thereof |
CN111898557B (en) * | 2020-08-03 | 2024-04-09 | 追觅创新科技(苏州)有限公司 | Map creation method, device, equipment and storage medium of self-mobile equipment |
CN111949929B (en) * | 2020-08-12 | 2022-06-21 | 智能移动机器人(中山)研究院 | Design method of multi-sensor fusion quadruped robot motion odometer |
CN112348893B (en) * | 2020-10-30 | 2021-11-19 | 珠海一微半导体股份有限公司 | Local point cloud map construction method and visual robot |
CN112330808B (en) * | 2020-10-30 | 2024-04-02 | 珠海一微半导体股份有限公司 | Optimization method based on local map and visual robot |
CN112587035B (en) * | 2020-12-08 | 2023-05-05 | 珠海一微半导体股份有限公司 | Control method and system for identifying working scene of mobile robot |
CN112799095B (en) * | 2020-12-31 | 2023-03-14 | 深圳市普渡科技有限公司 | Static map generation method and device, computer equipment and storage medium |
CN113552585B (en) * | 2021-07-14 | 2023-10-31 | 浙江大学 | Mobile robot positioning method based on satellite map and laser radar information |
CN113538579B (en) * | 2021-07-14 | 2023-09-22 | 浙江大学 | Mobile robot positioning method based on unmanned aerial vehicle map and ground binocular information |
CN113643568B (en) * | 2021-07-22 | 2022-11-15 | 吉林大学 | Vehicle intersection collision avoidance system and method based on unmanned aerial vehicle |
KR102654852B1 (en) * | 2021-11-11 | 2024-04-05 | 한국과학기술원 | Mobile robot and position estimation method thereof |
WO2024058402A1 (en) * | 2022-09-15 | 2024-03-21 | 삼성전자주식회사 | Traveling robot for generating travel map and control method thereof |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101000507A (en) * | 2006-09-29 | 2007-07-18 | 浙江大学 | Simultaneous localization and mapping method for a mobile robot in an unknown environment |
WO2009148672A9 (en) * | 2008-03-13 | 2010-01-28 | Battelle Energy Alliance, Llc | System and method for seamless task-directed autonomy for robots |
KR101371038B1 (en) * | 2011-10-26 | 2014-03-10 | 엘지전자 주식회사 | Mobile robot and method for tracking target of the same |
CN104536445A (en) * | 2014-12-19 | 2015-04-22 | 深圳先进技术研究院 | Mobile navigation method and system |
CN105091884A (en) * | 2014-05-08 | 2015-11-25 | 东北大学 | Indoor moving robot route planning method based on sensor network dynamic environment monitoring |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100988568B1 (en) * | 2008-04-30 | 2010-10-18 | 삼성전자주식회사 | Robot and method for building map of the same |
CN102656532B (en) * | 2009-10-30 | 2015-11-25 | 悠진기robot股份公司 | Map generation and updating method for mobile robot position recognition |
KR101715780B1 (en) * | 2010-10-11 | 2017-03-13 | 삼성전자주식회사 | Voxel Map generator And Method Thereof |
KR101750340B1 (en) * | 2010-11-03 | 2017-06-26 | 엘지전자 주식회사 | Robot cleaner and controlling method of the same |
KR20120070291A (en) * | 2010-12-21 | 2012-06-29 | 삼성전자주식회사 | Walking robot and simultaneous localization and mapping method thereof |
KR101985188B1 (en) * | 2012-09-20 | 2019-06-04 | 엘지전자 주식회사 | Moving robot and driving method for the moving robot |
2015
- 2015-11-26 KR KR1020150166310A patent/KR102403504B1/en active IP Right Grant
2016
- 2016-11-24 WO PCT/KR2016/013630 patent/WO2017091008A1/en active Application Filing
- 2016-11-24 CN CN201680069236.8A patent/CN108290294B/en active Active
- 2016-11-24 EP EP16868904.0A patent/EP3367199B1/en active Active
Also Published As
Publication number | Publication date |
---|---|
KR20170061373A (en) | 2017-06-05 |
KR102403504B1 (en) | 2022-05-31 |
EP3367199A1 (en) | 2018-08-29 |
WO2017091008A1 (en) | 2017-06-01 |
EP3367199B1 (en) | 2020-05-06 |
EP3367199A4 (en) | 2018-08-29 |
CN108290294A (en) | 2018-07-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108290294B (en) | Mobile robot and control method thereof | |
CN111837083B (en) | Information processing apparatus, information processing method, and storage medium | |
US10096129B2 (en) | Three-dimensional mapping of an environment | |
CN110522359B (en) | Cleaning robot and control method of cleaning robot | |
US10102429B2 (en) | Systems and methods for capturing images and annotating the captured images with information | |
CN110023867B (en) | System and method for robotic mapping | |
CN110312912B (en) | Automatic vehicle parking system and method | |
CN108051002B (en) | Transport vehicle space positioning method and system based on inertial measurement auxiliary vision | |
US11568559B2 (en) | Localization of a surveying instrument | |
US10921820B2 (en) | Movable object and control method thereof | |
US20200306989A1 (en) | Magnetometer for robot navigation | |
CN110967711A (en) | Data acquisition method and system | |
KR100901311B1 (en) | Autonomous mobile platform | |
CN112740274A (en) | System and method for VSLAM scale estimation on robotic devices using optical flow sensors | |
Chatterjee et al. | Mobile robot navigation | |
CN108544494B (en) | Positioning device, method and robot based on inertia and visual characteristics | |
CN110597265A (en) | Recharging method and device for sweeping robot | |
KR102471487B1 (en) | Cleaning robot and controlling method thereof | |
Tiozzo Fasiolo et al. | Combining LiDAR SLAM and deep learning-based people detection for autonomous indoor mapping in a crowded environment | |
KR102249485B1 (en) | System and method for autonomously traveling mobile robot | |
US20230316567A1 (en) | Localization of a surveying instrument | |
Kim et al. | Design and implementation of mobile indoor scanning system | |
Khan et al. | Multi-sensor SLAM for efficient navigation of a mobile robot | |
US11829154B1 (en) | Systems and methods for robotic navigation, teaching and mapping | |
Zhao et al. | People following system based on lrf |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||