EP3946841A1 - Mobile robot and method of controlling the same - Google Patents

Mobile robot and method of controlling the same

Info

Publication number
EP3946841A1
Authority
EP
European Patent Office
Prior art keywords
lidar
information
sensor
mobile robot
location
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP20776680.9A
Other languages
German (de)
English (en)
Other versions
EP3946841A4 (French)
Inventor
Dongki Noh
Jaekwang Lee
Seungwook LIM
Kahyung CHOI
Gyuho Eoh
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LG Electronics Inc
Original Assignee
LG Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc filed Critical LG Electronics Inc
Publication of EP3946841A1
Publication of EP3946841A4

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • AHUMAN NECESSITIES
    • A47FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47LDOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L9/00Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
    • A47L9/28Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
    • A47L9/2836Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means characterised by the parts which are controlled
    • A47L9/2852Elements for displacement of the vacuum cleaner or the accessories therefor, e.g. wheels, casters or nozzles
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/021Optical sensing devices
    • B25J19/022Optical sensing devices using lasers
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/021Optical sensing devices
    • B25J19/023Optical sensing devices including video camera means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/04Viewing devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/161Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • B25J9/163Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • B25J9/1653Programme controls characterised by the control loop parameters identification, estimation, stiffness, accuracy, error analysis
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • B25J9/1692Calibration of manipulator
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C22/00Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/50Systems of measurement based on relative movement of target
    • G01S17/58Velocity or trajectory determination systems; Sense-of-movement determination systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • AHUMAN NECESSITIES
    • A47FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47LDOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L2201/00Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/04Automatic control of the travelling movement; Automatic obstacle detection
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/008Manipulators for service tasks
    • B25J11/0085Cleaning
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J5/00Manipulators mounted on wheels or on carriages
    • B25J5/007Manipulators mounted on wheels or on carriages mounted on wheels
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • G01C21/206Instruments for performing navigational calculations specially adapted for indoor navigation

Definitions

  • the present invention relates to a mobile robot and a method of controlling the same, and more particularly to technology of a mobile robot creating or learning a map or recognizing a position on the map.
  • Robots have been developed for industrial purposes and have taken charge of a portion of factory automation. In recent years, the number of fields in which robots are utilized has increased. As a result, a medical robot and an aerospace robot have been developed. In addition, a home robot usable at home is being manufactured. Among such robots, a robot capable of autonomously traveling is called a mobile robot.
  • a typical example of a mobile robot used at home is a robot cleaner.
  • the robot cleaner is an apparatus that cleans a predetermined region by sucking dust or foreign matter in the predetermined region while traveling autonomously.
  • the mobile robot is capable of moving autonomously and thus moving freely, and may be provided with a plurality of sensors for evading an obstacle, etc. during traveling in order to travel while evading the obstacle.
  • a map of a traveling zone must be accurately created in order to perform a predetermined task, such as cleaning, and the current location of the mobile robot on the map must be accurately determined in order to move to a specific point in the traveling zone.
  • in the case in which the location of the traveling mobile robot is forcibly changed, however, the mobile robot cannot recognize the unknown current location based on traveling information at the preceding location.
  • a kidnapping situation in which a user lifts and transfers the mobile robot that is traveling may occur.
  • a prior document discloses technology of creating a three-dimensional map using feature points extracted from an image captured in a traveling zone and recognizing an unknown current location using a feature point based on an image captured through a camera at the current location.
  • the three-dimensional map is created using the feature points extracted from the image captured in the traveling zone, and three or more pairs of feature points matched with the feature points in the three-dimensional map are detected from among feature points in an image captured at the unknown current location. Subsequently, by using two-dimensional coordinates of three or more matched feature points in an image captured at the current location, three-dimensional coordinates of three or more matched feature points in the three-dimensional map, and information about the focal distance of the camera at the current location, the distance is calculated from the three or more matched feature points, whereby the current location is recognized.
  • a method of comparing any one image obtained by capturing the same portion in the traveling zone with a recognition image to recognize the location from the feature point of a specific point has a problem in that accuracy in estimating the current location may vary due to environmental changes, such as on/off of lighting in the traveling zone, illuminance change depending on the incidence angle or amount of sunlight, and object location change. It is an object of the present invention to provide location recognition and map creation technology robust to such environmental changes.
  • a mobile robot and a method of controlling the same are capable of creating a map robust to environmental change and accurately recognizing the location on the map by complementarily using different kinds of data acquired utilizing different kinds of sensors.
  • a mobile robot and a method of controlling the same are capable of realizing SLAM technology robust to various environmental changes, such as changes in illuminance and object location, by effectively fusing vision-based location recognition technology using a camera and light detection and ranging (LiDAR)-based location recognition technology using a laser.
  • a mobile robot and a method of controlling the same are capable of performing efficient traveling and cleaning based on a single map capable of coping with various environmental changes.
  • the above and other objects can be accomplished by the provision of a mobile robot including a traveling unit configured to move a main body, a LiDAR sensor configured to acquire geometry information outside the main body, a camera sensor configured to acquire an image of the outside of the main body, and a controller configured to create odometry information based on sensing data of the LiDAR sensor and to perform feature matching between images input from the camera sensor based on the odometry information in order to estimate a current location, whereby the camera sensor and the LiDAR sensor may be effectively fused to accurately perform location estimation.
  • the mobile robot may further include a traveling sensor configured to sense a traveling state based on movement of the main body, wherein the controller may fuse sensing data of the traveling sensor and the result of iterative closest point (ICP) matching of the LiDAR sensor to create the odometry information.
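The claim above fuses traveling-sensor (wheel/gyro) data with the result of ICP matching of the LiDAR sensor to create odometry information. Below is a minimal sketch of one possible fusion rule, assuming 2D pose increments (dx, dy, dtheta) and a hypothetical ICP fitness score used as a blending weight; the patent does not specify the exact fusion rule.

```python
import numpy as np

def blend_angle(a, b, w):
    """Interpolate two angles with weight w on b, handling wrap-around."""
    diff = (b - a + np.pi) % (2 * np.pi) - np.pi
    return a + w * diff

def fuse_odometry(wheel_delta, icp_delta, icp_fitness, fitness_threshold=0.5):
    """Fuse a wheel-odometry pose increment with a LiDAR ICP pose increment.

    wheel_delta, icp_delta: (dx, dy, dtheta) increments in the robot frame.
    icp_fitness: assumed 0..1 score of how well the two scans aligned.
    Returns the fused (dx, dy, dtheta) increment.
    """
    if icp_fitness < fitness_threshold:
        # Poor scan alignment (e.g. a featureless corridor): trust the wheels.
        return wheel_delta
    w = icp_fitness  # simple confidence weighting; a real system may use covariances
    dx = (1 - w) * wheel_delta[0] + w * icp_delta[0]
    dy = (1 - w) * wheel_delta[1] + w * icp_delta[1]
    dth = blend_angle(wheel_delta[2], icp_delta[2], w)
    return (dx, dy, dth)
```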
  • the controller may include a LiDAR service module configured to receive the sensing data of the LiDAR sensor and to discriminate the amount of location displacement using geometry information based on the sensing data of the LiDAR sensor and previous location information, and a vision service module configured to receive the amount of location displacement from the LiDAR service module, to receive an image from the camera sensor, to discriminate the location of a feature point through matching between a feature point extracted from the current image based on the amount of location displacement and a feature point extracted from the previous location, and to estimate the current location based on the discriminated location of the feature point.
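The controller architecture described above splits the work between a LiDAR service module and a vision service module. The following is a minimal sketch of that data flow, with hypothetical class and method names (the patent does not define an API) and placeholder bodies standing in for scan matching and feature matching.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float = 0.0
    y: float = 0.0
    theta: float = 0.0

class LidarService:
    """Receives LiDAR sensing data and discriminates the amount of location
    displacement from the previous location (e.g. via scan matching)."""

    def __init__(self):
        self.prev_scan = None

    def estimate_displacement(self, scan, prev_pose: Pose) -> Pose:
        # Placeholder: a real module would align self.prev_scan with `scan`
        # (for example with ICP) and return the resulting pose increment.
        self.prev_scan = scan
        return Pose(0.0, 0.0, 0.0)

class VisionService:
    """Receives the displacement and a camera image, matches feature points
    between the previous and current images, and estimates the current location."""

    def estimate_location(self, image, prev_pose: Pose, delta: Pose) -> Pose:
        # Placeholder: predict where previously seen feature points should lie
        # using `delta`, match them against features extracted from `image`,
        # and refine the pose; only the prediction step is shown here.
        return Pose(prev_pose.x + delta.x,
                    prev_pose.y + delta.y,
                    prev_pose.theta + delta.theta)

# Per-image flow corresponding to the description above:
#   delta   = lidar_service.estimate_displacement(scan, prev_pose)
#   current = vision_service.estimate_location(image, prev_pose, delta)
```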
  • the mobile robot may further include a storage configured to store node information including the calculated current location information and a map including the node information.
  • the vision service module may transmit the node information to the LiDAR service module, and the LiDAR service module may reflect, in the node information, the amount of location displacement that the mobile robot has moved while the vision service module was calculating the current location, in order to discriminate the current location of the mobile robot.
  • the controller may further include a traveling service module configured to read sensing data of the traveling sensor, the traveling service module may transmit the sensing data of the traveling sensor to the LiDAR service module, and the LiDAR service module may fuse odometry information based on the sensing data of the traveling sensor and the ICP result of the LiDAR sensor to create the odometry information.
  • the controller may calculate the current location based on the sensing data of the LiDAR sensor in an area having an illuminance less than a reference value, and may perform loop closing to correct an error when entering an area having an illuminance equal to or greater than the reference value.
  • the controller may perform iterative closest point (ICP) matching between a current node and an adjacent node based on the sensing data of the LiDAR sensor to add a correlation between nodes.
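The ICP matching mentioned above can be illustrated with a minimal point-to-point ICP sketch, assuming 2D scans represented as NumPy arrays; a practical implementation would add outlier rejection and convergence checks, which are omitted here.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(source, target, iterations=20):
    """Point-to-point ICP aligning `source` (N,2) onto `target` (M,2).

    Returns a 2x2 rotation R and translation t such that source @ R.T + t
    approximately overlays target.
    """
    src = source.copy()
    R_total = np.eye(2)
    t_total = np.zeros(2)
    tree = cKDTree(target)
    for _ in range(iterations):
        # 1. Find the closest target point for every source point.
        _, idx = tree.query(src)
        matched = target[idx]
        # 2. Solve for the rigid transform via SVD of the cross-covariance.
        src_c = src.mean(axis=0)
        tgt_c = matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against a reflection solution
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        # 3. Apply the incremental transform and accumulate the total one.
        src = src @ R.T + t
        R_total = R @ R_total
        t_total = R @ t_total + t
    return R_total, t_total
```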
  • the above and other objects can be accomplished by the provision of a method of controlling a mobile robot, the method including acquiring geometry information outside a main body through a LiDAR sensor, acquiring an image of the outside of the main body through a camera sensor, creating odometry information based on sensing data of the LiDAR sensor, performing feature matching between images input from the camera sensor based on the odometry information, and estimating the current location based on the result of the feature matching.
  • the method may further include calculating the uncertainty of the estimated current location based on geometry information obtained from the sensing data of the LiDAR sensor.
  • the method may further include sensing a traveling state based on movement of the main body through a traveling sensor and matching the sensing data of the LiDAR sensor according to an iterative closest point (ICP) algorithm.
  • the creating odometry information may include fusing sensing data of the traveling sensor and a result of iterative closest point (ICP) matching of the LiDAR sensor to create the odometry information.
  • the creating odometry information may include a LiDAR service module of a controller receiving the sensing data of the LiDAR sensor and the LiDAR service module discriminating the amount of location displacement using the geometry information and previous location information.
  • the performing feature matching may include a vision service module of the controller receiving the amount of location displacement from the LiDAR service module, the vision service module receiving an image from the camera sensor, and the vision service module discriminating location of a feature point through matching between a feature point extracted from the current image based on the amount of location displacement and a feature point extracted from the previous location.
  • Node information including the calculated current location information may be stored in a storage, and may be registered on a map.
  • the method may further include the vision service module transmitting the node information to the LiDAR service module, the LiDAR service module calculating the amount of location displacement that the mobile robot has moved while the vision service module calculates the current location, and the LiDAR service module reflecting the calculated amount of location displacement in the node information to discriminate the current location of the mobile robot.
  • the creating odometry information may include the LiDAR service module fusing odometry information based on sensing data of the traveling sensor and an ICP result of the LiDAR sensor to create the odometry information.
  • the traveling service module of the controller may transmit the sensing data of the traveling sensor to the LiDAR service module.
  • the method may further include calculating the current location based on the sensing data of the LiDAR sensor in an area having an illuminance less than a reference value and performing loop closing to correct an error when the main body moves and enters an area having an illuminance equal to or greater than the reference value.
  • the method may further include, in the case in which feature matching between images input from the camera sensor fails, performing iterative closest point (ICP) matching between a current node and an adjacent node based on the sensing data of the LiDAR sensor to add a correlation between nodes.
  • according to the embodiments of the present invention, it is possible to realize SLAM technology robust to various environmental changes, such as changes in illuminance and object location, by effectively fusing vision-based location recognition technology using a camera and LiDAR-based location recognition technology using a laser.
  • FIG. 1 is a perspective view showing a mobile robot according to an embodiment of the present invention and a charging station for charging the mobile robot;
  • FIG. 2 is a view showing the upper part of the mobile robot shown in FIG. 1;
  • FIG. 3 is a view showing the front part of the mobile robot shown in FIG. 1;
  • FIG. 4 is a view showing the bottom part of the mobile robot shown in FIG. 1;
  • FIG. 5 is a block diagram showing a control relationship between main components of the mobile robot according to the embodiment of the present invention.
  • FIG. 6 is a flowchart showing a method of controlling a mobile robot according to an embodiment of the present invention.
  • FIGS. 7 to 10 are reference views illustrating the control method of FIG. 6;
  • FIG. 11 is a flowchart showing a method of controlling a mobile robot according to another embodiment of the present invention.
  • FIGS. 12 and 13 are flowcharts showing a software process of the method of controlling the mobile robot according to the embodiment of the present invention.
  • FIGS. 14 to 18 are reference views illustrating the method of controlling the mobile robot according to the embodiment of the present invention.
  • FIG. 19 is a reference view illustrating simultaneous localization and mapping (SLAM) according to an embodiment of the present invention.
  • FIG. 20 is a reference view illustrating SLAM according to the embodiment of the present invention.
  • a mobile robot 100 means a robot capable of autonomously moving using wheels or the like, and may be, for example, a home helper robot or a robot cleaner.
  • a robot cleaner having a cleaning function, which is a kind of mobile robot, will be described by way of example with reference to the drawings; however, the present invention is not limited thereto.
  • FIG. 1 is a perspective view showing a mobile robot according to an embodiment of the present invention and a charging station for charging the mobile robot.
  • FIG. 2 is a view showing the upper part of the mobile robot shown in FIG. 1
  • FIG. 3 is a view showing the front part of the mobile robot shown in FIG. 1
  • FIG. 4 is a view showing the bottom part of the mobile robot shown in FIG. 1.
  • FIG. 5 is a block diagram showing a control relationship between main components of the mobile robot according to the embodiment of the present invention.
  • the mobile robot 100 includes a traveling unit 160 for moving a main body 110.
  • the traveling unit 160 includes at least one driving wheel 136 for moving the main body 110.
  • the traveling unit 160 includes a driving motor (not shown) connected to the driving wheel 136 to rotate the driving wheel.
  • the driving wheels 136 may be provided on the left and right sides of the main body 110; hereinafter, these will be referred to as a left wheel 136(L) and a right wheel 136(R), respectively.
  • the left wheel 136(L) and the right wheel 136(R) may be driven by a single driving motor, but, if necessary, may be provided with a left wheel driving motor for driving the left wheel 136(L) and a right wheel driving motor for driving the right wheel 136(R), respectively.
  • the driving direction of the main body 110 may be switched to the left or right side based on a difference in rotational velocity of the left wheel 136(L) and the right wheel 136(R).
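As an illustration of how the difference in rotational velocity of the two wheels switches the driving direction, the following sketch converts assumed left and right wheel velocities into the linear and angular velocity of the main body (standard differential-drive kinematics; `wheel_base` is an assumed robot constant, not a value given in the patent).

```python
def differential_drive(v_left, v_right, wheel_base):
    """Convert left/right wheel velocities into body linear and angular velocity.

    wheel_base: distance between the two driving wheels (assumed constant).
    A positive angular velocity turns the robot toward the slower wheel side.
    """
    v = (v_right + v_left) / 2.0             # forward velocity of the main body
    omega = (v_right - v_left) / wheel_base  # yaw rate from the velocity difference
    return v, omega
```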
  • the mobile robot 100 includes a service unit 150 for providing a predetermined service.
  • the service unit 150 may be configured to provide a user with a housework service, such as cleaning (for example, sweeping, suction, or mopping), dish washing, cooking, washing, or refuse disposal.
  • the service unit 150 may perform a security function of sensing trespass, danger, etc.
  • the mobile robot 100 may clean a floor through the service unit 150 while moving in a traveling zone.
  • the service unit 150 may include a suction device for sucking foreign matter, brushes 184 and 185 for performing sweeping, a dust container (not shown) for storing the foreign matter collected by the suction device or the brushes, and/or a mopping unit (not shown) for performing mopping.
  • a suction port 150h for sucking air may be formed in the bottom part of the main body 110.
  • a suction device (not shown) for supplying suction force for sucking air through the suction port 150h and a dust container (not shown) for collecting dust sucked through the suction port 150h together with the air may be provided.
  • the main body 110 may include a case 111 defining a space in which various components constituting the mobile robot 100 are accommodated.
  • the case 111 may have an opening for insertion and removal of the dust container, and a dust container cover 112 for opening and closing the opening may be rotatably provided in the case 111.
  • the main body 110 may be provided with a main brush 154 of a roll type having brushes exposed through the suction port 150h, and an auxiliary brush 155 which is located on the front side of the bottom part of the main body 110 and has a brush formed of a plurality of radially extending wings. Due to the rotation of the brushes 154 and 155, dust is separated from a floor in a traveling zone, and the dust separated from the floor is sucked through the suction port 150h and collected in the dust container.
  • a battery 138 may supply power not only for the driving motor but also for the overall operation of the mobile robot 100.
  • the mobile robot 100 may travel to return to a charging station 200 for charging. During returning, the mobile robot 100 may automatically detect the location of the charging station 200.
  • the charging station 200 may include a signal transmitter (not shown) for transmitting a certain return signal.
  • the return signal may be an ultrasound signal or an infrared signal; however, the present invention is not limited thereto.
  • the mobile robot 100 may include a signal sensor (not shown) for receiving the return signal.
  • the charging station 200 may transmit an infrared signal through the signal transmitter, and the signal sensor may include an infrared sensor for sensing the infrared signal.
  • the mobile robot 100 moves to the location of the charging station 200 according to the infrared signal transmitted from the charging station 200 and docks with the charging station 200. Due to docking, charging may be achieved between a charging terminal 133 of the mobile robot 100 and a charging terminal 210 of the charging station 200.
  • the mobile robot 100 may include a sensing unit 170 for sensing information about the inside/outside of the mobile robot 100.
  • the sensing unit 170 may include one or more sensors 171 and 175 for sensing various kinds of information about a traveling zone and an image acquisition unit 120 for acquiring image information about the traveling zone.
  • the image acquisition unit 120 may be provided separately outside the sensing unit 170.
  • the mobile robot 100 may map the traveling zone based on the information sensed by the sensing unit 170. For example, the mobile robot 100 may perform vision-based location recognition and map creation based on ceiling information of the traveling zone acquired by the image acquisition unit 120. In addition, the mobile robot 100 may perform location recognition and map creation based on a light detection and ranging (LiDAR) sensor 175 using a laser.
  • the mobile robot 100 may effectively fuse location recognition technology based on vision using a camera and location recognition technology based on LiDAR using a laser to perform location recognition and map creation robust to an environmental change, such as illuminance change or article location change.
  • the image acquisition unit 120, which captures an image of the traveling zone, may include one or more camera sensors for acquiring an image of the outside of the main body 110.
  • the image acquisition unit 120 may include a camera module.
  • the camera module may include a digital camera.
  • the digital camera may include at least one optical lens, an image sensor (e.g., a CMOS image sensor) including a plurality of photodiodes (e.g., pixels) for forming an image using light passing through the optical lens, and a digital signal processor (DSP) for forming an image based on a signal output from the photodiodes.
  • the digital signal processor can create not only a still image but also a moving image composed of frames of still images.
  • the image acquisition unit 120 may include a front camera sensor 120a configured to acquire an image of the front of the main body and an upper camera sensor 120b provided at the upper part of the main body 110 to acquire an image of a ceiling in the traveling zone.
  • the present invention is not limited as to the location and the capture range of the image acquisition unit 120.
  • the mobile robot 100 may include only the upper camera sensor 120b for acquiring an image of the ceiling in the traveling zone in order to perform vision-based location recognition and traveling.
  • the image acquisition unit 120 of the mobile robot 100 may include a camera sensor (not shown) disposed inclined to one surface of the main body 110 to simultaneously capture front and upper images. That is, it is possible to capture both front and upper images using a single camera sensor.
  • a controller 140 may divide images captured and acquired by the camera into a front image and an upper image based on field of view.
  • the separated front image may be used for vision-based object recognition, like an image acquired by the front camera sensor 120a.
  • the separated upper image may be used for vision-based location recognition and traveling, like an image acquired by the upper camera sensor 120b.
  • the mobile robot 100 may perform vision SLAM of comparing a surrounding image with pre-stored image-based information or comparing acquired images with each other to recognize the current location.
  • the image acquisition unit 120 may include a plurality of front camera sensors 120a and/or a plurality of upper camera sensors 120b.
  • the image acquisition unit 120 may include a plurality of camera sensors (not shown) configured to simultaneously capture front and upper images.
  • a camera may be installed at a portion (for example, the front part, the rear part, or the bottom surface) of the mobile robot to continuously capture images during cleaning.
  • Several cameras may be installed at each portion of the mobile robot to improve capturing efficiency. Images captured by the camera may be used to recognize the kind of a material, such as dust, hair, or a floor, present in a corresponding space, to determine whether cleaning has been performed, or to determine when cleaning has been performed.
  • the front camera sensor 120a may capture an obstacle present in front of the mobile robot 100 in the traveling direction thereof or the state of an area to be cleaned.
  • the image acquisition unit 120 may continuously capture a plurality of images of the surroundings of the main body 110, and the acquired images may be stored in a storage 130.
  • the mobile robot 100 may use a plurality of images in order to improve accuracy in obstacle recognition, or may select one or more from among a plurality of images in order to use effective data, thereby improving accuracy in obstacle recognition.
  • the sensing unit 170 may include a LiDAR sensor 175 for acquiring information about geometry outside the main body 110 using a laser.
  • the LiDAR sensor 175 may output a laser, may provide information about the distance, location, direction, and material of an object that has reflected the laser, and may acquire geometry information of a traveling zone.
  • the mobile robot 100 may obtain 360-degree geometry information using the LiDAR sensor 175.
  • the mobile robot 100 may determine the distance, location, and direction of objects sensed by the LiDAR sensor 175 to create a map.
  • the mobile robot 100 may analyze a laser reception pattern, such as time difference or signal intensity of a laser reflected and received from the outside, to acquire geometry information of the traveling zone.
  • the mobile robot 100 may create a map using the geometry information acquired through the LiDAR sensor 175.
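As an illustration of turning 360-degree LiDAR measurements into geometry information usable for a map, the following sketch converts assumed (range, bearing) readings into 2D points in the map frame given the robot pose; it is a generic conversion, not the patent's implementation.

```python
import numpy as np

def scan_to_map_points(ranges, angles, pose):
    """Convert a 360-degree LiDAR scan into 2D points in the map frame.

    ranges, angles: arrays of measured distances and their beam angles (rad).
    pose: (x, y, theta) of the robot in the map frame.
    Invalid returns (inf or non-positive range) are dropped before conversion.
    """
    ranges = np.asarray(ranges, dtype=float)
    angles = np.asarray(angles, dtype=float)
    valid = np.isfinite(ranges) & (ranges > 0.0)
    r, a = ranges[valid], angles[valid]
    # Points in the robot frame.
    xs = r * np.cos(a)
    ys = r * np.sin(a)
    # Rigid transform into the map frame using the robot pose.
    x, y, th = pose
    mx = x + xs * np.cos(th) - ys * np.sin(th)
    my = y + xs * np.sin(th) + ys * np.cos(th)
    return np.column_stack((mx, my))
```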
  • the mobile robot 100 may perform LiDAR SLAM of comparing surrounding geometry information acquired at the current location through the LiDAR sensor 175 with pre-stored LiDAR sensor-based geometry information or comparing acquired pieces of geometry information with each other to recognize the current location.
  • the mobile robot 100 may effectively fuse location recognition technology based on vision using a camera and location recognition technology based on LiDAR using a laser to perform location recognition and map creation robust to environmental change, such as illuminance change or article location change.
  • the sensing unit 170 may include sensors 171, 172, and 179 for sensing various data related to the operation and state of the mobile robot.
  • the sensing unit 170 may include an obstacle sensor 171 for sensing a forward obstacle.
  • the sensing unit 170 may include a cliff sensor 172 for sensing a cliff on the floor in the traveling zone and a lower camera sensor 179 for acquiring a bottom image.
  • the obstacle sensor 171 may include a plurality of sensors installed at the outer circumferential surface of the mobile robot 100 at predetermined intervals.
  • the obstacle sensor 171 may include an infrared sensor, an ultrasonic sensor, an RF sensor, a geomagnetic sensor, and a position sensitive device (PSD) sensor.
  • the location and kind of the sensors included in the obstacle sensor 171 may be changed depending on the type of the mobile robot, and the obstacle sensor 171 may include a wider variety of sensors.
  • the obstacle sensor 171 is a sensor for sensing the distance to a wall or an obstacle in a room; however, the present invention is not limited as to the kind thereof.
  • an ultrasonic sensor will be described by way of example.
  • the obstacle sensor 171 senses an object, specifically an obstacle, present in the traveling (moving) direction of the mobile robot, and transmits obstacle information to the controller 140. That is, the obstacle sensor 171 may sense the movement path of the mobile robot, a protrusion present ahead of the mobile robot or beside the mobile robot, or fixtures, furniture, wall surfaces, or wall corners in a house, and may transmit information thereabout to the controller.
  • the controller 140 may sense the location of the obstacle based on at least two signals received through the ultrasonic sensor, and may control the motion of the mobile robot 100 based on the sensed location of the obstacle.
  • the obstacle sensor 171, which is provided at the outer surface of the case 111, may include a transmitter and a receiver.
  • the ultrasonic sensor may include at least one transmitter and at least two receivers, which cross each other. Consequently, it is possible to transmit signals at various angles and to receive the signals reflected by the obstacle at various angles.
  • the signal received from the obstacle sensor 171 may pass through a signal processing process, such as amplification and filtering, and then the distance and direction to the obstacle may be calculated.
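As an illustration of the distance calculation after signal processing, the following sketch estimates the distance to an obstacle from an assumed ultrasonic round-trip echo time; the direction could then be estimated by comparing the readings of two or more receivers, which is not shown here.

```python
def ultrasonic_distance(echo_time_s, temperature_c=20.0):
    """Estimate the distance to an obstacle from an ultrasonic echo time.

    echo_time_s: round-trip time between transmission and reception (seconds).
    The speed of sound is approximated as a function of air temperature.
    """
    speed_of_sound = 331.3 + 0.606 * temperature_c  # m/s
    return speed_of_sound * echo_time_s / 2.0       # halve the round-trip distance
```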
  • the sensing unit 170 may further include a traveling sensor for sensing the traveling state of the mobile robot 100 based on driving of the main body 110 and outputting operation information.
  • a gyro sensor, a wheel sensor, or an acceleration sensor may be used as the traveling sensor.
  • Data sensed by at least one of the traveling sensors or data calculated based on data sensed by at least one of the traveling sensors may constitute odometry information.
  • the gyro sensor senses the rotational direction of the mobile robot 100 and detects the rotational angle of the mobile robot 100 when the mobile robot 100 moves in an operation mode.
  • the gyro sensor detects the angular velocity of the mobile robot 100, and outputs a voltage value proportional to the angular velocity.
  • the controller 140 calculates the rotational direction and the rotational angle of the mobile robot 100 using the voltage value output from the gyro sensor.
  • the wheel sensor is connected to each of the left wheel 136(L) and the right wheel 136(R) to sense the number of rotations of the wheels.
  • the wheel sensor may be an encoder.
  • the encoder senses and outputs the number of rotations of each of the left wheel 136(L) and the right wheel 136(R).
  • the controller 140 may calculate the rotational velocity of each of the left and right wheels using the number of rotations thereof. In addition, the controller 140 may calculate the rotational angle of each of the left wheel 136(L) and the right wheel 136(R) using the difference in the number of rotations therebetween.
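The wheel-sensor processing described above, where the travelled distance follows from the rotation count of each wheel and the heading change follows from the difference between the two wheels, amounts to standard dead reckoning. Below is a minimal sketch, assuming encoder tick counts and assumed constants for wheel radius, ticks per revolution, and wheel base.

```python
import math

def wheel_odometry(ticks_left, ticks_right, ticks_per_rev, wheel_radius,
                   wheel_base, pose):
    """Dead-reckoning pose update from encoder tick counts of the driving wheels.

    ticks_per_rev, wheel_radius, wheel_base: assumed robot constants.
    pose: current (x, y, theta); returns the updated pose.
    """
    # Distance travelled by each wheel, derived from its rotation count.
    d_left = 2.0 * math.pi * wheel_radius * ticks_left / ticks_per_rev
    d_right = 2.0 * math.pi * wheel_radius * ticks_right / ticks_per_rev
    d_center = (d_left + d_right) / 2.0        # displacement of the main body
    d_theta = (d_right - d_left) / wheel_base  # rotation from the wheel difference
    x, y, theta = pose
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta + math.pi) % (2.0 * math.pi) - math.pi
    return x, y, theta
```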
  • the acceleration sensor senses a change in the velocity of the mobile robot 100 caused by, for example, starting, stopping, changing direction, or colliding with an object.
  • the acceleration sensor may be attached to a position adjacent to a main wheel or an auxiliary wheel to detect slip or idling of the wheel.
  • the acceleration sensor may be mounted in the controller 140 to sense a change in velocity of the mobile robot 100. That is, the acceleration sensor detects an impulse depending on a change in velocity of the mobile robot 100, and outputs a voltage value corresponding thereto. Consequently, the acceleration sensor may perform the function of an electronic bumper.
  • the controller 140 may calculate a change in location of the mobile robot 100 based on the operation information output from the traveling sensor.
  • the location recognized in this manner is a relative location, as compared to the absolute location recognized using image information.
  • the mobile robot may improve the performance of location recognition using image information and obstacle information through the relative location recognition.
  • the mobile robot 100 may include a power supply (not shown) having a rechargeable battery 138 to supply power to the robot cleaner.
  • the power supply may supply driving power and operating power to the respective components of the mobile robot 100, and may be charged with charge current from the charging station 200 in the case in which the remaining quantity of the battery is insufficient.
  • the mobile robot 100 may further include a battery sensor (not shown) for sensing the charged state of the battery 138 and transmitting the result of sensing to the controller 140.
  • the battery 138 is connected to the battery sensor, and the remaining quantity and charged state of the battery are transmitted to the controller 140.
  • the remaining quantity of the battery may be displayed on the screen of an output unit (not shown).
  • the mobile robot 100 includes a manipulator 137 for allowing an ON/OFF command or various commands to be input. Various control commands necessary for overall operation of the mobile robot 100 may be input through the manipulator 137.
  • the mobile robot 100 may include an output unit (not shown), and may display schedule information, a battery state, an operation mode, an operation state, or an error state through the output unit.
  • the mobile robot 100 includes a controller 140 for processing and determining various kinds of information, for example, recognizing current location thereof, and a storage 130 for storing various kinds of data.
  • the mobile robot 100 may further include a communication unit 190 for transmitting and receiving data to and from other devices.
  • An external terminal, which is one of the devices that communicate with the mobile robot 100, may have an application for controlling the mobile robot 100, may display a map of a traveling zone to be cleaned by the mobile robot 100 through execution of the application, and may designate a specific area to be cleaned on the map.
  • Examples of the external terminal may include a remote controller equipped with an application for map setting, a PDA, a laptop computer, a smartphone, or a tablet computer.
  • the external terminal may communicate with the mobile robot 100 to display the current location of the mobile robot together with the map, and may display information about a plurality of areas. In addition, the external terminal displays the updated location of the mobile robot as the mobile robot travels.
  • the controller 140 controls the sensing unit 170, the manipulator 137, and the traveling unit 160, which constitute the mobile robot 100, thereby controlling the overall operation of the mobile robot 100.
  • the storage 130 stores various kinds of information necessary for controlling the mobile robot 100, and may include a volatile or non-volatile recording medium.
  • the storage medium may store data that can be read by a microprocessor.
  • the present invention is not limited as to the kind or implementation scheme thereof.
  • the storage 130 may store a map of the traveling zone.
  • the map may be input by an external terminal or a server capable of exchanging information with the mobile robot 100 through wired or wireless communication, or may be created by the mobile robot 100 through self-learning.
  • Locations of rooms in the traveling zone may be displayed on the map.
  • current location of the mobile robot 100 may be displayed on the map, and the current location of the mobile robot 100 on the map may be updated during traveling.
  • the external terminal stores a map identical to the map stored in the storage 130.
  • the storage 130 may store cleaning history information.
  • the cleaning history information may be created whenever cleaning is performed.
  • the map about the traveling zone stored in the storage 130 may be a navigation map used for traveling during cleaning, a simultaneous localization and mapping (SLAM) map used for location recognition, a learning map using information stored and learned when the mobile robot collides with an obstacle, etc. at the time of cleaning, a global pose map used for global pose recognition, or an obstacle recognition map having information about recognized obstacles recorded therein.
  • the maps may not be clearly classified by purpose, although the maps may be partitioned by purpose, stored in the storage 130, and managed, as described above. For example, a plurality of pieces of information may be stored in a single map so as to be used for at least two purposes.
  • the controller 140 may include a traveling control module 141, a location recognition module 142, a map creation module 143, and an obstacle recognition module 144.
  • the traveling control module 141 controls traveling of the mobile robot 100, and controls driving of the traveling unit 160 depending on traveling setting.
  • the traveling control module 141 may determine the traveling path of the mobile robot 100 based on the operation of the traveling unit 160.
  • the traveling control module 141 may determine the current or past movement velocity, the traveling distance, etc. of the mobile robot 100 based on the rotational velocity of the driving wheel 136, and may also determine the current or past direction change of the mobile robot 100 based on the rotational direction of each of the wheels 136(L) and 136(R).
  • the location of the mobile robot 100 on the map may be updated based on the determined traveling information of the mobile robot 100.
  • the map creation module 143 may create a map of a traveling zone.
  • the map creation module 143 may process the image acquired through the image acquisition unit 120 to prepare a map.
  • the map creation module may prepare a map corresponding to a traveling zone and a cleaning map corresponding to a cleaning area.
  • the map creation module 143 may process an image acquired through the image acquisition unit 120 at each location and may connect the same to the map to recognize a global pose.
  • the map creation module 143 may prepare a map based on information acquired through the LiDAR sensor 175, and may recognize the location of the mobile robot based on information acquired through the LiDAR sensor 175 at each location.
  • the map creation module 143 may prepare a map based on information acquired through the image acquisition unit 120 and the LiDAR sensor 175, and may perform location recognition.
  • the location recognition module 142 estimates and recognizes the current location of the mobile robot.
  • the location recognition module 142 may determine the location of the mobile robot in connection with the map creation module 143 using image information of the image acquisition unit 120, and may thus estimate and recognize the current location of the mobile robot even in the case in which the location of the mobile robot 100 is abruptly changed.
  • the mobile robot 100 may perform location recognition through the location recognition module 142 during continuous traveling, and may learn a map and may estimate the current location thereof through the traveling control module 141, the map creation module 143, and the obstacle recognition module 144 without the location recognition module 142.
  • the image acquisition unit 120 acquires images of the surroundings of the mobile robot 100.
  • an image acquired by the image acquisition unit 120 will be defined as an "acquisition image".
  • An acquisition image includes various features, such as lighting located at the ceiling, an edge, a corner, a blob, and a ridge.
  • the map creation module 143 detects features from each acquisition image.
  • Various feature detection methods of extracting feature points from an image are well known in the field of computer vision.
  • Various feature detectors suitable for extracting these feature points are known. For example, there are Canny, Sobel, Harris & Stephens/Plessey, SUSAN, Shi & Tomasi, Level curve curvature, FAST, Laplacian of Gaussian, Difference of Gaussians, Determinant of Hessian, MSER, PCBR, and Gray-level blobs detectors.
  • the map creation module 143 calculates a descriptor based on each feature point. For feature detection, the map creation module 143 may convert a feature point into a descriptor using a scale invariant feature transform (SIFT) method. The descriptor may be expressed as an n-dimensional vector.
  • SIFT may detect invariant features with respect to the scale, rotation, and brightness change of an object to be captured, and thus may detect invariant features (i.e. a rotation-invariant feature) even when the same area is captured while the pose of the mobile robot 100 is changed.
  • in addition to SIFT, other feature extraction techniques, such as Histogram of Oriented Gradients (HOG), Haar features, Ferns, Local Binary Pattern (LBP), and Modified Census Transform (MCT), may be used.
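As an illustration of SIFT-based descriptor calculation, the following sketch uses OpenCV to detect feature points and compute 128-dimensional descriptors from an acquisition image; the use of OpenCV is an assumption for the example, not part of the patent.

```python
import cv2

def extract_sift_features(image_bgr):
    """Detect feature points and compute SIFT descriptors for an acquisition image.

    Returns (keypoints, descriptors); each descriptor is a 128-dimensional vector.
    Requires opencv-python (cv2.SIFT_create is available in OpenCV >= 4.4).
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors
```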
  • the map creation module 143 may classify at least one descriptor for each acquisition image into a plurality of groups according to a predetermined sub-classification rule based on descriptor information obtained through an acquisition image of each location, and may convert descriptors included in the same group into sub-representation descriptors according to a predetermined sub-representation rule.
  • the map creation module may classify all descriptors collected from acquisition images in a predetermined zone, such as a room, into a plurality of groups according to the predetermined sub-classification rule, and may convert descriptors included in the same group into sub-representation descriptors according to the predetermined sub-representation rule.
  • the map creation module 143 may calculate feature distribution of each location through the above process.
  • the feature distribution of each location may be expressed as a histogram or an n-dimensional vector.
  • the map creation module 143 may estimate an unknown current location of the mobile robot based on the descriptor calculated from each feature point, not according to the predetermined sub-classification rule and the predetermined sub-representation rule.
  • the current location of the mobile robot 100 may be estimated based on data, such as pre-stored descriptors or sub-representation descriptors.
  • the mobile robot 100 acquires an acquisition image through the image acquisition unit 120 at the unknown current location.
  • Various features such as lighting located at the ceiling, an edge, a corner, a blob, and a ridge, are identified through the image.
  • the location recognition module 142 detects features from the acquisition image.
  • Various methods of detecting features from an image in the field of computer vision are well known and various feature detectors suitable for feature detection have been described above.
  • the location recognition module 142 calculates a recognition descriptor through a recognition descriptor calculation step based on each recognition feature point.
  • the terms "recognition feature point" and "recognition descriptor" are used to describe the process performed by the location recognition module 142, and to distinguish it from the terms that describe the process performed by the map creation module 143. That is, the features outside the mobile robot 100 may be defined by different terms.
  • the location recognition module 142 may convert a recognition feature point into a recognition descriptor using the scale invariant feature transform (SIFT) method.
  • the recognition descriptor may be expressed as an n-dimensional vector.
  • SIFT is an image recognition method of selecting a feature point that can be easily identified, such as a corner point, from an acquisition image and calculating, for each feature point, an n-dimensional vector in which the abruptness of the brightness change in each direction is the numerical value of the corresponding dimension, based on the distribution characteristics of the brightness gradient of pixels belonging to a predetermined zone around the feature point (i.e. the direction in which brightness changes and the abruptness of the change).
  • the location recognition module 142 performs conversion into information (sub-recognition feature distribution) comparable with location information that becomes a comparison target (for example, feature distribution of each location) according to a predetermined sub-conversion rule based on information about at least one recognition descriptor obtained through the acquisition image of the unknown current location.
  • the feature distribution of each location may be compared with the recognition feature distribution according to a predetermined sub-comparison rule to calculate the similarity therebetween. A similarity (probability) may be calculated for each location, and the location having the greatest calculated probability may be determined to be the current location of the mobile robot.
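  • A minimal sketch of this comparison step is given below; the cosine-similarity measure, the normalization into pseudo-probabilities, and all names are assumptions chosen for illustration, not the sub-comparison rule claimed here.

```python
# Illustrative sketch: pick the most likely location by comparing the recognition
# feature distribution against the pre-stored feature distribution of each location.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def estimate_location(recognition_distribution, location_distributions):
    """location_distributions: dict mapping location id -> stored distribution vector."""
    scores = {loc: cosine_similarity(recognition_distribution, dist)
              for loc, dist in location_distributions.items()}
    total = sum(scores.values()) or 1.0          # distributions are non-negative histograms
    probabilities = {loc: s / total for loc, s in scores.items()}
    best = max(probabilities, key=probabilities.get)
    return best, probabilities[best]

if __name__ == "__main__":
    stored = {"room_a": np.array([0.6, 0.3, 0.1]), "room_b": np.array([0.1, 0.2, 0.7])}
    current = np.array([0.55, 0.35, 0.10])
    print(estimate_location(current, stored))    # expected: room_a with the highest probability
```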
  • the controller 140 may divide a traveling zone to create a map including a plurality of areas, or may recognize the current location of the main body 110 based on a pre-stored map.
  • the controller 140 may fuse information acquired through the image acquisition unit 120 and the LiDAR sensor 175 to prepare a map, and may perform location recognition.
  • the controller 140 may transmit the created map to the external terminal or the server through the communication unit 190.
  • the controller 140 may store the map in the storage, as described above.
  • the controller 140 may transmit updated information to the external terminal such that the external terminal and the mobile robot 100 have the same map.
  • the mobile robot 100 may clean a designated area according to a cleaning command from the external terminal, and the current location of the mobile robot may be displayed on the external terminal.
  • the cleaning area on the map may be divided into a plurality of areas, and the map may include a connection path for interconnecting the areas and information about obstacles in the areas.
  • the controller 140 determines whether the location on the map and the current location of the mobile robot coincide with each other.
  • the cleaning command may be input from the remote controller, the manipulator, or the external terminal.
  • the controller 140 may recognize the current location to restore the current location of the mobile robot 100, and may control the traveling unit 160 to move to a designated area based on the current location.
  • the location recognition module 142 may analyze the acquisition image input from the image acquisition unit 120 and/or the geometry information acquired through the LiDAR sensor 175 to estimate the current location based on the map.
  • the obstacle recognition module 144 and the map creation module 143 may also recognize the current location in the same manner.
  • the traveling control module 141 calculates a traveling path from the current location to the designated area, and controls the traveling unit 160 to move to the designated area.
  • the traveling control module 141 may divide the entire traveling zone into a plurality of areas according to the received cleaning pattern information, and may set at least one area to a designated area.
  • the traveling control module 141 may calculate a traveling path according to the received cleaning pattern information, and may perform cleaning while traveling along the traveling path.
  • the controller 140 may store a cleaning record in the storage 130.
  • the controller 140 may periodically transmit the operation state or the cleaning state of the mobile robot 100 to the external terminal or the server through the communication unit 190.
  • the external terminal displays the location of the mobile robot with the map on the screen of an application that is being executed based on received data, and outputs information about the cleaning state.
  • the mobile robot 100 moves in one direction until an obstacle or a wall is sensed, and when the obstacle recognition module 144 recognizes the obstacle, the mobile robot may decide a traveling pattern, such as straight movement or turning, based on the attributes of the recognized obstacle.
  • the mobile robot 100 may continuously move straight.
  • the mobile robot 100 may turn, move a predetermined distance, and move to a distance from which the obstacle can be sensed in the direction opposite to the initial movement direction, i.e. may travel in a zigzag fashion.
  • the mobile robot 100 may perform human and object recognition and evasion based on machine learning.
  • the controller 140 may include an obstacle recognition module 144 for recognizing an obstacle pre-learned based on machine learning in an input image and a traveling control module 141 for controlling driving of the traveling unit 160 based on the attributes of the recognized obstacle.
  • the mobile robot 100 may include an obstacle recognition module 144 that has learned the attributes of an obstacle based on machine learning.
  • Machine learning means that a computer learns from data and solves a problem based on that learning, without a human directly programming the logic into the computer.
  • Deep learning is artificial intelligence technology in which computers can learn for themselves, like humans, based on an artificial neural network (ANN) constituting artificial intelligence, without humans explicitly teaching the computers.
  • the artificial neural network may be realized in the form of software or the form of hardware, such as a chip.
  • the obstacle recognition module 144 may include a software- or hardware-type artificial neural network (ANN) that has learned the attributes of an obstacle.
  • the obstacle recognition module 144 may include a deep neural network (DNN) that has been trained based on deep learning, such as a convolutional neural network (CNN), a recurrent neural network (RNN), or a deep belief network (DBN).
  • the obstacle recognition module 144 may discriminate the attributes of an obstacle included in input image data based on weights between nodes included in the deep neural network (DNN).
  • the controller 140 may discriminate the attributes of an obstacle present in a moving direction using only a portion of an image acquired by the image acquisition unit 120, especially the front camera sensor 120a, not using the entirety of the image.
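  • As an illustration only (the disclosure does not fix a network architecture; the tiny CNN, the attribute list, and the crop box below are invented for this sketch, assuming PyTorch), classifying the attributes of an obstacle from a cropped portion of the front-camera image might look like this.

```python
# Illustrative sketch: run a small CNN over only a cropped portion of the
# front-camera image to discriminate obstacle attributes (PyTorch assumed).
import torch
import torch.nn as nn

CLASSES = ["person", "chair", "cable", "none"]   # hypothetical attribute set

class TinyObstacleNet(nn.Module):
    def __init__(self, num_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def classify_obstacle(image_tensor, box, model):
    """image_tensor: (3, H, W); box: (x0, y0, x1, y1) region in the moving direction."""
    x0, y0, x1, y1 = box
    crop = image_tensor[:, y0:y1, x0:x1].unsqueeze(0)   # use only a portion of the image
    with torch.no_grad():
        logits = model(crop)
    return CLASSES[int(logits.argmax(dim=1))]

if __name__ == "__main__":
    model = TinyObstacleNet().eval()                    # untrained, for a shape check only
    frame = torch.rand(3, 240, 320)
    print(classify_obstacle(frame, (100, 80, 220, 200), model))
```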
  • the traveling control module 141 may control driving of the traveling unit 160 based on the attributes of the recognized obstacle.
  • the storage 130 may store input data for discriminating the attributes of an obstacle and data for training the deep neural network (DNN).
  • the storage 130 may store the original image acquired by the image acquisition unit 120 and extracted images of predetermined areas.
  • the storage 130 may store weights and biases constituting the structure of the deep neural network (DNN).
  • the weights and biases constituting the structure of the deep neural network may be stored in an embedded memory of the obstacle recognition module 144.
  • the obstacle recognition module 144 may perform a learning process using the extracted image as training data, or after a predetermined number or more of extracted images are acquired, the obstacle recognition module may perform the learning process.
  • the obstacle recognition module 144 may add the result of recognition to update the structure of the deep neural network (DNN), such as weights, or after a predetermined number of training data are secured, the obstacle recognition module may perform the learning process using the secured training data to update the structure of the deep neural network (DNN), such as weights.
  • the mobile robot 100 may transmit the original image acquired by the image acquisition unit 120 or extracted images to a predetermined server through the communication unit 190, and may receive data related to machine learning from the predetermined server.
  • the mobile robot 100 may update the obstacle recognition module 144 based on the data related to machine learning received from the predetermined server.
  • the mobile robot 100 may further include an output unit 180 for visibly displaying or audibly outputting predetermined information.
  • the output unit 180 may include a display (not shown) for visibly displaying information corresponding to user command input, the result of processing corresponding to the user command input, an operation mode, an operation state, an error state, etc.
  • the display may be connected to a touchpad in a layered structure so as to constitute a touchscreen.
  • the display constituting the touchscreen may also be used as an input device for allowing a user to input information by touch, in addition to an output device.
  • the output unit 180 may further include a sound output unit (not shown) for outputting an audio signal.
  • the sound output unit may output an alarm sound, a notification message about an operation mode, an operation state, and an error state, information corresponding to user command input, and the processing result corresponding to the user command input in the form of sound under control of the controller 140.
  • the sound output unit may convert an electrical signal from the controller 140 into an audio signal, and may output the audio signal.
  • a speaker may be provided.
  • FIG. 6 is a flowchart showing a method of controlling a mobile robot according to an embodiment of the present invention, which is a flowchart showing a map creation process, and FIGS. 7 to 10 are reference views illustrating the control method of FIG. 6.
  • FIGS. 7 and 8 are conceptual views illustrating a traveling and information acquisition process (S601), a node creation process (S602), a node map creation process (S603), a border creation process (S604), a border map creation process (S605), and a descriptor creation process (S606) of FIG. 6.
  • FIG. 7 shows an image acquired in process S601 and a plurality of feature points f1, f2, f3, f4, f5, f6, and f7 in the image, and shows a diagram of creating descriptors, which are n-dimensional vectors respectively corresponding to the feature points f1, f2, f3, ..., f7, in process S606.
  • the image acquisition unit 120 acquires an image at each point during traveling of the mobile robot 100.
  • the image acquisition unit 120 may perform capturing toward the upper side of the mobile robot 100 to acquire an image of a ceiling, etc.
  • a traveling obstacle factor may be sensed using the sensing unit 170, the image acquisition unit 120, or other well-known means during traveling of the mobile robot 100.
  • the mobile robot 100 may sense a traveling obstacle factor at each point.
  • the mobile robot may sense the outer surface of a wall, which is one of the traveling obstacle factors, at a specific point.
  • the mobile robot 100 creates a node corresponding to each point. Coordinate information corresponding to a node Na18, Na19, or Na20 may be created based on the traveling displacement measured by the mobile robot 100.
  • a node may mean data indicating any one location on a map corresponding to a predetermined point in a traveling zone, and, in graph-based SLAM, a node may mean the pose of a robot.
  • the pose may include location coordinate information (X, Y) and direction information θ in a coordinate system.
  • Node information may mean various data corresponding to the node.
  • a map may include a plurality of nodes and node information corresponding thereto.
  • Traveling displacement is a concept including the moving direction and the moving distance of the mobile robot. Assuming that the floor surface in the traveling zone is a plane in which the X and Y axes are orthogonal, the traveling displacement may be expressed as (Δx, Δy, θ). Δx and Δy may represent displacement in the X-axis and Y-axis directions, respectively, and θ may represent a rotational angle.
  • the controller 140 may measure the traveling displacement of the mobile robot 100 based on the operation of the traveling unit 160.
  • the traveling control module 141 may measure the current or past movement velocity, the traveling distance, etc. of the mobile robot 100 based on the rotational speed of the driving wheel 136, and may also measure the current or past direction change process based on the rotational direction of the driving wheel 136.
  • the controller 140 may measure the traveling displacement using data sensed by the sensing unit 170.
  • the traveling displacement may be measured using a wheel sensor connected to each of the left wheel 136(L) and the right wheel 136(R) to sense the number of rotations of the wheels, such as an encoder.
  • the controller 140 may calculate the rotational velocity of each of the left and right wheels using the number of rotations thereof. In addition, the controller 140 may calculate the rotational angle of each of the left wheel 136(L) and the right wheel 136(R) using the difference in the number of rotations therebetween.
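  • A minimal sketch of this wheel-encoder calculation is shown below; the wheel radius, track width, and ticks-per-revolution values are placeholders, not values taken from this disclosure.

```python
# Illustrative sketch: differential-drive odometry from left/right encoder tick counts.
import math

TICKS_PER_REV = 1024      # placeholder encoder resolution
WHEEL_RADIUS = 0.035      # meters, placeholder
WHEEL_BASE = 0.23         # distance between the left and right wheels, placeholder

def wheel_odometry(d_ticks_left, d_ticks_right, pose):
    """pose: (x, y, theta); returns the pose updated by one encoder interval."""
    dl = 2 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
    dr = 2 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
    d_center = (dl + dr) / 2.0                # forward distance of the robot center
    d_theta = (dr - dl) / WHEEL_BASE          # rotational angle from the wheel difference
    x, y, theta = pose
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)

if __name__ == "__main__":
    pose = (0.0, 0.0, 0.0)
    for left, right in [(100, 100), (100, 120), (90, 90)]:   # sample tick deltas
        pose = wheel_odometry(left, right, pose)
    print(pose)     # accumulated traveling displacement as an (x, y, theta) pose
```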
  • an encoder has a limitation in that errors are accumulated as integration is continuously performed. More preferably, therefore, the controller 140 may create odometry information, such as traveling displacement, based on sensing data of the LiDAR sensor 175.
  • the controller 140 may fuse sensing data sensed by the wheel sensor and sensing data of the LiDAR sensor 175 to create more accurate odometry information.
  • the controller may fuse sensing data of the traveling sensor and the result of iterative closest point (ICP) matching of the LiDAR sensor 175 to create odometry information.
  • the mobile robot 100 creates border information b20 corresponding to a traveling obstacle factor.
  • the mobile robot 100 may create border information corresponding to each traveling obstacle factor.
  • a plurality of traveling obstacle factors may achieve one-to-one correspondence to a plurality of pieces of border information.
  • the border information b20 may be created based on coordinate information of a corresponding node and a distance value measured by the sensing unit 170.
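  • The conversion from a node pose and a sensed distance to border coordinates might be sketched as follows; the node pose, sensing bearing, and range are illustrative values, and this is not presented as the exact rule used by the map creation module 143.

```python
# Illustrative sketch: create a border point from node coordinates and the
# distance/bearing measured toward a traveling obstacle factor (e.g. a wall surface).
import math

def border_point(node_pose, bearing, distance):
    """node_pose: (x, y, theta) of the node; bearing: sensing angle in the robot frame."""
    x, y, theta = node_pose
    bx = x + distance * math.cos(theta + bearing)
    by = y + distance * math.sin(theta + bearing)
    return (bx, by)

if __name__ == "__main__":
    node_na20 = (1.5, 2.0, math.radians(90))             # hypothetical node pose
    print(border_point(node_na20, math.radians(-30), 0.8))
```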
  • the node map creation process (S603) and the border map creation process (S605) are performed simultaneously.
  • a node map including a plurality of nodes Na18, Na19, Na20, and the like is created.
  • a border map Ba including a plurality of pieces of border information b20 and the like is created.
  • a map Ma including the node map and the border map Ba is created in the node map creation process (S603) and the border map creation process (S605).
  • FIG. 6 shows a map Ma being created through the node map creation process (S603) and the border map creation process (S605).
  • the acquired image includes various feature points, such as lighting located in the ceiling, an edge, a corner, a blob, and a ridge.
  • the mobile robot 100 extracts feature points from an image.
  • Various feature detection methods of extracting feature points from an image are well known in the field of computer vision.
  • Various feature detectors suitable for extracting these feature points are known. For example, there are Canny, Sobel, Harris & Stephens/Plessey, SUSAN, Shi & Tomasi, Level curve curvature, FAST, Laplacian of Gaussian, Difference of Gaussians, Determinant of Hessian, MSER, PCBR, and Gray-level blobs detector.
  • descriptors are created based on the plurality of feature points f1, f2, f3, ..., f7 extracted from the acquired image.
  • descriptors are created based on a plurality of feature points f1, f2, f3, ..., fm extracted from a plurality of acquired images (where m is a natural number).
  • a plurality of feature points f1, f2, f3, ..., fm achieves one-to-one correspondence to a plurality of descriptors.
  • the values f1(1), f1(2), f1(3), ..., f1(n) in the curly brackets { } of a descriptor denote the numerical values of each dimension forming the descriptor of the feature point f1. Since the notation for the remaining descriptors uses the same method, a description thereof will be omitted.
  • a plurality of descriptors corresponding to the plurality of feature points f1, f2, f3, ..., fm may be created by using scale invariant feature transform (SIFT) technology for feature detection.
  • a descriptor, which is an n-dimensional vector, is created based on the distribution characteristics (the direction in which brightness changes and the abruptness of the change) of the brightness gradient of pixels belonging to a certain area around each feature point f1, f2, f3, f4, f5, f6, or f7.
  • the direction of each brightness change of the feature point may be regarded as each dimension, and it is possible to create an n-dimensional vector (descriptor) in which the abrupt degree of change in the direction of each brightness change is a numerical value for each dimension.
  • SIFT may detect invariant features with respect to the scale, rotation, and brightness change of an object to be captured, and thus may detect invariant features (i.e. a rotation-invariant feature) even when the same area is captured while the pose of the mobile robot 100 is changed.
  • For feature detection, various other methods, such as Histogram of Oriented Gradients (HOG), Haar features, Ferns, Local Binary Pattern (LBP), and Modified Census Transform (MCT), may also be applied.
  • FIG. 9 is a conceptual view showing a plurality of nodes N created by the mobile robot during movement and displacement C between the nodes.
  • traveling displacement C1 is measured while the origin node O is set, and information of a node N1 is created.
  • Traveling displacement C2 that is measured afterwards may be added to coordinate information of the node N1 which is the starting point of the traveling displacement C2 in order to create coordinate information of a node N2 which is the end point of the traveling displacement C2.
  • Traveling displacement C3 is measured in the state in which the information of the node N2 is created, and information of a node N3 is created.
  • Information of nodes N1, N2, N3,..., N16 is sequentially created based on traveling displacements C1, C2, C3,..., C16 that are sequentially measured as described above.
  • loop displacement means a measured value of displacement between any one node N15 and another adjacent node N5 which is not the 'base node N14' of the node N15.
  • acquisition image information corresponding to any one node N15 and acquisition image information corresponding to the other adjacent node N5 may be compared with each other such that the loop displacement (LC) between two nodes N15 and N5 can be measured.
  • the distance information between any one node N15 and the surrounding environment thereof may be compared with the distance information between the other adjacent node N5 and the surrounding environment thereof such that the loop displacement (LC) between the two nodes N15 and N5 can be measured.
  • FIG. 8 illustrates loop displacement LC1 measured between the node N5 and the node N15, and loop displacement LC2 measured between the node N4 and the node N16.
  • Information of any one node N5 created based on the traveling displacement may include node coordinate information and image information corresponding to the node.
  • image information corresponding to the node N15 may be compared with the image information corresponding to the node N5 to measure the loop displacement LC1 between the two nodes N5 and N15.
  • when the 'loop displacement LC1' and the 'displacement calculated according to the previously stored coordinate information of the two nodes N5 and N15' are different from each other, it is possible to update the coordinate information of the two nodes N5 and N15 by considering that there is an error in the node coordinate information.
  • coordinate information of the other nodes N6, N7, N8, N9, N10, N11, N12, N13, and N14 connected to the two nodes N5 and N15 may also be updated.
  • the node coordinate information, which is updated once, may be continuously updated through the above process.
  • In the case of updating the node coordinate information, only the node coordinate information of the first loop node and the second loop node may be updated. However, since the error occurs by accumulating the errors of the traveling displacements, it is possible to disperse the error and to set the node coordinate information of other nodes to be updated as well. For example, the node coordinate information may be updated by distributing the error values to all the nodes created by the traveling displacement between the first loop node and the second loop node.
  • Referring to FIG. 8, when the loop displacement LC1 is measured and the error is calculated, the error may be dispersed to the nodes N6 to N14 between the first loop node N15 and the second loop node N5, such that all the node coordinate information of the nodes N5 to N15 is updated little by little.
  • It is also possible to update the node coordinate information of the other nodes N1 to N4 by expanding the error dispersion.
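  • A reduced sketch of the error-dispersion idea described above is given below: the loop-closure error is spread over the intermediate nodes with linearly growing weights. Real graph-based SLAM performs an optimization, so this linear weighting is an illustrative simplification.

```python
# Illustrative sketch: spread a loop-closure error over the nodes between the
# second loop node and the first loop node, little by little.
import numpy as np

def distribute_loop_error(node_coords, i_first, i_second, loop_error):
    """node_coords: (N, 2) array of node (x, y); loop_error: 2-vector residual.
    Nodes from i_second to i_first receive a linearly growing share of the correction."""
    coords = np.array(node_coords, dtype=float)
    idx = list(range(i_second, i_first + 1))            # e.g. N5 .. N15 (zero-based indices)
    n = len(idx)
    for k, i in enumerate(idx):
        weight = (k + 1) / n                            # growing share; last node gets the full correction
        coords[i] -= weight * np.asarray(loop_error, dtype=float)
    return coords

if __name__ == "__main__":
    nodes = np.cumsum(np.ones((16, 2)) * 0.5, axis=0)   # fake dead-reckoned nodes N1..N16
    corrected = distribute_loop_error(nodes, i_first=14, i_second=4, loop_error=[0.3, -0.2])
    print(corrected[4], corrected[14])                  # N5 slightly corrected, N15 fully corrected
```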
  • FIG. 10 is a conceptual view showing an example of the first map Ma, and is a view including a created node map.
  • FIG. 10 shows an example of any one map Ma created through the map creation step of FIG. 6.
  • the map Ma may include a node map and a border map Ba.
  • the node map may include a plurality of first nodes Na1 to Na99.
  • any one map Ma may include node maps Na1, Na2, ..., Na99 and a border map Ba.
  • a node map refers to information consisting of a plurality of nodes among various kinds of information in a single map.
  • a border map refers to information consisting of a plurality of pieces of border information among various kinds of information in a single map.
  • the node map and the border map are elements of the map, and the processes of creating the node map (S602 and S603) and the processes of creating the border map (S604 and S605) are performed simultaneously.
  • border information may be created based on the pre-stored coordinate information of a node corresponding to a specific point, after measuring the distance between the traveling obstacle factor and the specific point.
  • the node coordinate information of the node may be created based on the pre-stored border information corresponding to a specific obstacle factor, after measuring the distance of the specific point away from the specific obstacle.
  • of the node and the border information, one may be created on the map based on previously stored relative coordinates of the one with respect to the other.
  • the map may include image information created in process S606.
  • a plurality of nodes achieves one-to-one correspondence to a plurality of pieces of image information.
  • Specific image information corresponds to a specific node.
  • FIG. 11 is a flowchart showing a method of controlling a mobile robot according to another embodiment of the present invention.
  • the mobile robot 100 may include the LiDAR sensor 175 for acquiring geometry information of the outside of the main body 110 and the camera sensor 120b for acquiring an image of the outside of the main body 110.
  • the mobile robot 100 may acquire geometry information of a traveling zone through the LiDAR sensor 175 during operation thereof (S1110).
  • the mobile robot 100 may acquire image information of the traveling zone through the image acquisition unit 120, such as the camera sensor 120b, during operation thereof (S1120).
  • the controller 140 may create odometry information based on sensing data of the LiDAR sensor 175 (S1130).
  • the controller 140 may compare surrounding geometry information based on sensing data sensed at a specific location through the LiDAR sensor 175 with pre-stored geometry information based on the LiDAR sensor to create odometry information, such as traveling displacement.
  • the controller 140 may fuse sensing data of the traveling sensor, such as the wheel sensor, which senses the traveling state based on the movement of the main body 110 and sensing data of the LiDAR sensor 175 to create more accurate odometry information.
  • An encoder connected to each of the left wheel 136(L) and the right wheel 136(R) to sense and output the number of rotations of the wheels may be used as the wheel sensor.
  • the encoder has a limitation in that errors are accumulated as integration is continuously performed.
  • the controller may fuse sensing data of the traveling sensor and the result of the iterative closest point (ICP) matching of the LiDAR sensor to create odometry information.
  • sensing data of the LiDAR sensor 175 may be matched according to an iterative closest point (ICP) algorithm, and, in the step of creating the odometry information (S1130), the controller 140 may fuse sensing data of the traveling sensor and the result of iterative closest point (ICP) matching of the LiDAR sensor 175 to create the odometry information.
  • the controller 140 may detect two points having the closest distance between pieces of information acquired through the LiDAR sensor 175 at different points in time, and may set the two detected points as corresponding points.
  • the controller 140 may detect odometry information related to traveling displacement of the mobile robot 100 using the amount of movement that makes the locations of the set corresponding points coincide with each other.
  • the controller 140 may detect location information related to the current point of the mobile robot 100 using location information related to the point at which movement starts (the previous location) and the detected traveling displacement.
  • as the result of matching data according to the ICP algorithm, the data acquired by the LiDAR sensor 175 may be aligned with pre-stored data at the location at which the distance between points is minimized. Consequently, the location of the mobile robot 100 may be estimated. In addition, odometry information may be created based on the previous location.
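  • A compact point-to-point ICP sketch is given below (nearest-neighbour correspondences plus an SVD-based rigid transform). The use of scipy's KD-tree, the iteration limits, and the 2-D formulation are assumptions for illustration, not the specific ICP variant claimed here.

```python
# Illustrative sketch: 2-D point-to-point ICP between the current LiDAR scan and a
# reference point set, returning the rigid transform (rotation R, translation t).
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(source, target, max_iter=30, tol=1e-6):
    """source, target: (N, 2) and (M, 2) point sets in meters."""
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    tree = cKDTree(tgt)
    R_total, t_total = np.eye(2), np.zeros(2)
    prev_err = np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(src)                     # closest-point correspondences
        matched = tgt[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)           # cross-covariance of the pairs
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                        # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total

if __name__ == "__main__":
    angle = np.radians(5.0)
    Rgt = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
    ref = np.random.rand(200, 2)
    scan = ref @ Rgt.T + np.array([0.05, -0.02])        # scan = rotated/translated reference
    R, t = icp_2d(scan, ref)
    print(np.degrees(np.arctan2(R[1, 0], R[0, 0])), t)  # approx. -5 degrees and the inverse shift
```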
  • odometry information based on sensing data of the LiDAR sensor 175 may be created using another algorithm.
  • the controller 140 may perform matching of feature points between images input from the camera sensor 120b based on the odometry information (S1140), and may estimate the current location based on the result of matching of the feature points (S1150).
  • the controller 140 detects various features, such as lighting located at the ceiling, an edge, a corner, a blob, and a ridge, from the image input from the camera sensor 120b.
  • the controller 140 calculates a recognition descriptor through the recognition descriptor calculation step based on each recognition feature point, and performs conversion into information (sub-recognition feature distribution) comparable with location information that becomes a comparison target (for example, feature distribution of each location) according to the predetermined sub-conversion rule based on information about at least one recognition descriptor.
  • the controller 140 may match feature points between images input from the camera sensor 120b, or may match feature points extracted from an image input from the camera sensor 120b with feature points of image information registered on the map.
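  • As an illustration of such feature-point matching (assuming OpenCV's brute-force matcher and Lowe's ratio test, which are common choices but are not mandated by this disclosure; file names are hypothetical), descriptors of the current frame might be matched against the descriptors stored for a map node as follows.

```python
# Illustrative sketch: match SIFT descriptors of the current frame against the
# descriptors registered for a map node, keeping only unambiguous matches.
import cv2

def match_descriptors(current_desc, map_desc, ratio=0.75):
    """current_desc, map_desc: float32 arrays of shape (N, 128) and (M, 128)."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(current_desc, map_desc, k=2)
    good = []
    for pair in knn:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])                        # Lowe's ratio test
    return good

if __name__ == "__main__":
    sift = cv2.SIFT_create()
    img_now = cv2.imread("frame_now.png", cv2.IMREAD_GRAYSCALE)    # hypothetical files
    img_map = cv2.imread("frame_node.png", cv2.IMREAD_GRAYSCALE)
    _, d_now = sift.detectAndCompute(img_now, None)
    _, d_map = sift.detectAndCompute(img_map, None)
    print(f"{len(match_descriptors(d_now, d_map))} good matches")
```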
  • the feature distribution of each location may be compared with the recognition feature distribution according to the predetermined sub-comparison rule to calculate the similarity therebetween. A similarity (probability) may be calculated for each location, and the location having the greatest calculated probability may be determined to be the current location of the mobile robot.
  • the controller 140 may register a node corresponding to the discriminated current location on the map (S1160).
  • the controller 140 may check a node within a predetermined reference distance based on a node corresponding to the current location on a node map to determine whether to register the node.
  • the node map may be a map including a plurality of nodes indicating the location of the robot calculated using sensing information, and may be a SLAM map.
  • the controller 140 may be configured to register a node only in the case in which additional meaningful information on the map is necessary.
  • the controller 140 may discriminate whether an edge (constraint) is present between the current node and all nodes within a predetermined distance based on the current location. This may be a determination as to whether feature points of the current node and an adjacent node are matched with each other. For example, in the case in which a corner point is present as the feature point, the corner point may be compared with the previous corner point to determine whether relative coordinates of the robot can be obtained, whereby it is possible to determine whether a correlation is present.
  • an edge joining nodes may be traveling displacement information between locations of the robot, odometry information, or constraint.
  • To create and add a correlation between nodes is to create an edge joining nodes.
  • creation of a correlation between nodes may mean calculation of a relative location between two nodes and an error value of the relative location.
  • an edge is relative coordinates between a node and the robot, and may indicate a relationship between nodes.
  • that an edge (constraint) is present may mean that partially overlapping sensing information is present between nodes.
  • the controller 140 may compare a candidate node within a predetermined distance based on the current node corresponding to the current location of the robot with the current node to check whether an edge is present.
  • In the case in which the edge is present, this may mean that a common feature is present between the nodes and that feature matching is also possible. Subsequently, an edge connected to the node corresponding to the current location is compared with the node information on the existing map.
  • the controller 140 may check the node information on the existing map, and, in the case in which all edges connected to the node corresponding to the current location are consistent, may not register the node on the map and may finish the process of determining whether to register the node on the map.
  • the controller 140 may perform iterative closest point (ICP) matching between the current node and an adjacent node based on sensing data of the LiDAR sensor 175 to add a correlation between nodes.
  • the controller 140 may use sensing data of the LiDAR sensor 175 for discrimination and creation of a correlation between nodes in the map creation and update process, irrespective of whether feature matching between images is successful.
  • the controller 140 may create the map, and may recognize the current location of the mobile robot 100 based on the pre-stored map.
  • the present invention is technology capable of securing high location recognition performance in various environments, and realizes a location recognition algorithm using different kinds of sensors having different physical properties.
  • SLAM technology may be divided into vision-based SLAM and laser-based SLAM.
  • In vision-based SLAM, a feature point is extracted from an image, three-dimensional coordinates are calculated through matching, and SLAM is performed based thereon.
  • Excellent performance is exhibited in self-location recognition.
  • In a dark environment, however, operation is difficult, and there is a scale drift problem in which a small object present nearby and a large object present far away are recognized similarly.
  • In laser-based SLAM, the distance at each angle is measured using a laser to calculate the geometry of the surrounding environment.
  • Laser-based SLAM works even in a dark environment. Since location is recognized using only geometry information, however, it may be difficult for the robot to find its own location when there is no initial location condition in a space having many repetitive areas, such as an office environment. In addition, it is difficult to cope with a dynamic environment, such as the movement of furniture.
  • In vision-based SLAM, accurate operation is difficult in a dark environment (an environment having no light).
  • In laser-based SLAM, self-location recognition is difficult in a dynamic environment (with moving objects) and a repetitive environment (with similar patterns), accuracy in matching between the existing map and the current frame and in loop closing is lowered, and it is difficult to make a landmark, whereby it is difficult to cope with a kidnapping situation.
  • features of different kinds of sensors such as the camera sensor 120b and the LiDAR sensor 175 may be applied complementarily, whereby SLAM performance may be remarkably improved.
  • encoder information and the iterative closest point (ICP) result of the LiDAR sensor 175 may be fused to create odometry information.
  • 3D restoration may be performed through feature matching between input images based on the odometry information, and the current location (the amount of displacement of the robot) may be calculated, whereby it is possible to accurately estimate the current location.
  • the estimated current location may be corrected to discriminate the final current location (S1170).
  • uncertainty of the estimated current location may be calculated considering surrounding geometry information based on sensing data of the LiDAR sensor 175, and correction may be performed in order to minimize the value of uncertainty, whereby it is possible to accurately discriminate the final current location.
  • uncertainty of the current location is a reliability value of the estimated current location, and may be calculated in the form of probability or dispersion.
  • uncertainty of the estimated current location may be calculated as covariance.
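  • One simple, commonly used way to realize such a correction is an information-weighted (inverse-covariance-weighted) combination of the vision-based estimate and the LiDAR-based estimate; the weighting below is an assumption for illustration, not the specific correction claimed here.

```python
# Illustrative sketch: fuse a vision-based location estimate with a LiDAR-based one
# by weighting each with the inverse of its covariance (its uncertainty).
import numpy as np

def fuse_estimates(x_vision, P_vision, x_lidar, P_lidar):
    """x_*: 2-D location estimates; P_*: 2x2 covariance matrices."""
    W_v = np.linalg.inv(P_vision)
    W_l = np.linalg.inv(P_lidar)
    P_fused = np.linalg.inv(W_v + W_l)            # fused uncertainty is smaller than either input
    x_fused = P_fused @ (W_v @ x_vision + W_l @ x_lidar)
    return x_fused, P_fused

if __name__ == "__main__":
    x_v, P_v = np.array([2.00, 1.10]), np.diag([0.09, 0.09])   # uncertain vision estimate
    x_l, P_l = np.array([1.92, 1.02]), np.diag([0.01, 0.01])   # confident LiDAR estimate
    x, P = fuse_estimates(x_v, P_v, x_l, P_l)
    print(x, np.diag(P))     # the result lies closer to the more certain LiDAR estimate
```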
  • node information may be corrected using a node corresponding to the finally discriminated current location, and may be registered on the map.
  • the controller 140 may include a LiDAR service module 1020 (see FIG. 12) for receiving sensing data of the LiDAR sensor 175 and discriminating the amount of location displacement using geometry information based on the sensing data of the LiDAR sensor 175 and previous location information, and a vision service module 1030 (see FIG. 12) for receiving the amount of location displacement from the LiDAR service module 1020, receiving an image from the camera sensor 120b, discriminating the location of a feature point through matching between a feature point extracted from the current image based on the amount of location displacement and a feature point extracted from the previous location, and estimating the current location based on the discriminated location of the feature point.
  • the amount of location displacement may be the traveling displacement.
  • node information including the calculated current location information may be stored in the storage 130.
  • the vision service module 1030 may transmit the node information to the LiDAR service module 1020, and the LiDAR service module 1020 may reflect the amount of location displacement that the mobile robot 100 has moved while the vision service module 1030 calculates the current location in the node information to discriminate the current location of the mobile robot 100. That is, the current location may be corrected to discriminate the final current location (S1170).
  • the mobile robot 100 may include a traveling sensor for sensing the traveling state of the mobile robot based on the movement of the main body 110.
  • the mobile robot 100 may have a sensor, such as an encoder.
  • the controller 140 may further include a traveling service module 1010 (see FIG. 12) for reading sensing data of the traveling sensor, the traveling service module 1010 may transmit the sensing data of the traveling sensor to the LiDAR service module 1020, and the LiDAR service module 1020 may fuse odometry information based on the sensing data of the traveling sensor and the ICP result of the LiDAR sensor 175 to create the odometry information.
  • the mobile robot 100 may perform loop closing based on a relative location between two adjacent nodes using graph-based SLAM technology.
  • the controller 140 may correct location data of each node such that the sum of error values of correlations between nodes constituting a path graph is minimized.
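  • A reduced, translation-only sketch of this correction is given below: node positions are adjusted so that the sum of squared errors of the relative-displacement correlations (edges) is minimized, with node 0 held fixed. Real graph-based SLAM also optimizes orientation, so the toy graph and names here are illustrative only.

```python
# Illustrative sketch: least-squares correction of node positions so that the sum of
# squared errors of the relative-displacement constraints (edges) is minimized.
import numpy as np

def optimize_nodes(num_nodes, edges):
    """edges: list of (i, j, dx, dy) meaning position_j - position_i ≈ (dx, dy)."""
    # Unknowns: positions of nodes 1..num_nodes-1 (node 0 is anchored at the origin).
    A = np.zeros((2 * len(edges), 2 * (num_nodes - 1)))
    b = np.zeros(2 * len(edges))
    for row, (i, j, dx, dy) in enumerate(edges):
        b[2 * row:2 * row + 2] = (dx, dy)
        if i > 0:
            A[2 * row, 2 * (i - 1)] = -1.0
            A[2 * row + 1, 2 * (i - 1) + 1] = -1.0
        if j > 0:
            A[2 * row, 2 * (j - 1)] = 1.0
            A[2 * row + 1, 2 * (j - 1) + 1] = 1.0
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.vstack([np.zeros(2), sol.reshape(-1, 2)])

if __name__ == "__main__":
    # Square path with a slightly inconsistent loop-closure edge from node 3 back to node 0.
    edges = [(0, 1, 1.0, 0.0), (1, 2, 0.0, 1.0), (2, 3, -1.0, 0.0), (3, 0, 0.05, -1.02)]
    print(optimize_nodes(4, edges))   # corrected positions spread the inconsistency over the loop
```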
  • the controller 140 may calculate the current location based on sensing data of the LiDAR sensor 175 in an area having an illuminance less than a reference value, and may perform loop closing to correct an error when entering an area having an illuminance equal to or greater than the reference value. That is, LiDAR-based location recognition may be performed in a dark area having low illuminance. Since the LiDAR sensor 175 is not affected by illuminance, location recognition having the same performance is possible in a low-illuminance environment.
  • LiDAR-based SLAM has a shortcoming in that accuracy in loop closing is lowered. Consequently, loop closing may be performed after entering an area having sufficiently high illuminance. At this time, loop closing may be performed using image information acquired through the camera sensor 120b. That is, LiDAR-based SLAM may be performed in a low-illuminance environment, and vision-based SLAM, such as loop closing, may be performed in other environments.
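  • The switching policy described above might be sketched as follows; the illuminance threshold, the function name, and the deferred-loop-closing flag are assumptions introduced only for illustration.

```python
# Illustrative sketch: choose the SLAM mode from the measured illuminance and defer
# loop closing until the robot is back in a sufficiently bright area.
ILLUMINANCE_THRESHOLD = 50.0   # lux, placeholder reference value

def select_slam_mode(illuminance_lux, loop_closing_pending):
    if illuminance_lux < ILLUMINANCE_THRESHOLD:
        # Dark area: localize with the LiDAR sensor only; errors may accumulate here.
        return {"mode": "lidar_slam", "run_loop_closing": False,
                "loop_closing_pending": True}
    # Bright area: vision-based SLAM is reliable again; close the loop if one is pending.
    return {"mode": "vision_slam", "run_loop_closing": loop_closing_pending,
            "loop_closing_pending": False}

if __name__ == "__main__":
    state = select_slam_mode(12.0, loop_closing_pending=False)      # entering a dark corridor
    print(state)
    state = select_slam_mode(300.0, state["loop_closing_pending"])  # back in a bright room
    print(state)
```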
  • a portion of the traveling zone may be dark, and a portion of the traveling zone may be bright.
  • when passing through a dark area, the mobile robot 100 according to the embodiment of the present invention switches to a mode using only the LiDAR sensor 175 and calculates its own location. At this time, a location error may be accumulated.
  • all node information including node information of a node created based on vision through loop closing and node information of a node created based on LiDAR may be optimized to minimize the accumulated errors.
  • the velocity of the mobile robot 100 may be decreased or the mobile robot 100 may be stopped, exposure of the camera sensor 120b may be maximized to obtain an image that is as bright as possible, and then vision-based SLAM may be performed.
  • each of the traveling service module 1010, the LiDAR service module 1020, and the vision service module 1030 may mean a software process or a main body that performs the software process.
  • FIGS. 12 and 13 are flowcharts showing a software process of the method of controlling the mobile robot according to the embodiment of the present invention, and show a fusion sequence of vision and LiDAR.
  • each of the traveling service module 1010, the LiDAR service module 1020, and the vision service module 1030 may be a software process.
  • FIGS. 14 to 18 are reference views illustrating the method of controlling the mobile robot according to the embodiment of the present invention.
  • the traveling service module 1010 may transmit sensing data of the traveling sensor, such as an encoder, to the LiDAR service module 1020 and the vision service module 1030 (S1210).
  • the traveling state of the mobile robot based on the movement of the main body 110 may be sensed through the traveling sensor, and the traveling service module 1010 may transmit the sensing data of the traveling sensor to the LiDAR service module 1020 and the vision service module 1030 (S1210).
  • the traveling service module 1010 may read the encoder value of the wheel at a rate of 50 Hz, and may transmit the same to the LiDAR service module 1020.
  • the vision service module 1030 may request odometry information from the LiDAR service module 1020 (S1220).
  • the LiDAR service module 1020 may respond to the request of the vision service module 1030 (S1225), and may create odometry information (S1240).
  • the LiDAR service module 1020 may receive sensing data of the LiDAR sensor 175, and may discriminate the amount of location displacement of the mobile robot 100 using geometry information based on the received sensing data of the LiDAR sensor 175 and previous location information.
  • the LiDAR service module 1020 may fuse odometry information based on the sensing data of the traveling sensor and the ICP result of the LiDAR sensor 175 to create odometry information, whereby the two data may not be used simply in parallel but may be used to accurately calculate odometry information.
  • the vision service module 1030 may request image data from a camera service module 1040 for reading image information acquired by the image acquisition unit 120 (S1230), and may receive image data from the camera service module 1040 (S1235).
  • the LiDAR service module 1020 may transmit information about the discriminated amount of location displacement to the vision service module 1030 (S1245).
  • the vision service module 1030 may receive information about the discriminated amount of location displacement from the LiDAR service module 1020 (S1245), may receive the image data from the camera service module 1040 (S1235), and may discriminate the location of a feature point through matching between a feature point extracted from the current image based on the amount of location displacement and a feature point extracted from the previous location (S1250), whereby an image feature point based on LiDAR-based odometry information may be matched (S1140).
  • the controller 140 may register node information including the calculated current location information on the map, and may store the map having the added or updated node information in the storage 130.
  • the vision service module 1030 may transmit the node information to the LiDAR service module 1020 (S1260), and the LiDAR service module 1020 may calculate the amount of location displacement that the mobile robot 100 has moved while the vision service module 1030 calculates the current location and may reflect the same in the received node information to discriminate the final current location of the mobile robot 100 (S1270). That is, the LiDAR service module 1020 may correct the current location estimated by the vision service module 1030 to discriminate the final current location (S1170 and S1270).
  • the LiDAR service module 1020 may register the node information corresponding to the discriminated final location on the map, or may output the same to another module in the controller 140 (S1280).
  • a SLAM service module 1035 may perform a SLAM-related process as well as vision-based SLAM.
  • the SLAM service module 1035 may be realized to perform the function of the vision service module 1030.
  • the SLAM service module 1035 and the LiDAR service module 1020 may receive the encoder value of the wheel from the traveling service module 1010.
  • the SLAM service module 1035 may request LiDAR data from the LiDAR service module 1020 (S1310), and the LiDAR service module 1020 may transmit a response indicating that LiDAR data are ready to be provided to the SLAM service module 1035 (S1315).
  • the LiDAR service module 1020 may predict the current location based on the previous location of the mobile robot 100 and the encoder value of the wheel, may estimate the amount of location displacement and the current location using geometry information input from the LiDAR sensor 175 (S1330), and may transmit an estimated value to the SLAM service module 1035 (S1340).
  • the LiDAR service module 1020 may transmit odometry information including the amount of location displacement and a probability value of uncertainty in the form of covariance to the SLAM service module 1035.
  • the SLAM service module 1035 may request an image from the camera service module 1040 (S1320). At this time, the SLAM service module 1035 may request image information corresponding to the encoder value received from the traveling service module 1010.
  • the SLAM service module 1035 may receive an image from the camera service module 1040 (S1325), and may calculate the location of a 3D feature point through matching between a feature point extracted from the current image based on the amount of location displacement input from the LiDAR service module 1020 and a feature point extracted from the previous location (S1350).
  • the SLAM service module 1035 may calculate the amount of displacement that the mobile robot 100 has moved and the current location based on the calculated 3D points (S1350).
  • the SLAM service module 1035 may store the calculated result as node information having the form of a node.
  • the stored node information may include node index information to be registered, global pose information (X, Y, θ), and a global uncertainty value.
  • the SLAM service module 1035 stores the calculated result in the form of a node, and provides the node information to the LiDAR service module 1020 (S1360).
  • the LiDAR service module 1020 may add the location to which the mobile robot has moved during calculation of the SLAM service module 1035 to find out the current location of the robot.
  • LiDAR SLAM using sensing data of the LiDAR sensor 175 has an advantage in that this SLAM is not affected by change in illuminance.
  • mapping and location recognition are possible using only LiDAR SLAM, and the LiDAR sensor 175 may be utilized as a sensor for sensing an obstacle and setting a traveling direction.
  • the environment A of FIG. 14 is an environment in which the robot, after traveling while evading obstacles 1411, 1412, 1413, and 1414, returns to places through which it has already moved, so an error does not become bigger, whereby normal map creation is possible using only LiDAR SLAM.
  • the mobile robot 100 continuously moves while looking at new places in an environment shown in FIG. 15, whereby an error becomes bigger.
  • when the mobile robot 100 returns to the first departure point 1500 in this state, it is difficult to know whether the place at which the mobile robot is located is the first departure point.
  • the environment B of FIG. 15 is an environment in which at least one 1511 of a plurality of obstacles 1511, 1512, and 1513 is present in the center of a space while having a large size, and therefore it is difficult to create a map due to a problem of loop closing in the case in which only LiDAR SLAM is used.
  • while the mobile robot 100 moves along a predetermined path 1610, it may discriminate whether any one point Px coincides with the departure point Po, and, upon determining that they are the same point, may perform optimization for minimizing an error, whereby graph-based SLAM may be performed.
  • the error may be corrected according to an error correction algorithm to modify path information based on an accurate path 1620, whereby accurate location recognition and map preparation are possible.
  • In vision SLAM, performance may change depending on illuminance; a difference in performance may therefore be generated as the quantity of features detected from an image is reduced due to low illuminance.
  • when illuminance is very low, it is impossible to extract features from an acquired image, whereby mapping and location recognition may also be impossible.
  • a limitation in which operation is difficult in a low-illuminance environment in the case in which vision SLAM is used may be overcome through location recognition technology using the LiDAR sensor 175.
  • a map of LiDAR SLAM may be corrected by loop closing and error correction of vision SLAM, whereby it is possible to reduce a LiDAR mapping error in the environment B of FIG. 15.
  • LiDAR SLAM using the LiDAR sensor 175 and vision SLAM using the image acquisition unit 120 may be utilized complementarily, whereby stable self-location recognition is possible in both a dark area and a bright area during movement of the robot.
  • LiDAR SLAM using the LiDAR sensor 175 may be performed first (S1710), vision SLAM having an advantage in loop closing may be performed (S1720), and a map may be created and stored (S1730).
  • a map may be created using LiDAR SLAM (S1710), and the map created using LiDAR SLAM may be corrected through loop closing and error correction of vision SLAM (S1720), whereby a final map may be created (S1730).
  • FIG. 18 illustrates a map 1810 created using LiDAR SLAM and a map 1820 on which loop closing and error correction has been performed.
  • vision SLAM may be performed based on odometry information based on sensing data of the LiDAR sensor 175, as described with reference to FIGS. 1 to 13.
  • the movement amount of the mobile robot 100 for a time necessary to perform a vision SLAM operation process may be additionally reflected to discriminate the current location of the mobile robot 100.
  • a correlation with adjacent nodes may be calculated by the LiDAR sensor 175 even in the case in which image-based feature matching is not successfully performed. That is, in the case in which image-based feature matching is not successfully performed, information may be provided to the LiDAR service module 1020, and the LiDAR service module 1020 may create a correlation (constraint). Consequently, optimization using more plentiful correlations is possible.
  • the vision service module 1030 may further request a correlation from the LiDAR service module 1020.
  • the LiDAR service module 1020 may perform ICP matching between the current node and an adjacent node to add constraint. Constraint between nodes may be added therethrough, and therefore accurate location estimation is possible using constraints between many more nodes.
  • when only vision-based SLAM is used, scale drift may occur.
  • when LiDAR-based geometry information is considered together, however, it is possible to minimize scale drift.
  • FIG. 19 is a reference view illustrating SLAM according to an embodiment of the present invention, and shows an embodiment in which correlations based on data acquired by the camera sensor 120b and the LiDAR sensor 175, i.e. constraints, are optimized by the SLAM service module 1035, which is a SLAM framework.
  • the SLAM service module 1035 of FIG. 19 may perform a SLAM-related process as well as vision-based SLAM, may be realized to perform the function of the vision service module 1030, and may also be referred to as a visual-LiDAR SLAM service.
  • the SLAM service module 1035 may receive image data from the camera sensor 120b.
  • the LiDAR service module 1020 may receive sensing data from the LiDAR sensor 175.
  • the LiDAR service module 1020 and the SLAM service module 1035 may receive odometry information acquired by the traveling service module 1010 from the traveling service module 1010.
  • the encoder 1011 may transmit odometry information based on the operation of the wheel during traveling of the mobile robot 100 to the LiDAR service module 1020 and the SLAM service module 1035.
  • the SLAM service module 1035 may request information about the correlation acquired by the LiDAR service module 1020 from the LiDAR service module 1020 (S1910).
  • the SLAM service module 1035 may request information about location relative to the preceding frame from the LiDAR service module 1020.
  • the information about location relative to the preceding frame, which is information about the relative location from the preceding location to the current location of the mobile robot 100, may be the amount of location displacement or information obtained from the result of ICP matching.
  • the SLAM service module 1035 may request loop displacement (loop constraint) from the LiDAR service module 1020. For example, an index of a frame to be matched and loop displacement (loop constraint) matched within a local map range may be requested.
  • the LiDAR service module 1020 may respond to the request of the SLAM service module 1035 (S1920). For example, the LiDAR service module 1020 may provide information about location relative to the preceding frame and loop displacement (loop constraint) matched within the local map range to the SLAM service module 1035.
  • the SLAM service module 1035 may combine constraints acquired from the camera sensor 120b and the LiDAR sensor 175.
  • the SLAM service module 1035 may fuse the result of vision SLAM with information received from the LiDAR service module 1020 to update node information and to create a SLAM map.
  • the SLAM service module 1035 may discriminate the current corrected location, and may correct a pose-graph including the poses of all nodes.
  • the SLAM service module 1035 may discriminate the current location of the mobile robot 100, and may add, delete, or change node information of the SLAM map in order to create a SLAM map or to update the created SLAM map.
  • the SLAM service module 1035 may transmit, to the LiDAR service module 1020, the current location of the mobile robot 100, corrected pose-graph information, frame index information for the local map corresponding to the current location, and information about the current node, deleted nodes, and connected nodes (S1930).
  • FIG. 20 is a reference view illustrating SLAM according to the embodiment of the present invention, and is a conceptual view showing the construction of a vision-LiDAR fusion SLAM service 2000.
  • the construction of the vision-LiDAR fusion SLAM service 2000 may be a software service, and vision SLAM and LiDAR SLAM may be different threads, which may operate asynchronously.
  • vision SLAM and LiDAR SLAM may be basically secured, and these may be combined by the vision-LiDAR fusion SLAM service 2000, whereby it is possible to secure improved performance.
  • a SLAM main 2010 may act as a hub for receiving data from each service module of the fusion SLAM service 2000, transmitting the same to the service module that needs it, and receiving responses therefrom.
  • Visual odometry (VO) 2050 may perform vision-based odometry discrimination for estimating the traveling distance from an image acquired by the camera sensor 120b.
  • the visual odometry 2050 may extract a feature point from the image acquired by the camera sensor 120b, and may perform feature extraction matching (FEM) 2070 (see the feature-matching sketch after this list).
  • a LiDAR service module 2015 may perform ICP matching 2085 to acquire odometry information, and fusion SLAM may be performed based thereon.
  • global pose may be discriminated using odometry information acquired through the visual odometry 2050 and/or odometry information acquired through ICP matching 2085.
  • a global pose tracker (GPT) 2020 may read the odometry information to discriminate global pose.
  • a global mapper (GM) 2030 may collect and optimize information discriminated within the local map range.
  • the global mapper 2030 may create a vocabulary tree (VT) 2060, which is a feature-point dictionary (see the place-recognition sketch after this list).
  • a kidnap recovery (KR) 2040 may recover the current location of the mobile robot 100 when location tracking fails, for example, when the mobile robot 100 is forcibly moved to another location.
  • the SLAM main 2010 may obtain loop displacement (loop constraint) from the global pose tracker 2020. In addition, the SLAM main 2010 may transmit a new frame, the amount of location displacement, and loop displacement (loop constraint) to a thread of the global mapper 2030.
  • the SLAM main 2010 may obtain feature point information and corrected location of the new frame from the visual odometry 2050, and may match the new frame with a pose-graph node of the global mapper 2030 to create loop displacement (loop constraint).
  • the global pose tracker 2020 may perform location estimation, and the SLAM main 2010 may update node information of the pose-graph based on the estimated location information.
  • the SLAM main 2010 may discriminate the current corrected location, and may correct a pose-graph including the poses of all nodes.
  • the SLAM main 2010 may discriminate the current location of the mobile robot 100, and may add, delete, or change node information of the SLAM map in order to create a SLAM map or to update the created SLAM map.
  • the mobile robot according to the present invention and the method of controlling the same are not limited to the constructions and methods of the embodiments described above; rather, all or some of the embodiments may be selectively combined to achieve various modifications.
  • the method of controlling the mobile robot according to the embodiment of the present invention may be implemented as code that can be written on a processor-readable recording medium and thus read by a processor.
  • the processor-readable recording medium may be any type of recording device in which data is stored in a processor-readable manner.
  • the processor-readable recording medium may include, for example, read only memory (ROM), random access memory (RAM), compact disc read only memory (CD-ROM), magnetic tape, a floppy disk, and an optical data storage device, and may be implemented in the form of a carrier wave transmitted over the Internet.
  • the processor-readable recording medium may be distributed over a plurality of computer systems connected to a network such that processor-readable code is written thereto and executed therefrom in a decentralized manner.
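The bullets above describe the encoder 1011 supplying wheel-based odometry to the LiDAR and SLAM service modules. As a point of reference only, the following is a minimal Python sketch of differential-drive wheel odometry; the encoder resolution, wheel radius, and wheel base are illustrative assumptions and are not taken from the present disclosure.

```python
import math

# Differential-drive odometry sketch. All three constants are assumed values
# for illustration; they are not specified in the disclosure.
TICKS_PER_REV = 2048      # encoder ticks per wheel revolution (assumed)
WHEEL_RADIUS = 0.035      # wheel radius in metres (assumed)
WHEEL_BASE = 0.23         # distance between drive wheels in metres (assumed)

def update_odometry(pose, d_ticks_left, d_ticks_right):
    """Integrate one pair of encoder tick deltas into an (x, y, theta) pose."""
    x, y, theta = pose
    d_left = 2.0 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
    d_right = 2.0 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
    d_center = (d_left + d_right) / 2.0        # forward displacement
    d_theta = (d_right - d_left) / WHEEL_BASE  # heading change
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta + math.pi) % (2.0 * math.pi) - math.pi
    return (x, y, theta)

pose = (0.0, 0.0, 0.0)
pose = update_odometry(pose, 120, 140)  # example tick deltas for one cycle
```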
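The relative displacement that the LiDAR service module 1020 reports between the preceding frame and the current frame is obtained by ICP matching. The sketch below is a minimal point-to-point 2-D ICP using only NumPy; the brute-force nearest-neighbour search, the fixed iteration count, and the synthetic example scans are simplifying assumptions rather than the ICP matching 2085 actually used.

```python
import numpy as np

def icp_2d(source, target, iterations=20):
    """Point-to-point ICP: align 2-D scan `source` to `target` and return the
    3x3 homogeneous transform (the relative displacement between frames)."""
    src = source.copy()
    T = np.eye(3)
    for _ in range(iterations):
        # Brute-force nearest neighbours (adequate for a sketch).
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        nn = target[d2.argmin(axis=1)]
        # Closed-form rigid alignment of the matched pairs (SVD / Kabsch).
        mu_s, mu_t = src.mean(0), nn.mean(0)
        H = (src - mu_s).T @ (nn - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:     # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        step = np.eye(3)
        step[:2, :2], step[:2, 2] = R, t
        T = step @ T
    return T

# Example: recover a small rotation and translation between synthetic scans.
rng = np.random.default_rng(0)
prev_scan = rng.uniform(-2.0, 2.0, size=(200, 2))
c, s = np.cos(0.05), np.sin(0.05)
curr_scan = prev_scan @ np.array([[c, -s], [s, c]]).T + np.array([0.10, 0.02])
T_rel = icp_2d(prev_scan, curr_scan)   # relative pose of current vs. preceding
```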
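One generic way to combine a camera-derived constraint and a LiDAR-derived constraint on the same motion, as the SLAM service module 1035 is described as doing, is inverse-covariance weighting of the two relative-pose estimates. The sketch below illustrates only that rule; the disclosure does not specify this particular weighting, and the covariances here are assumed to be provided by the respective front ends with heading components already expressed consistently.

```python
import numpy as np

def fuse_relative_poses(delta_vision, cov_vision, delta_lidar, cov_lidar):
    """Fuse two estimates of the same (dx, dy, dtheta) displacement by
    inverse-covariance (information) weighting."""
    info_v = np.linalg.inv(cov_vision)
    info_l = np.linalg.inv(cov_lidar)
    cov_fused = np.linalg.inv(info_v + info_l)
    delta_fused = cov_fused @ (info_v @ delta_vision + info_l @ delta_lidar)
    return delta_fused, cov_fused

# Example: trust LiDAR more for translation, the camera more for heading.
d_vis = np.array([0.11, 0.01, 0.052])
d_lid = np.array([0.09, 0.02, 0.045])
C_vis = np.diag([0.02, 0.02, 0.001])
C_lid = np.diag([0.005, 0.005, 0.004])
delta, cov = fuse_relative_poses(d_vis, C_vis, d_lid, C_lid)
```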
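The asynchronous, multi-threaded layout of the fusion SLAM service 2000, with the SLAM main 2010 acting as a hub, can be pictured with the toy sketch below. The worker names, timings, and queue payloads are purely illustrative.

```python
import queue
import threading
import time

hub = queue.Queue()   # consumed by the "SLAM main" loop below

def lidar_worker():
    for i in range(3):
        time.sleep(0.05)                  # stand-in for scan processing / ICP
        hub.put(("lidar_constraint", i))  # e.g. a relative pose or loop constraint

def vision_worker():
    for i in range(3):
        time.sleep(0.03)                  # stand-in for feature extraction/matching
        hub.put(("vision_frame", i))      # e.g. a new frame with feature points

workers = [threading.Thread(target=lidar_worker),
           threading.Thread(target=vision_worker)]
for w in workers:
    w.start()
for _ in range(6):                        # the hub consumes results as they arrive
    kind, payload = hub.get()
    print("SLAM main received:", kind, payload)
for w in workers:
    w.join()
```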
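For the feature extraction matching (FEM) 2070 performed by the visual odometry 2050, the sketch below shows a generic monocular front end built from ORB features, brute-force matching, and essential-matrix decomposition, assuming OpenCV is available and the camera intrinsic matrix K is known from calibration; the disclosure does not state that ORB or this particular pipeline is used.

```python
import cv2
import numpy as np

def relative_pose_from_images(img_prev, img_curr, K):
    """Estimate the rotation R and unit-scale translation t between two frames
    using ORB features, brute-force matching and the essential matrix."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```

Because a monocular front end recovers translation only up to an unknown scale, fusing it with wheel-encoder or LiDAR-derived odometry, as described above, is what makes the estimated displacement metrically meaningful.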
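Finally, the vocabulary tree (VT) 2060 is a feature-point dictionary of the kind used for loop detection and kidnap recovery. The sketch below uses a flat bag-of-visual-words lookup with cosine similarity instead of a hierarchical tree; the fixed set of "visual words" and the random stand-in descriptors are simplifying assumptions.

```python
import numpy as np

def bow_histogram(descriptors, words):
    """Quantise feature descriptors against a fixed set of visual words and
    return an L2-normalised bag-of-words histogram."""
    d2 = ((descriptors[:, None, :] - words[None, :, :]) ** 2).sum(-1)
    word_ids = d2.argmin(axis=1)
    hist = np.bincount(word_ids, minlength=len(words)).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)

def most_similar_keyframe(query_hist, keyframe_hists):
    """Return the index and score of the stored key frame closest to the query."""
    scores = np.array([float(query_hist @ h) for h in keyframe_hists])
    return int(scores.argmax()), float(scores.max())

# Example with random data standing in for real feature descriptors.
rng = np.random.default_rng(1)
words = rng.normal(size=(64, 32))   # 64 assumed visual words of dimension 32
keyframes = [bow_histogram(rng.normal(size=(150, 32)), words) for _ in range(5)]
query = bow_histogram(rng.normal(size=(150, 32)), words)
best_index, best_score = most_similar_keyframe(query, keyframes)
```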

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Robotics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Health & Medical Sciences (AREA)
  • Optics & Photonics (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to a mobile robot comprising a traveling unit configured to move a main body, a LiDAR sensor configured to acquire geometry information outside the main body, a camera sensor configured to acquire an image of the outside of the main body, and a controller configured to create odometry information based on the sensing data of the LiDAR sensor and to perform feature matching among images input from the camera sensor, based on the odometry information, in order to estimate the current location, which makes it possible to effectively fuse the camera sensor and the LiDAR sensor to accurately perform location estimation.
EP20776680.9A 2019-03-27 2020-03-26 Robot mobile et son procédé de commande Pending EP3946841A4 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020190035039A KR102243179B1 (ko) 2019-03-27 2019-03-27 이동 로봇 및 그 제어방법
PCT/KR2020/004147 WO2020197297A1 (fr) 2019-03-27 2020-03-26 Robot mobile et son procédé de commande

Publications (2)

Publication Number Publication Date
EP3946841A1 true EP3946841A1 (fr) 2022-02-09
EP3946841A4 EP3946841A4 (fr) 2022-12-28

Family

ID=72604014

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20776680.9A Pending EP3946841A4 (fr) 2019-03-27 2020-03-26 Robot mobile et son procédé de commande

Country Status (6)

Country Link
US (1) US11400600B2 (fr)
EP (1) EP3946841A4 (fr)
JP (1) JP7150773B2 (fr)
KR (1) KR102243179B1 (fr)
AU (1) AU2020247141B2 (fr)
WO (1) WO2020197297A1 (fr)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11348269B1 (en) * 2017-07-27 2022-05-31 AI Incorporated Method and apparatus for combining data to construct a floor plan
CN109828588A (zh) * 2019-03-11 2019-05-31 浙江工业大学 一种基于多传感器融合的机器人室内路径规划方法
US11960297B2 (en) * 2019-05-03 2024-04-16 Lg Electronics Inc. Robot generating map based on multi sensors and artificial intelligence and moving based on map
CN112215887B (zh) * 2019-07-09 2023-09-08 深圳市优必选科技股份有限公司 一种位姿确定方法、装置、存储介质及移动机器人
CA3158124A1 (fr) * 2019-10-15 2021-04-22 Alarm.Com Incorporated Navigation a l'aide de points de repere visuels selectionnes
US11741728B2 (en) * 2020-04-15 2023-08-29 Toyota Research Institute, Inc. Keypoint matching using graph convolutions
US11768504B2 (en) 2020-06-10 2023-09-26 AI Incorporated Light weight and real time slam for robots
KR20220064884A (ko) * 2020-11-12 2022-05-19 주식회사 유진로봇 3d slam 데이터 편집 장치 및 방법
EP4245474A1 (fr) * 2020-11-12 2023-09-20 Yujin Robot Co., Ltd. Système de robot
CN114714352B (zh) * 2020-12-29 2024-04-26 上海擎朗智能科技有限公司 机器人位姿信息确定方法、装置、设备及存储介质
CN112882475A (zh) * 2021-01-26 2021-06-01 大连华冶联自动化有限公司 麦克纳姆轮式全方位移动机器人的运动控制方法及装置
CN115129036A (zh) * 2021-03-26 2022-09-30 信泰光学(深圳)有限公司 移动装置及其移动方法
US20220358671A1 (en) * 2021-05-07 2022-11-10 Tencent America LLC Methods of estimating pose graph and transformation matrix between cameras by recognizing markers on the ground in panorama images
CN113359742B (zh) * 2021-06-18 2022-07-29 云鲸智能(深圳)有限公司 机器人及其越障方法、装置、计算机可读存储介质
CN113538579B (zh) * 2021-07-14 2023-09-22 浙江大学 基于无人机地图与地面双目信息的移动机器人定位方法
CN113552585B (zh) * 2021-07-14 2023-10-31 浙江大学 一种基于卫星地图与激光雷达信息的移动机器人定位方法
CN113777615B (zh) * 2021-07-19 2024-03-29 派特纳(上海)机器人科技有限公司 室内机器人的定位方法、系统及清洁机器人
JP2023019930A (ja) * 2021-07-30 2023-02-09 キヤノン株式会社 情報処理装置、移動体、情報処理装置の制御方法およびプログラム
CN114770541B (zh) * 2022-04-27 2022-10-21 南京农业大学 一种能够实现位移补偿的智能巡检机器人及智能巡检方法
KR20230161782A (ko) * 2022-05-19 2023-11-28 엘지전자 주식회사 이동 로봇 및 이동 로봇의 제어방법
WO2024019234A1 (fr) * 2022-07-21 2024-01-25 엘지전자 주식회사 Procédé de reconnaissance d'obstacle et robot mobile
CN116408808B (zh) * 2023-06-09 2023-08-01 未来机器人(深圳)有限公司 机器人取货检测方法及装置、机器人
CN116698046B (zh) * 2023-08-04 2023-12-01 苏州观瑞汽车技术有限公司 一种物业室内服务机器人建图定位和回环检测方法及系统

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100785784B1 (ko) * 2006-07-27 2007-12-13 한국전자통신연구원 인공표식과 오도메트리를 결합한 실시간 위치산출 시스템및 방법
KR101072876B1 (ko) 2009-03-18 2011-10-17 연세대학교 산학협력단 이동 로봇에서 자신의 위치를 추정하기 위한 방법 및 장치
US8209143B1 (en) * 2009-09-15 2012-06-26 Google Inc. Accurate alignment of multiple laser scans using a template surface
JP5776324B2 (ja) 2011-05-17 2015-09-09 富士通株式会社 地図処理方法及びプログラム、並びにロボットシステム
US9513107B2 (en) * 2012-10-05 2016-12-06 Faro Technologies, Inc. Registration calculation between three-dimensional (3D) scans based on two-dimensional (2D) scan data from a 3D scanner
KR101319526B1 (ko) 2013-03-26 2013-10-21 고려대학교 산학협력단 이동 로봇을 이용하여 목표물의 위치 정보를 제공하기 위한 방법
JP2016024598A (ja) * 2014-07-18 2016-02-08 パナソニックIpマネジメント株式会社 自律移動装置の制御方法
JP6411917B2 (ja) * 2015-02-27 2018-10-24 株式会社日立製作所 自己位置推定装置および移動体
US9630319B2 (en) * 2015-03-18 2017-04-25 Irobot Corporation Localization and mapping using physical features
KR101738750B1 (ko) * 2015-06-11 2017-05-24 한국과학기술원 실외 환경에서의 강인한 위치 인식 방법 및 장치
US9684305B2 (en) 2015-09-11 2017-06-20 Fuji Xerox Co., Ltd. System and method for mobile robot teleoperation
JP6782903B2 (ja) * 2015-12-25 2020-11-11 学校法人千葉工業大学 自己運動推定システム、自己運動推定システムの制御方法及びプログラム
US11449061B2 (en) * 2016-02-29 2022-09-20 AI Incorporated Obstacle recognition method for autonomous robots
WO2017218234A1 (fr) * 2016-06-15 2017-12-21 Irobot Corporation Systèmes et procédés de commande d'un robot mobile autonome
JP6751603B2 (ja) * 2016-06-24 2020-09-09 株式会社Ihiエアロスペース コンテナターミナルシステム
JP6773471B2 (ja) 2016-07-26 2020-10-21 株式会社豊田中央研究所 自律移動体と環境地図更新装置
JP2018024598A (ja) 2016-08-09 2018-02-15 東洋合成工業株式会社 スルホニウム塩、光酸発生剤、それを含む組成物、及び、デバイスの製造方法
US10962647B2 (en) * 2016-11-30 2021-03-30 Yujin Robot Co., Ltd. Lidar apparatus based on time of flight and moving object
SG10201700299QA (en) * 2017-01-13 2018-08-30 Otsaw Digital Pte Ltd Three-dimensional mapping of an environment
JP2018156538A (ja) 2017-03-21 2018-10-04 カシオ計算機株式会社 自律移動装置、画像処理方法及びプログラム
US10539676B2 (en) * 2017-03-22 2020-01-21 Here Global B.V. Method, apparatus and computer program product for mapping and modeling a three dimensional structure
KR101956447B1 (ko) * 2017-04-20 2019-03-12 한국과학기술원 그래프 구조 기반의 무인체 위치 추정 장치 및 그 방법
US10921816B2 (en) * 2017-04-21 2021-02-16 Korea Advanced Institute Of Science And Technology Method and apparatus for producing map based on hierarchical structure using 2D laser scanner
CN111656135A (zh) * 2017-12-01 2020-09-11 迪普迈普有限公司 基于高清地图的定位优化

Also Published As

Publication number Publication date
JP2020161141A (ja) 2020-10-01
AU2020247141A1 (en) 2021-11-18
EP3946841A4 (fr) 2022-12-28
WO2020197297A1 (fr) 2020-10-01
JP7150773B2 (ja) 2022-10-11
KR102243179B1 (ko) 2021-04-21
KR20200119394A (ko) 2020-10-20
US20200306983A1 (en) 2020-10-01
US11400600B2 (en) 2022-08-02
AU2020247141B2 (en) 2023-05-11

Similar Documents

Publication Publication Date Title
AU2020247141B2 (en) Mobile robot and method of controlling the same
WO2018139796A1 (fr) Robot mobile et procédé de commande associé
WO2021010757A1 (fr) Robot mobile et son procédé de commande
AU2020244635B2 (en) Mobile robot control method
WO2018038552A1 (fr) Robot mobile et procédé de commande associé
WO2021006556A1 (fr) Robot mobile et son procédé de commande
WO2017188706A1 (fr) Robot mobile et procédé de commande de robot mobile
WO2018097574A1 (fr) Robot mobile et procédé de commande de celui-ci
WO2018155999A2 (fr) Robot mobile et son procédé de commande
WO2017188800A1 (fr) Robot mobile et son procédé de commande
WO2021006677A2 (fr) Robot mobile faisant appel à l'intelligence artificielle et son procédé de commande
WO2021006542A1 (fr) Robot mobile faisant appel à l'intelligence artificielle et son procédé de commande
AU2019262482B2 (en) Plurality of autonomous mobile robots and controlling method for the same
AU2019336870A1 (en) Plurality of autonomous mobile robots and controlling method for the same
WO2018074904A1 (fr) Robot mobile et procédé de commande du robot mobile
WO2018117616A1 (fr) Robot mobile
WO2017188708A2 (fr) Robot mobile, système destiné à de multiples robots mobiles et procédé d'apprentissage de carte pour robot mobile
WO2019004742A1 (fr) Système de robot comprenant un robot mobile et un terminal mobile
WO2021006553A1 (fr) Robot mobile et son procédé de commande
WO2022075610A1 (fr) Système de robot mobile
WO2022075616A1 (fr) Système de robot mobile
WO2022075615A1 (fr) Système de robot mobile
WO2021225234A1 (fr) Robot nettoyeur et son procédé de commande
WO2021006693A2 (fr) Robot mobile et son procédé de commande
WO2020050565A1 (fr) Pluralité de robots mobiles autonomes et procédé de commande de ces derniers

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20211027

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20221130

RIC1 Information provided on ipc code assigned before grant

Ipc: G06V 10/80 20220101ALI20221124BHEP

Ipc: G06V 10/75 20220101ALI20221124BHEP

Ipc: G06V 10/44 20220101ALI20221124BHEP

Ipc: G06V 20/10 20220101ALI20221124BHEP

Ipc: G06K 9/62 20220101ALI20221124BHEP

Ipc: G05D 1/02 20200101ALI20221124BHEP

Ipc: G01S 7/48 20060101ALI20221124BHEP

Ipc: G01S 17/89 20200101ALI20221124BHEP

Ipc: G01S 17/86 20200101ALI20221124BHEP

Ipc: G01S 17/58 20060101ALI20221124BHEP

Ipc: G01C 22/00 20060101ALI20221124BHEP

Ipc: G01C 21/20 20060101ALI20221124BHEP

Ipc: B25J 5/00 20060101ALI20221124BHEP

Ipc: B25J 19/04 20060101ALI20221124BHEP

Ipc: B25J 13/08 20060101ALI20221124BHEP

Ipc: A47L 9/28 20060101ALI20221124BHEP

Ipc: B25J 9/16 20060101ALI20221124BHEP

Ipc: B25J 19/02 20060101ALI20221124BHEP

Ipc: B25J 11/00 20060101AFI20221124BHEP