WO2023224295A1 - Mobile robot and mobile robot control method - Google Patents

Mobile robot and mobile robot control method

Info

Publication number
WO2023224295A1
Authority
WO
WIPO (PCT)
Prior art keywords
corner
mobile robot
main body
control unit
terrain information
Prior art date
Application number
PCT/KR2023/006094
Other languages
English (en)
Korean (ko)
Inventor
이헌철
이창현
최가형
Original Assignee
엘지전자 주식회사
금오공과대학교 산학협력단
Priority date
Filing date
Publication date
Application filed by 엘지전자 주식회사 and 금오공과대학교 산학협력단
Publication of WO2023224295A1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1666Avoiding collision or forbidden zones
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/088Controls for manipulators by means of sensing devices, e.g. viewing or touching devices with position, velocity or acceleration sensors
    • B25J13/089Determining the position of the robot with reference to its environment
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/021Optical sensing devices
    • B25J19/022Optical sensing devices using lasers
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J5/00Manipulators mounted on wheels or on carriages
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J5/00Manipulators mounted on wheels or on carriages
    • B25J5/007Manipulators mounted on wheels or on carriages mounted on wheels
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • B25J9/1692Calibration of manipulator
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions

Definitions

  • The present invention relates to a robot vacuum cleaner and a method of controlling the robot cleaner, and more specifically to SLAM driving technology.
  • Robots have been developed for industrial use and have played a part in factory automation.
  • Recently, robots for a wider range of applications have been developed, and household robots that can be used in ordinary homes are also being created.
  • Among these robots, those that can travel on their own are called mobile robots.
  • A representative example of a mobile robot used at home is a robot vacuum cleaner.
  • Various technologies are known for detecting the environment and users around a robot cleaner through various sensors provided in the robot cleaner. Additionally, technologies are known in which a robot cleaner learns and maps the cleaning area on its own and determines the current location on the map.
  • A robot vacuum cleaner that cleans a cleaning area by traveling in a preset manner is known.
  • In the prior art, the robot receives a target direction, senses whether there is an obstacle ahead, and, when an obstacle is present ahead, adjusts at least one of the rotation direction, rotation speed, turning direction, and turning speed so as to avoid the nearest obstacle; such a technology has been disclosed.
  • However, because the robot moves using simple logic according to the location of the recognized obstacle, it is difficult to respond to obstacles that the robot does not recognize or whose direction cannot be determined.
  • Patent Document 1: Korean Patent Publication No. 10-2008-0090925 (publication date: October 19, 2008)
  • Patent Document 2: U.S. Patent No. US7211980B1 (publication date: January 5, 2007)
  • The first task of the present invention is to provide a mobile robot capable of accurate SLAM while reducing the number of sensors in the mobile robot and using only laser-based sensors.
  • The second task of the present invention is to build an accurate map with a minimum number of sensors when a robot vacuum cleaner drives while building a map in a situation where there is no map.
  • The third task of the present invention is to correct the travel of the mobile robot by accurately estimating the current position of the mobile robot at a corner when a map is available.
  • The fourth task of the present invention is to provide a mobile robot that estimates its current location and generates a map with fewer sensing elements, reducing the control burden on the control unit.
  • To achieve these tasks, the present invention includes a main body, a driving unit that moves the main body, a sensing unit that acquires terrain information outside the main body, and a control unit that determines, from the terrain information obtained by the sensing unit, whether the current location of the main body is a corner of the driving area and, when the main body is located at the corner, controls a corner surrounding information acquisition motion in which terrain information around the corner is obtained through the sensing unit at the corner.
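  • As a non-authoritative sketch of the control flow just described, the following Python fragment shows one way the corner determination and the corner surrounding information acquisition motion could be sequenced; the mode names and inputs are illustrative assumptions, not the claimed implementation.

```python
from enum import Enum, auto

class Mode(Enum):
    TRAVEL = auto()            # normal driving / wall-following
    CORNER_SCAN = auto()       # corner surrounding information acquisition motion

def next_mode(mode: Mode, at_corner: bool, scan_done: bool) -> Mode:
    """Hypothetical mode transition for the control unit described above.

    at_corner: result of the corner determination from the sensing unit's terrain data.
    scan_done: True once terrain information around the corner has been collected.
    """
    if mode is Mode.TRAVEL and at_corner:
        return Mode.CORNER_SCAN
    if mode is Mode.CORNER_SCAN and scan_done:
        return Mode.TRAVEL
    return mode

# Example: the robot reaches a corner, scans its surroundings, then resumes travel.
m = Mode.TRAVEL
m = next_mode(m, at_corner=True, scan_done=False)   # -> Mode.CORNER_SCAN
m = next_mode(m, at_corner=True, scan_done=True)    # -> Mode.TRAVEL
```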
  • the corner surrounding information acquisition motion may acquire external terrain information through the sensing unit while the main body rotates at the corner.
  • the main body rotates in the first direction at the corner and then rotates in the second direction opposite to the first direction to obtain external terrain information through the sensing unit.
  • the first direction and the second direction may be perpendicular to the moving direction of the main body.
  • the second direction may coincide with the direction in which the main body travels after passing the corner.
  • the sensing unit may include a laser sensor that acquires terrain information within a certain angle based on the moving direction of the main body.
  • the corner surrounding information acquisition motion may obtain the terrain information by extracting the distance to feature points of the wall within a certain distance and within a certain angle from the corner.
  • the control unit may estimate the inclination of the wall based on the distance between the feature points of the wall and update the inclination of the wall in the map.
  • the control unit may estimate the current location of the main body based on the distance from the feature points of the wall.
  • the control unit may estimate the inclination of the wall based on the distance between the feature points of the wall and determine the heading direction of the main body based on the inclination of the wall.
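  • A minimal illustration of estimating the wall inclination from extracted feature points, assuming a simple least-squares line fit (the patent does not specify the estimator); the heading is then taken parallel to the wall.

```python
import math

def wall_inclination(points):
    """Estimate the inclination (angle) of a wall from laser feature points.

    points: list of (x, y) wall feature points in the robot frame, e.g. extracted
    within a certain distance and angle from the corner. A simple least-squares
    line fit is used here purely as an illustration.
    """
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    return math.atan2(sxy, sxx)          # wall angle relative to the robot's x-axis

def heading_from_wall(points):
    """Hypothetical heading choice: align the travel direction parallel to the wall."""
    return wall_inclination(points)

# Example: points along a wall tilted ~5 degrees -> inclination ~5 degrees.
pts = [(d, math.tan(math.radians(5)) * d) for d in (0.5, 1.0, 1.5, 2.0)]
print(round(math.degrees(wall_inclination(pts)), 1))   # 5.0
```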
  • the control unit may estimate the current location of the main body based on the terrain information around the corner obtained from the corner surrounding information acquisition motion.
  • the present invention further includes a storage unit for storing data, and the control unit can update the map based on the terrain information around the corner obtained from the motion for obtaining information around the corner.
  • the control unit may generate a map based on the terrain information around a plurality of corners and the location information of the plurality of corners obtained from the corner surrounding information acquisition motion.
  • the control unit may execute the corner surrounding information acquisition motion while the main body is wall-following.
  • The present invention may include a terrain information acquisition step in which the sensing unit acquires surrounding terrain information, a corner determination step in which it is determined whether the current location of the main body is a corner of the driving area, and, when the current location of the main body is the corner, a corner-surrounding terrain information acquisition step of acquiring terrain information around the corner at the corner.
  • external terrain information may be acquired through the sensing unit while the main body rotates at the corner.
  • the present invention may further include a current position estimation step of estimating the current position of the main body based on topographical information around the corner.
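  • As an illustration of such a position estimation from corner-surrounding terrain information, the following sketch assumes a single corner landmark whose map position is already known; it is an illustration only, not the estimation method claimed here.

```python
import math

def pose_from_corner(map_corner, observed_corner, heading):
    """Estimate the robot's position from one known corner landmark.

    map_corner:      (x, y) of the corner in the map frame (from earlier mapping).
    observed_corner: (x, y) of the same corner measured in the robot frame during
                     the corner surrounding information acquisition motion.
    heading:         robot heading in the map frame (rad), e.g. from the wall inclination.
    """
    c, s = math.cos(heading), math.sin(heading)
    # Rotate the observation into the map frame and subtract it from the landmark.
    ox = c * observed_corner[0] - s * observed_corner[1]
    oy = s * observed_corner[0] + c * observed_corner[1]
    return (map_corner[0] - ox, map_corner[1] - oy)

# Example: corner stored at (3, 2); robot sees it 1 m ahead while heading along +x.
print(pose_from_corner((3.0, 2.0), (1.0, 0.0), 0.0))   # (2.0, 2.0)
```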
  • the present invention may further include a map updating step of updating the map based on topographical information around the corner.
  • the distance to feature points of the wall within a certain distance and within a certain angle from the corner may be extracted.
  • The present invention enables SLAM with only one to three laser-based obstacle detection sensors 171 installed on the main body, which reduces the manufacturing cost of the mobile robot, and accurately estimates the current position of the mobile robot at a corner, which has the advantage of accurate and fast driving.
  • The present invention has the advantage of providing an accurate map with a minimum number of sensors when a robot vacuum cleaner drives while building a map in a situation where there is no map, and of reducing the time required to build the map.
  • Because the mobile robot acquires information around the corner while rotating 270 degrees at the corner, the present invention has the advantage of reducing cleaning time and sensing time compared to rotating 360 degrees, and since the heading at which the mobile robot completes the rotation coincides with the direction in which it travels next, cleaning efficiency is also increased.
  • The present invention has the advantage that fewer sensing elements are needed for the mobile robot to estimate its current location and generate a map, and that the control burden on the control unit is reduced.
  • Figure 1 is a perspective view showing a mobile robot and a charging base for charging the mobile robot according to another embodiment of the present invention.
  • Figure 2 is a block diagram showing the control relationship between main components of a mobile robot according to an embodiment of the present invention.
  • Figure 3 is a flowchart showing a control method of a mobile robot according to an embodiment of the present invention.
  • Figures 4 to 6 are diagrams referenced in the description of the control method of Figure 3.
  • Figure 7 is a diagram illustrating the concept of updating the location of a mobile robot through terrain information around a corner.
  • Figure 8 is a diagram showing a method of controlling a mobile robot according to another embodiment of the present invention.
  • Figure 9 is a diagram explaining the loop closing method of the present invention.
  • The suffixes "module" and "part" for components used in the following description are given simply for ease of writing this specification and do not in themselves carry any particularly important meaning or role. Accordingly, the terms "module" and "unit" may be used interchangeably.
  • the mobile robot 100 refers to a robot that can move on its own using wheels, etc., and may be a home helper robot or a robot vacuum cleaner.
  • a robot cleaner having a cleaning function among the mobile robots 100 will be described as an example, but the present invention is not limited thereto.
  • the mobile robot 100 refers to a robot that can move on its own using wheels, etc. Accordingly, the mobile robot 100 may be a self-moving guide robot, cleaning robot, entertainment robot, home helper robot, security robot, etc., and the present invention is not limited to the type of mobile robot 100.
  • Figure 1 shows a mobile robot 100, a cleaning robot, as an embodiment of the present invention.
  • the mobile robot 100 is equipped with a cleaning device 155 such as a brush and can clean a specific space while moving on its own.
  • the mobile robot 100 includes a sensing unit 170 (170:171, 175) capable of detecting information about the surroundings.
  • the mobile robot 100 effectively combines vision-based location recognition using a camera and LiDAR-based location recognition technology using a laser to perform location recognition and map generation that are robust to environmental changes such as changes in illumination and product location changes.
  • the mobile robot 100 can perform location recognition and map creation using LIDAR-based location recognition technology using a laser.
  • the image acquisition unit 120 captures images of the driving area and may include one or more camera sensors that acquire images of the outside of the main body 110.
  • the image acquisition unit 120 may include a camera module.
  • the camera module may include a digital camera.
  • a digital camera may include an image sensor (e.g., a CMOS image sensor) composed of at least one optical lens and a plurality of photodiodes (e.g., pixels) that form an image from light passing through the optical lens, and a digital signal processor (DSP) that constructs an image based on the signals output from the photodiodes.
  • a digital signal processor is capable of generating not only still images, but also moving images composed of frames composed of still images.
  • the image acquisition unit 120 is equipped with a front camera sensor to acquire an image in front of the main body 110, but the location and shooting range of the image acquisition unit 120 are not necessarily limited to this.
  • the mobile robot 100 is equipped only with a camera sensor that acquires images of the front within the driving area, and can perform vision-based location recognition and driving.
  • the image acquisition unit 120 of the mobile robot 100 may include a camera sensor (not shown) disposed at an angle with respect to one surface of the main body 110 and configured to capture both the front and the top. In other words, both the front and the top can be captured with one camera sensor.
  • the control unit 140 may separate the front image and the upper image from the image captured by the camera based on the angle of view.
  • the separated front image can be used for vision-based object recognition, like the image acquired from the front camera sensor. Additionally, the separated upper image, like the image acquired from the upper camera sensor, can be used for vision-based location recognition and driving.
  • the mobile robot 100 can perform a vision slam to recognize the current location by comparing surrounding images with pre-stored image-based information or by comparing acquired images.
  • the image acquisition unit 120 may also include a plurality of front camera sensors and/or upper camera sensors.
  • the image acquisition unit 120 may be provided with a plurality of camera sensors (not shown) configured to capture both the front and the top.
  • cameras are installed in some parts (e.g., front, rear, bottom) of the mobile robot 100, and images can be continuously acquired during cleaning. Multiple such cameras may be installed in each area for filming efficiency.
  • the image captured by the camera can be used to recognize the type of material such as dust, hair, floor, etc. present in the space, and to determine whether or when to clean.
  • the front camera sensor can capture situations of obstacles or cleaning areas in front of the mobile robot 100 in its traveling direction.
  • the image acquisition unit 120 can acquire a plurality of images by continuously photographing the surroundings of the main body 110, and the plurality of acquired images can be stored in the storage unit.
  • the mobile robot 100 can increase the accuracy of obstacle recognition by using a plurality of images, or by selecting one or more images from among the plurality of images and using effective data.
  • the sensing unit 170 may include a LIDAR sensor 175 that acquires topographical information on the outside of the main body 110 using a laser.
  • the LiDAR sensor 175 outputs a laser and provides information such as the distance, position, direction, and material of the object that reflected the laser, and can obtain topographic information of the driving area.
  • the mobile robot 100 can obtain 360-degree topography information using the lidar sensor 175.
  • the mobile robot 100 can determine the distance, location, and direction of objects sensed by the LiDAR sensor 175, and generate a map while driving accordingly.
  • the mobile robot 100 can obtain topographic information of the driving area by analyzing the laser reception pattern, such as the time difference or signal strength of the laser reflected and received from the outside. Additionally, the mobile robot 100 may generate a map using terrain information acquired through the LiDAR sensor 175.
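  • As a simple numerical illustration of the time-difference analysis mentioned above, the round-trip time of the laser can be converted to a range as follows; the constants and interface are assumptions, not taken from the patent.

```python
# Minimal time-of-flight range calculation.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to the reflecting object from the laser's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A 20 ns round trip corresponds to roughly 3 m.
print(round(range_from_time_of_flight(20e-9), 2))  # ~3.0
```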
  • the mobile robot 100 can perform a LiDAR SLAM that recognizes its current location by comparing the surrounding terrain information acquired at the current location through the LiDAR sensor 175 with previously stored LiDAR-based terrain information, or by comparing acquired pieces of terrain information with each other.
  • the mobile robot 100 effectively combines vision-based location recognition using a camera and LiDAR-based location recognition technology using a laser to provide a location that is robust to environmental changes such as changes in illumination and product location changes. Recognition and map creation can be performed.
  • the sensing unit 170 may include sensors 171 that sense various data related to the operation and state of the mobile robot 100.
  • the sensing unit 170 may include an obstacle detection sensor 171 that detects an obstacle in front. Additionally, the sensing unit 170 may further include a cliff detection sensor that detects the presence of a cliff on the floor within the driving area and a lower camera sensor that acquires an image of the floor.
  • the obstacle detection sensor 171 may include a plurality of sensors installed at regular intervals on the outer peripheral surface of the mobile robot 100.
  • the obstacle detection sensor 171 may include a laser sensor, an infrared sensor, an ultrasonic sensor, an RF sensor, a geomagnetic sensor, and a Position Sensitive Device (PSD) sensor.
  • the obstacle detection sensor 171 may include a laser sensor that acquires terrain information within a certain angle based on the moving direction of the main body 110.
  • the location and type of sensors included in the obstacle detection sensor 171 may vary depending on the model of the mobile robot 100, and the obstacle detection sensor 171 may include more diverse sensors.
  • the obstacle detection sensor 171 is a sensor that detects the distance to an indoor wall or obstacle.
  • the present invention is not limited to its type, but will be described below by taking an ultrasonic sensor as an example.
  • the obstacle detection sensor 171 detects objects, especially obstacles, present in the driving (movement) direction of the mobile robot 100 and transmits obstacle information to the control unit 140. That is, the obstacle detection sensor 171 can detect protrusions, household fixtures, furniture, walls, wall corners, etc. present on the movement path of the mobile robot 100, in front of it, or at its side, and transmit the information to the control unit.
  • This mobile robot 100 is equipped with a display (not shown) and can display a predetermined image such as a user interface screen. Additionally, the display is composed of a touch screen and can be used as an input means.
  • the mobile robot 100 may receive user input through touch, voice input, etc., and display information about objects and places corresponding to the user input on the display screen.
  • This mobile robot 100 can perform a given task, that is, cleaning, while traveling in a specific space.
  • the mobile robot 100 can perform autonomous driving in which it moves by creating a path to a predetermined destination, or tracking driving in which it moves while following a person or another robot.
  • the mobile robot 100 detects and avoids obstacles while moving based on the image data acquired through the image acquisition unit 120 and the sensing data acquired by the sensing unit 170. You can.
  • the mobile robot 100 in FIG. 1 is a cleaning service that can provide cleaning services in various spaces, for example, airports, hotels, supermarkets, clothing stores, logistics, hospitals, etc., especially large-area spaces such as commercial spaces. It may be a robot 100.
  • the mobile robot 100 may be linked to a server (not shown) that can manage and control it.
  • the server can remotely monitor and control the status of a plurality of robots 100 and provide effective services.
  • the mobile robot 100 and the server may be equipped with a communication means (not shown) that supports one or more communication standards and can communicate with each other. Additionally, the mobile robot 100 and the server can communicate with a PC, a mobile terminal, and other external servers. For example, the mobile robot 100 and the server may communicate using Message Queuing Telemetry Transport (MQTT) or HyperText Transfer Protocol (HTTP). Additionally, the mobile robot 100 and the server can communicate with a PC, a mobile terminal, or another external server using HTTP or MQTT.
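  • For illustration, a status report could be published over MQTT with the widely used paho-mqtt client as sketched below; the broker address, topic name, and payload fields are hypothetical, and the patent only names MQTT and HTTP as example protocols.

```python
import json
import paho.mqtt.publish as publish  # third-party MQTT client library (paho-mqtt)

# Hypothetical broker and topic; the robot reports its cleaning status to the server.
publish.single(
    topic="robot/100/status",
    payload=json.dumps({"state": "cleaning", "battery": 87}),
    hostname="broker.example.com",
    port=1883,
)
```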
  • the mobile robot 100 and the server support two or more communication standards and can use the optimal communication standard depending on the type of communication data and the type of device participating in communication.
  • the server is implemented as a cloud server, so users can use the data stored on the server and the functions and services provided by the server through various devices such as PCs and mobile terminals.
  • the user can check or control information about the mobile robot 100 in the robot system through a PC, mobile terminal, etc.
  • a 'user' refers to a person who uses a service provided by at least one robot, such as an individual customer who purchases or rents a robot and uses it at home, etc., and a company manager who provides services to employees or customers using a robot. This may include employees and customers who use the services provided by these companies. Therefore, 'users' may include individual customers (Business to Consumer: B2C) and corporate customers (Business to Business: B2B).
  • the user can monitor the status and location of the mobile robot 100 and manage content and work schedule through a PC, mobile terminal, etc. Meanwhile, the server may store and manage information received from the mobile robot 100 and other devices.
  • the mobile robot 100 and the server may be equipped with a communication means (not shown) that supports one or more communication standards and can communicate with each other.
  • the mobile robot 100 may transmit data related to space, objects, and usage to the server.
  • the space and object-related data may be data related to the recognition of spaces and objects recognized by the robot 100, or image data of the spaces and objects acquired by the image acquisition unit 120.
  • the mobile robot 100 and the server may include an artificial neural network (ANN), in the form of software or hardware, trained to recognize at least one of the properties of objects such as users, voices, spatial properties, and obstacles.
  • the robot 100 and the server may include a deep neural network such as a CNN (Convolutional Neural Network), RNN (Recurrent Neural Network), or DBN (Deep Belief Network) trained through deep learning.
  • the control unit 140 (see 140 in FIG. 2) of the robot 100 may be equipped with a deep neural network (DNN) structure, such as a convolutional neural network (CNN).
  • the server can learn a deep neural network (DNN) based on data received from the mobile robot 100, data input by the user, etc., and then transmit the updated deep neural network (DNN) structure data to the robot 100. Accordingly, the deep neural network (DNN) structure of artificial intelligence provided in the mobile robot 100 can be updated.
  • usage-related data is data acquired according to the use of a certain product, for example, the robot 100, and may include usage history data, sensing data obtained from the sensing unit 170, and the like.
  • the learned deep neural network structure can receive input data for recognition, recognize the attributes of people, objects, and spaces included in the input data, and output the results.
  • the learned deep neural network structure can receive input data for recognition, analyze and learn data related to the usage of the mobile robot 100, and recognize usage patterns, usage environments, and the like.
  • data related to space, objects, and usage may be transmitted to the server through the communication unit (see 190 in FIG. 2).
  • the server may learn a deep neural network (DNN) based on the received data and then transmit the updated deep neural network (DNN) structure data to the mobile robot 100 to update it.
  • the mobile robot 100 can become increasingly smarter and provide a user experience (UX) that evolves as it is used.
  • the robot 100 and the server 10 can also use external information.
  • the server 10 may provide an excellent user experience by comprehensively using external information obtained from other linked service servers 20 and 30.
  • the mobile robot 100 and/or the server can perform voice recognition, so that the user's voice can be used as an input for controlling the robot 100.
  • the mobile robot 100 can provide more diverse and active control functions to the user by actively providing information first or outputting a voice recommending a function or service.
  • FIG. 2 is a block diagram showing the control relationship between the main components of the mobile robot 100 according to an embodiment of the present invention.
  • the block diagram of FIG. 2 is applicable to the mobile robot 100 of FIG. 1 and will be described below along with the configuration of the mobile robot 100 of FIG. 1.
  • the mobile robot 100 includes a traveling unit 160 that moves the main body 110.
  • the traveling unit 160 includes at least one driving wheel 136 that moves the main body 110.
  • the traveling unit 160 is connected to the driving wheel 136 and includes a driving motor (not shown) that rotates the driving wheel.
  • the driving wheels 136 may be provided on the left and right sides of the main body 110, respectively, and are hereinafter referred to as left wheels (L) and right wheels (R), respectively.
  • the left wheel (L) and right wheel (R) may be driven by a single drive motor, but if necessary, a left wheel drive motor for driving the left wheel (L) and a right wheel drive motor for driving the right wheel (R) may each be provided.
  • the driving direction of the main body 110 can be switched to the left or right by making a difference in the rotation speed of the left wheel (L) and right wheel (R).
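  • The steering principle described above can be illustrated with standard differential-drive kinematics; the wheel-base value in the example is an assumption.

```python
def body_velocity(v_left, v_right, wheel_base):
    """Body linear and angular velocity of a differential-drive robot.

    v_left, v_right: left/right wheel linear speeds (m/s).
    wheel_base:      distance between the left and right wheels (m), assumed value.
    A speed difference between the wheels turns the main body, as described above.
    """
    v = (v_right + v_left) / 2.0             # forward speed
    omega = (v_right - v_left) / wheel_base  # positive = turn left (counterclockwise)
    return v, omega

# Example: right wheel faster than left -> the robot curves to the left.
print(body_velocity(0.20, 0.30, wheel_base=0.25))  # (0.25, 0.4)
```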
  • the mobile robot 100 includes a service unit 150 to provide a predetermined service.
  • The present invention is illustrated by taking as an example a case in which the service unit 150 performs cleaning work, but the present invention is not limited thereto.
  • the service unit 150 may be equipped to provide household services such as cleaning (sweeping, vacuuming, mopping, etc.), dishwashing, cooking, laundry, and garbage disposal to the user.
  • the service unit 150 may perform a security function that detects external intruders or dangerous situations in the surrounding area.
  • the mobile robot 100 can clean the floor by the service unit 150 while moving around the driving area.
  • the service unit 150 may include a suction device for sucking in foreign substances, brushes 135 and 155, a dust bin (not shown) for storing foreign substances collected by the suction device or brushes, and/or a mopping part (not shown) for mopping.
  • An intake port through which air is sucked may be formed on the bottom of the main body 110 of the mobile robot 100 of FIG. 1, and a suction device (not shown) that provides suction force so that air can be sucked in through the intake port, together with a dust bin (not shown) that collects dust sucked in with the air through the intake port, may be provided inside the main body 110.
  • the main body 110 may include a case 111 that forms a space in which various parts constituting the mobile robot 100 are accommodated.
  • An opening for inserting and removing the dust bin may be formed in the case 111, and a dust bin cover 112 that opens and closes the opening may be provided to be rotatable with respect to the case 111.
  • a roll-type main brush having brushes exposed through the suction port, and an auxiliary brush 155 located on the front side of the bottom of the main body 110 and having a brush composed of a plurality of radially extending wings may be provided. Dust is separated from the floor in the driving area by the rotation of these brushes 155, and the dust separated from the floor is sucked in through the intake port and collected in the dust bin.
  • the battery supplies power necessary for not only the drive motor but also the overall operation of the mobile robot 100.
  • the mobile robot 100 can return to the charging station 200 for charging, and during this return driving, the mobile robot 100 can detect the position of the charging station 200 by itself.
  • the charging base 200 may include a signal transmitting unit (not shown) that transmits a predetermined return signal.
  • the return signal may be an ultrasonic signal or an infrared signal, but is not necessarily limited thereto.
  • the mobile robot 100 of FIG. 1 may include a signal detection unit (not shown) that receives a return signal.
  • the charging base 200 transmits an infrared signal through a signal transmission unit, and the signal detection unit may include an infrared sensor that detects the infrared signal.
  • the mobile robot 100 moves to the location of the charging station 200 and docks with the charging station 200 according to the infrared signal transmitted from the charging station 200. By this docking, charging is performed between the charging terminal 133 of the mobile robot 100 and the charging terminal 210 of the charging stand 200.
  • the mobile robot 100 may include a sensing unit 170 that senses internal/external information of the mobile robot 100.
  • the sensing unit 170 may include one or more sensors 171 and 175 that detect various types of information about the driving area, and an image acquisition unit 120 that acquires image information about the driving area.
  • the image acquisition unit 120 may be separately provided outside the sensing unit 170.
  • the mobile robot 100 can map the driving area through information detected by the sensing unit 170. For example, the mobile robot 100 may perform vision-based location recognition and map generation based on the ceiling image of the driving area acquired by the image acquisition unit 120. Additionally, the mobile robot 100 can perform location recognition and map creation based on a Light Detection And Ranging (LiDAR) sensor 175 that uses a laser.
  • the mobile robot 100 effectively combines vision-based location recognition using a camera and LiDAR-based location recognition technology using a laser to provide location recognition that is robust to environmental changes such as changes in illumination and product location changes. and map creation can be performed.
  • the image acquisition unit 120 captures images of the driving area and may include one or more camera sensors that acquire images of the outside of the main body 110.
  • the image acquisition unit 120 may include a camera module.
  • the camera module may include a digital camera.
  • a digital camera may include an image sensor (e.g., a CMOS image sensor) composed of at least one optical lens and a plurality of photodiodes (e.g., pixels) that form an image from light passing through the optical lens, and a digital signal processor (DSP) that constructs an image based on the signals output from the photodiodes.
  • a digital signal processor is capable of generating not only still images, but also moving images composed of frames composed of still images.
  • the image acquisition unit 120 may include a front camera sensor 120a, which is provided to acquire an image of the front of the main body 110, and an upper camera sensor 120b, which is provided on the upper surface of the main body 110 and captures an image of the ceiling within the driving area.
  • the location and shooting range of the image acquisition unit 120 are not necessarily limited thereto.
  • the mobile robot 100 is equipped only with an upper camera sensor 120b that acquires an image of the ceiling within the driving area, and can perform vision-based location recognition and driving.
  • the image acquisition unit 120 of the mobile robot 100 may include a camera sensor (not shown) disposed at an angle with respect to one surface of the main body 110 and configured to capture both the front and the top. In other words, both the front and the top can be captured with one camera sensor.
  • the control unit 140 may separate the front image and the upper image from the image captured by the camera based on the angle of view.
  • the separated front image like the image acquired from the front camera sensor 120a, can be used for vision-based object recognition.
  • the separated upper image like the image acquired from the upper camera sensor 120b, can be used for vision-based location recognition and driving.
  • the mobile robot 100 can perform a vision slam to recognize the current location by comparing surrounding images with pre-stored image-based information or by comparing acquired images.
  • the image acquisition unit 120 may be provided with a plurality of front camera sensors 120a and/or upper camera sensors 120b. Alternatively, the image acquisition unit 120 may be provided with a plurality of camera sensors (not shown) configured to capture both the front and the top.
  • cameras are installed in some parts (e.g., front, rear, bottom) of the mobile robot 100, and images can be continuously acquired during cleaning. Multiple such cameras may be installed in each area for filming efficiency.
  • the image captured by the camera can be used to recognize the type of material such as dust, hair, floor, etc. present in the space, whether it has been cleaned, or to check the time of cleaning.
  • the front camera sensor 120a can capture the situation of an obstacle or cleaning area in front of the mobile robot 100 in its traveling direction.
  • the image acquisition unit 120 can acquire a plurality of images by continuously photographing the surroundings of the main body 110, and the acquired plurality of images can be stored in the storage unit 130.
  • the mobile robot 100 can increase the accuracy of obstacle recognition by using a plurality of images, or by selecting one or more images from among the plurality of images and using effective data.
  • the sensing unit 170 may include a LIDAR sensor 175 that acquires topographical information on the outside of the main body 110 using a laser.
  • the LiDAR sensor 175 outputs a laser and provides information such as the distance, position, direction, and material of the object that reflected the laser, and can obtain topographic information of the driving area.
  • the mobile robot 100 can obtain 360-degree topography information using the lidar sensor 175.
  • the mobile robot 100 can generate a map by determining the distance, location, and direction of objects sensed by the LiDAR sensor 175.
  • the mobile robot 100 can obtain topographic information of the driving area by analyzing the laser reception pattern, such as the time difference or signal strength of the laser reflected and received from the outside. Additionally, the mobile robot 100 may generate a map using terrain information acquired through the LiDAR sensor 175.
  • the mobile robot 100 may perform a LiDAR slam to determine the direction of movement by analyzing surrounding terrain information acquired at the current location through the LiDAR sensor 175.
  • the mobile robot 100 can effectively recognize obstacles through vision-based location recognition using a camera, LiDAR-based location recognition technology using a laser, and an ultrasonic sensor, and can perform map creation by extracting an optimal movement direction with little variation.
  • the sensing unit 170 may include sensors 171, 172, and 179 that sense various data related to the operation and state of the mobile robot 100.
  • the sensing unit 170 may include an obstacle detection sensor 171 that detects an obstacle in front. Additionally, the sensing unit 170 may further include a cliff detection sensor 172 that detects the presence of a cliff on the floor within the driving area and a lower camera sensor 179 that acquires an image of the floor.
  • the obstacle detection sensor 171 may include a plurality of sensors installed at regular intervals on the outer peripheral surface of the mobile robot 100.
  • the obstacle detection sensor 171 may include an infrared sensor, an ultrasonic sensor, an RF sensor, a geomagnetic sensor, a Position Sensitive Device (PSD) sensor, etc.
  • the location and type of sensors included in the obstacle detection sensor 171 may vary depending on the model of the mobile robot 100, and the obstacle detection sensor 171 may include more diverse sensors.
  • the obstacle detection sensor 171 is a sensor that detects the distance to an indoor wall or obstacle.
  • the present invention is not limited to its type, but will be described below by taking an ultrasonic sensor as an example.
  • the obstacle detection sensor 171 detects objects, especially obstacles, present in the driving (movement) direction of the mobile robot 100 and transmits obstacle information to the control unit 140. That is, the obstacle detection sensor 171 can detect protrusions, household fixtures, furniture, walls, wall corners, etc. present on the movement path of the mobile robot 100, in front of it, or at its side, and transmit the information to the control unit 140.
  • the control unit 140 detects the location of an obstacle based on at least one signal received through the ultrasonic sensor, and controls the movement of the mobile robot 100 according to the detected obstacle location so as to provide an optimal travel route when generating a map.
  • the obstacle detection sensor 171 provided on the outer surface of the case 111 may be configured to include a transmitting unit and a receiving unit.
  • an ultrasonic sensor may be provided with at least one transmitting unit and at least one receiving unit staggered from each other. Accordingly, signals can be radiated at various angles and signals reflected by obstacles can be received at various angles.
  • the signal received from the obstacle detection sensor 171 may undergo signal processing such as amplification and filtering, and then the distance and direction to the obstacle may be calculated.
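  • A minimal sketch of that distance and direction calculation, assuming time-of-flight ranging and a strongest-echo direction estimate across the staggered receivers; the actual signal processing is not specified in the patent.

```python
SPEED_OF_SOUND = 343.0  # m/s at ~20 C, assumed

def ultrasonic_distance(echo_delay_s: float) -> float:
    """Obstacle distance from the ultrasonic round-trip echo delay."""
    return SPEED_OF_SOUND * echo_delay_s / 2.0

def obstacle_direction(receiver_angles, receiver_amplitudes):
    """Rough direction estimate: the angle of the receiver with the strongest echo.

    receiver_angles:     mounting angles (rad) of the staggered receiving units.
    receiver_amplitudes: filtered/amplified echo amplitudes for each receiver.
    """
    strongest = max(range(len(receiver_amplitudes)), key=lambda i: receiver_amplitudes[i])
    return receiver_angles[strongest]

# Example: a 5.8 ms echo is roughly 1 m away, strongest on the receiver at -0.26 rad.
print(round(ultrasonic_distance(5.8e-3), 2))                    # ~0.99
print(obstacle_direction([-0.26, 0.0, 0.26], [0.9, 0.4, 0.2]))  # -0.26
```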
  • the sensing unit 170 may further include a driving detection sensor that detects the driving motion of the mobile robot 100 according to the driving of the main body 110 and outputs motion information.
  • As the driving detection sensor, a gyro sensor, a wheel sensor, an acceleration sensor, etc. can be used.
  • the mobile robot 100 may further include a battery detection unit (not shown) that detects the charging state of the battery and transmits the detection result to the control unit 140.
  • the battery is connected to the battery detection unit, and the remaining battery capacity and charging status are transmitted to the control unit 140.
  • the remaining battery capacity may be displayed on the screen of the output unit (not shown).
  • the mobile robot 100 includes a manipulation unit 137 that can turn on/off or input various commands. Various control commands necessary for the overall operation of the mobile robot 100 can be input through the manipulation unit 137. Additionally, the mobile robot 100 may include an output unit (not shown) to display reservation information, battery status, operation mode, operation status, error status, etc.
  • the mobile robot 100 includes a control unit 140 that processes and determines various information, such as recognizing the current location, and a storage unit 130 that stores various data. Additionally, the mobile robot 100 may further include a communication unit 190 that transmits and receives data with other devices.
  • the external terminal has an application for controlling the mobile robot 100, displays a map of the driving area to be cleaned by the mobile robot 100 through execution of the application, and can designate a specific area on the map to be cleaned.
  • the user terminal can communicate with the mobile robot 100 and display the current location of the mobile robot 100 along with a map, and information about a plurality of areas can be displayed. Additionally, the user terminal updates and displays the location of the mobile robot 100 as it travels.
  • the control unit 140 controls the overall operation of the mobile robot 100 by controlling the sensing unit 170, the manipulation unit 137, and the traveling unit 160 that constitute the mobile robot 100.
  • the storage unit 130 records various information necessary for controlling the mobile robot 100 and may include a volatile or non-volatile recording medium.
  • a recording medium stores data that can be read by a microprocessor, and is not limited to its type or implementation method.
  • a map of the driving area may be stored in the storage unit 130.
  • the map may be input by a user terminal or server that can exchange information with the mobile robot 100 through wired or wireless communication, or may be generated by the mobile robot 100 through self-learning.
  • the map may display the locations of rooms within the driving area. Additionally, the current location of the mobile robot 100 may be displayed on the map, and the current location of the mobile robot 100 on the map may be updated during the driving process.
  • the external terminal stores the same map as the map stored in the storage unit 130.
  • the storage unit 130 may store cleaning history information. Such cleaning history information may be generated each time cleaning is performed.
  • the map of the driving area stored in the storage unit 130 may be a navigation map used for driving during cleaning, a simultaneous localization and mapping (SLAM) map used for location recognition, a learning map used for learning and cleaning by storing the corresponding information when an obstacle is encountered, a global location map used for global location recognition, and an obstacle recognition map in which information about recognized obstacles is recorded.
  • maps can be stored and managed separately in the storage unit 130 according to purpose, but the maps may not be clearly divided by purpose.
  • a plurality of information may be stored in one map so that it can be used for at least one purpose.
  • the control unit 140 may include a driving control module 141, a location recognition module 142, a map generation module 143, and an obstacle recognition module 144.
  • the travel control module 141 controls the travel of the mobile robot 100 and controls the driving of the travel unit 160 according to travel settings. Additionally, the travel control module 141 can determine the travel path of the mobile robot 100 based on the operation of the travel unit 160. For example, the travel control module 141 can determine the current or past movement speed and travel distance of the mobile robot 100 based on the rotation speed of the driving wheels, and can also identify the current or past direction-change process based on the rotation direction of each driving wheel. Based on the driving information of the mobile robot 100 identified in this way, the location of the mobile robot 100 on the map may be updated.
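  • The wheel-based tracking of speed, distance, and direction changes described above corresponds to standard wheel odometry; a minimal sketch follows, with the wheel-base value assumed.

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """Dead-reckoning pose update from wheel travel since the last update.

    d_left, d_right: distance travelled by each wheel (m), e.g. encoder counts
                     converted via the wheel circumference.
    """
    d = (d_left + d_right) / 2.0                 # distance travelled by the body
    d_theta = (d_right - d_left) / wheel_base    # change in heading
    x += d * math.cos(theta + d_theta / 2.0)
    y += d * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# Example: equal wheel travel -> straight motion, heading unchanged.
print(update_pose(0.0, 0.0, 0.0, 0.10, 0.10, wheel_base=0.25))  # (0.1, 0.0, 0.0)
```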
  • the map generation module 143 can generate a map of the driving area.
  • the map creation module 143 can create a map by processing the image acquired through the image acquisition unit 120. For example, a map corresponding to the driving area and a cleaning map corresponding to the cleaning area can be created.
  • the map generation module 143 can process images acquired through the image acquisition unit 120 at each location and associate them with a map to recognize the global location.
  • the map creation module 143 can create a map based on the information acquired through the LiDAR sensor 175 and recognize the location based on the information acquired through the LiDAR sensor 175 at each location.
  • the map creation module 143 can create a map based on the information acquired through the obstacle detection sensor 171 and recognize the location based on the information acquired through the obstacle detection sensor 171 at each location.
  • the map generation module 143 can create a map and perform location recognition based on information acquired through the image acquisition unit 120 and the LiDAR sensor 175.
  • the location recognition module 142 estimates and recognizes the current location.
  • the location recognition module 142 uses the image information of the image acquisition unit 120 to identify the location in conjunction with the map generation module 143, so that the current location can be estimated and recognized even when the location of the mobile robot 100 changes suddenly.
  • the mobile robot 100 is capable of recognizing its location during continuous driving through the location recognition module 142, and can also learn the map and estimate its current location through the travel control module 141, the map generation module 143, and the obstacle recognition module 144.
  • the mobile robot 100 acquires an image through the image acquisition unit 120 at an unknown current location. Through the image, various features such as lights located on the ceiling, edges, corners, blobs, and ridges are identified.
  • control unit 140 can divide driving areas and create a map consisting of a plurality of areas, or recognize the current location of the main body 110 based on a pre-stored map.
  • control unit 140 can fuse information acquired through the image acquisition unit 120 and the LiDAR sensor 175 to create a map and perform location recognition.
  • control unit 140 may transmit the generated map to an external terminal, server, etc. through the communication unit 190. Additionally, as described above, when a map is received from an external terminal, server, etc., the control unit 140 can store it in the storage unit 130.
  • the control unit 140 transmits the updated information to an external terminal so that the maps stored in the external terminal and the mobile robot 100 are the same.
  • This is so that the mobile robot 100 can clean a designated area in response to a cleaning command from the mobile terminal, and so that the current location of the mobile robot 100 can be displayed on the external terminal.
  • the map divides the cleaning area into a plurality of areas, includes a connecting passage connecting the plurality of areas, and may include information about obstacles within the area.
  • when a cleaning command is input, the control unit 140 determines whether the location on the map matches the current location of the mobile robot 100. The cleaning command can be input from a remote control, a control panel, or an external terminal.
  • if the current location does not match the location on the map or the current location cannot be confirmed, the control unit 140 recognizes the current location, restores the current location of the mobile robot 100, and then controls the traveling unit 160 to move to the designated area based on the current location.
  • the location recognition module 142 can estimate the current location based on the map by analyzing the acquired image input from the image acquisition unit 120 and/or the terrain information acquired from the LiDAR sensor 175. Additionally, the obstacle recognition module 144 or the map generation module 143 can also recognize the current location in the same way.
  • the travel control module 141 calculates a travel path from the current location to the designated area and controls the travel unit 160 to move to the designated area.
  • the driving control module 141 may divide the entire driving area into a plurality of areas and set one or more areas as a designated area according to the received cleaning pattern information.
  • the driving control module 141 can calculate a driving path according to the received cleaning pattern information, drive along the driving path, and perform cleaning.
  • control unit 140 may store the cleaning record in the storage unit 130.
  • control unit 140 may transmit the operating status or cleaning status of the mobile robot 100 to an external terminal or server at a predetermined period through the communication unit 190.
  • the external terminal displays the location of the mobile robot 100 along with a map on the screen of the running application and also outputs information about the cleaning status.
  • the mobile robot 100 moves in one direction until an obstacle or wall is detected, and when the obstacle recognition module 144 recognizes the obstacle, a driving pattern such as going straight or turning can be determined according to the properties of the recognized obstacle.
  • the mobile robot 100 can continue to go straight.
  • otherwise, the mobile robot 100 rotates, moves a certain distance, then moves again in the direction opposite to its initial movement direction until an obstacle is detected, i.e., it travels in a zigzag pattern.
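  • For illustration, such a zigzag sweep of a rectangular area can be generated as a list of waypoints as sketched below; the area size and lane spacing are assumed values.

```python
def zigzag_waypoints(width, height, lane_spacing):
    """Waypoints for a simple zigzag (boustrophedon) sweep of a rectangular area.

    The lane spacing would normally match the cleaning width; all values here
    are illustrative, not taken from the patent.
    """
    points, y, go_right = [], 0.0, True
    while y <= height:
        xs = (0.0, width) if go_right else (width, 0.0)
        points += [(xs[0], y), (xs[1], y)]
        y += lane_spacing
        go_right = not go_right
    return points

print(zigzag_waypoints(2.0, 1.0, 0.5))
# [(0.0, 0.0), (2.0, 0.0), (2.0, 0.5), (0.0, 0.5), (0.0, 1.0), (2.0, 1.0)]
```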
  • the mobile robot 100 is capable of recognizing and avoiding people and objects based on machine learning.
  • the control unit 140 controls the operation of the driving unit 160 based on the obstacle recognition module 144, which recognizes obstacles previously learned through machine learning in the input image, and the attributes of the recognized obstacles. It may include a control module 141.
  • the obstacle recognition module 144 may include an artificial neural network (ANN) in the form of software or hardware that learns the properties of obstacles.
  • the obstacle recognition module 144 may include a deep neural network (DNN) such as a CNN (Convolutional Neural Network), RNN (Recurrent Neural Network), or DBN (Deep Belief Network) trained through deep learning.
  • the obstacle recognition module 144 can determine the properties of obstacles included in input image data based on weights between nodes included in a deep neural network (DNN).
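  • A minimal PyTorch sketch of a CNN of this kind, classifying a camera crop into a few obstacle classes; the layer sizes and class list are illustrative assumptions, not the network used by the obstacle recognition module 144.

```python
import torch
import torch.nn as nn

class ObstacleNet(nn.Module):
    def __init__(self, num_classes: int = 4):   # e.g. wall, furniture, cable, person (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):                        # x: (batch, 3, 64, 64) image tensor
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One class score per assumed obstacle category.
logits = ObstacleNet()(torch.zeros(1, 3, 64, 64))
print(logits.shape)                              # torch.Size([1, 4])
```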
  • the mobile robot 100 further includes an output unit 180, which can display predetermined information as an image or output it as sound.
  • the output unit 180 may include a display (not shown) that displays information corresponding to the user's command input, processing results corresponding to the user's command input, operation mode, operation state, error state, etc. in an image.
  • the display may be configured as a touch screen by forming a layer structure with the touch pad.
  • a display consisting of a touch screen can be used as an input device that allows information to be input by a user's touch in addition to an output device.
  • the output unit 180 may include an audio output unit (not shown) that outputs an audio signal.
  • under the control of the control unit 140, the sound output unit can output, as sound, alarm messages such as warning sounds, operation modes, operation states, and error states, information corresponding to the user's command input, and processing results corresponding to the user's command input.
  • the audio output unit may convert the electrical signal from the control unit 140 into an audio signal and output it. For this purpose, speakers, etc. may be provided.
  • Figure 3 is a flowchart showing a control method of the mobile robot 100 according to an embodiment of the present invention.
  • the mobile robot 100 receives a travel command for cleaning or service according to a command from the control unit 140.
  • the mobile robot 100 obtains topographic information on the surrounding environment while traveling within the cleaning area according to the travel command (S10, S11).
  • the control unit 140 controls the sensing unit 170 to obtain terrain information on the surrounding environment.
  • the present invention can be used with laser-based SLAM technology. More specifically, the present invention can be applied when vision-based SLAM cannot be used because the driving area is dark, or when a lidar sensor or camera sensor is omitted to reduce cost.
  • SLAM technology can be broadly divided into vision-based SLAM and laser-based SLAM.
  • vision-based SLAM extracts feature points from images, matches them, calculates three-dimensional coordinates, and performs SLAM based on these coordinates.
  • because an image contains a large amount of information, vision-based SLAM performs well at recognizing its own location when the environment is bright, but it is difficult to operate in dark places, and it tends to perceive a small object nearby and a large object far away similarly.
  • laser-based SLAM operates on the principle of calculating the geometry of the surrounding environment by measuring the distance at each angle using a laser.
  • laser-based SLAM works well even in dark environments. However, since the location is recognized using only geometry information, it is often difficult for the robot to find its own location in spaces with many repetitive regions, such as an office environment, if no initial position is given. It is also difficult to cope with dynamic changes such as furniture being moved.
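  • Purely as an illustration of the laser-ranging principle described above (not the claimed method), the sketch below converts per-angle distance readings into 2D points in the robot frame, which is the geometry a laser-based SLAM pipeline works from; the reading format is an assumption.

```python
# Sketch: turn (angle, distance) laser readings into 2D points in the robot frame.
import math

def scan_to_points(readings):
    """readings: iterable of (angle_rad, distance_m) pairs measured from the robot's heading."""
    points = []
    for angle, dist in readings:
        if dist is None or dist <= 0:      # skip invalid returns
            continue
        points.append((dist * math.cos(angle), dist * math.sin(angle)))
    return points

# Example: three returns roughly one metre straight ahead
print(scan_to_points([(0.0, 1.00), (0.05, 1.02), (-0.05, 0.98)]))
```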
  • the mobile robot 100 determines whether the current location of the main body 110 is the corner 20 of the driving area through the terrain information acquired by the sensing unit 170 (S13). Referring to FIG. 4, the mobile robot 100 travels in a travel area according to the cleaning mode.
  • the mobile robot 100 determines whether the current location of the mobile robot 100 is the corner 20 based on information on the distance to the edge, wall, and obstacle input from the obstacle detection sensor 171. Specifically, the control unit 140 defines the point where two walls meet as the corner 20, and determines that the current position of the main body 110 is the corner 20 when the main body 110 is located within a certain distance from the corner 20.
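  • A minimal sketch of this corner test is given below, under the assumption that the two walls have already been fitted as 2D lines (a point plus a unit direction) from the sensor readings; the line representation and the distance threshold are illustrative, not taken from the disclosure.

```python
# Sketch: decide whether the robot stands at a corner, given two fitted wall lines.
import math

def wall_intersection(p1, d1, p2, d2):
    """Intersection of two 2D lines p + t*d; returns None if the lines are nearly parallel."""
    det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(det) < 1e-6:
        return None
    t = ((p2[0] - p1[0]) * (-d2[1]) - (p2[1] - p1[1]) * (-d2[0])) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

def at_corner(robot_xy, wall_a, wall_b, max_dist_m=0.5):
    """True if the two walls meet and the robot is within max_dist_m of the meeting point."""
    corner = wall_intersection(*wall_a, *wall_b)
    if corner is None:
        return False
    return math.hypot(corner[0] - robot_xy[0], corner[1] - robot_xy[1]) <= max_dist_m

# Example: walls along the X and Y axes meeting at the origin, robot 0.3 m away
print(at_corner((0.2, 0.2), ((0.0, 0.0), (1.0, 0.0)), ((0.0, 0.0), (0.0, 1.0))))  # True
```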
  • the mobile robot 100 acquires terrain information around the corner 20 at the corner 20 (S14).
  • the control unit 140 controls the mobile robot 100 to perform a corner surrounding information acquisition motion for obtaining terrain information around the corner 20 through the sensing unit 170 at the corner 20.
  • the control unit 140 may execute the corner surrounding information acquisition motion when the main body 110 is located at the corner 20 while the main body 110 is wall-following. Additionally, the control unit 140 may execute the corner surrounding information acquisition motion when the main body 110 is located at the corner 20 while the mobile robot 100 is performing a cleaning operation.
  • the control unit 140 controls the mobile robot 100 to perform the corner surrounding information acquisition motion whenever the mobile robot 100 is located at the corner 20. Referring to FIG. 5, in the corner surrounding information acquisition motion, the main body 110 rotates at the corner 20 and external terrain information can be acquired through the sensing unit 170.
  • the control unit 140 controls the main body 110 to rotate clockwise or counterclockwise in place and, at the same time, controls the sensing unit 170 to acquire terrain information.
  • the control unit 140 can rotate the main body 110 360 degrees in place, but this rotation has the disadvantage of increasing cleaning time.
  • in the corner surrounding information acquisition motion, external terrain information can be obtained through the sensing unit 170 while the main body 110 rotates in a first direction at the corner 20 and then rotates in a second direction opposite to the first direction.
  • in the corner surrounding information acquisition motion, the main body 110 rotates at the corner 20 until its front faces the first direction and then rotates until it faces the second direction, and external terrain information can be obtained through the sensing unit 170 during the rotation.
  • the first direction and the second direction are perpendicular to the moving direction (heading direction) of the main body 110, and the second direction is the moving direction (heading direction) after the main body 110 passes the corner 20.
  • the main body 110 rotates 270 degrees at the corner 20 to obtain information around the corner 20, so the cleaning time and sensing time are reduced compared with rotating 360 degrees, and since the direction angle at which the rotation of the mobile robot 100 is completed becomes the heading direction of the mobile robot 100, cleaning efficiency is increased.
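  • As a hedged sketch (not the claimed control logic) of the 270-degree motion just described, the code below first turns the body to the first direction and then sweeps back to the opposite second direction while sampling the front range sensor; the `robot` interface and the step size are hypothetical.

```python
# Sketch of the corner surrounding information acquisition motion (illustrative only).
# `robot` is a hypothetical interface; angles are in degrees, counter-clockwise positive.
def corner_scan(robot, step_deg=5):
    readings = []                          # (heading_deg, distance_m) samples
    heading0 = robot.heading_deg()         # heading on arrival at the corner

    # 1) rotate +90 deg so the front faces the first direction (perpendicular to travel)
    robot.rotate_to_deg(heading0 + 90)

    # 2) sweep back through 180 deg to the opposite, second direction (270 deg in total),
    #    sampling the front range sensor on the way
    angle = heading0 + 90
    while angle > heading0 - 90:
        angle -= step_deg
        robot.rotate_to_deg(angle)
        readings.append((angle, robot.front_range_m()))

    # the final heading (the second direction) becomes the travel direction past the corner
    return readings
```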
  • the obstacle detection sensor 171 is usually installed in the front of the main body 110 to detect the distance to an obstacle or wall within a certain angle (approximately 2 to 8 degrees) centered on the front. In addition, two to three obstacle detection sensors 171 are usually installed to reduce installation costs and improve sensing efficiency.
  • obtaining terrain information around the corner 20 by rotating the main body 110 overcomes the limited sensing angle of the obstacle detection sensor 171.
  • the control unit 140 rotates the main body 110 clockwise and counterclockwise as described above, extracts feature points of obstacles (e.g., the wall 10) adjacent to the corner 20, and obtains the angle values of the extracted feature points and their distance values from the main body 110.
  • the corner surrounding information acquisition motion can obtain topographic information by extracting the distances to feature points of the wall 10 within a certain distance and within a certain angle from the corner 20.
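  • The following small sketch illustrates that windowing step: it keeps only the readings that fall within a given distance and angular range and converts them into feature points in the robot frame; the thresholds and data layout are assumptions.

```python
# Sketch: keep only scan returns within a distance and angle window around the corner.
import math

def corner_feature_points(readings, max_dist_m=1.5, max_angle_rad=math.radians(60)):
    """readings: iterable of (angle_rad, distance_m); returns (x, y) feature points."""
    features = []
    for angle, dist in readings:
        if dist is not None and 0 < dist <= max_dist_m and abs(angle) <= max_angle_rad:
            features.append((dist * math.cos(angle), dist * math.sin(angle)))
    return features
```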
  • the control unit 140 may estimate the current location of the main body 110 based on the terrain information around the corner 20 obtained through the corner surrounding information acquisition motion (S15).
  • the control unit 140 may estimate the current location of the main body 110 based on the distances to the feature points of the wall. Specifically, referring to FIG. 7, the control unit 140 can estimate the current location of the mobile robot 100 by matching the location information of the wall around the corner 20 stored in the map with the terrain information around the corner 20 obtained from the corner surrounding information acquisition motion.
  • the control unit 140 can estimate the inclination of the wall based on the distances between the feature points of the wall at the corner 20, match the estimated inclination with the inclination of the wall stored on the map, and thereby estimate the current location of the mobile robot 100.
  • the control unit 140 can estimate the current location of the mobile robot 100 by matching the location information of the feature points of the wall adjacent to the corner 20 with the location information of the wall feature points stored on the map. There are no restrictions on the matching method; for example, PSO (Particle Swarm Optimization) or ICP (Iterative Closest Point) can be used.
  • in the corner surrounding information acquisition motion, the control unit 140 can estimate the current location of the mobile robot 100 by matching a corner acquisition feature point 32 corresponding to the corner 20, a first acquisition feature point 31 of the first wall 11, and a second acquisition feature point 33 of the second wall 12 with the corner feature point 42, the first feature point 41, and the second feature point 43 stored in the map.
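  • Because the acquired feature points are paired one-to-one with stored map points (31↔41, 32↔42, 33↔43), a closed-form rigid alignment suffices for a sketch; the disclosure itself leaves the matching method open (e.g., PSO or ICP). The helper below uses the standard SVD-based (Kabsch) solution and is an illustration, not the claimed algorithm.

```python
# Sketch: estimate the robot pose by rigidly aligning acquired feature points (robot frame)
# with the corresponding map feature points (map frame). Correspondences are assumed known.
import numpy as np

def estimate_pose(acquired_xy, map_xy):
    """acquired_xy, map_xy: (N, 2) arrays of corresponding points, N >= 2."""
    P = np.asarray(acquired_xy, dtype=float)
    Q = np.asarray(map_xy, dtype=float)
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)          # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                             # rotation: robot frame -> map frame
    if np.linalg.det(R) < 0:                   # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = q_mean - R @ p_mean                    # robot position in the map frame
    heading_deg = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return t, heading_deg

# Example: feature points 31, 32, 33 (robot frame) matched to 41, 42, 43 (map frame)
acquired = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
stored = [(3.0, 2.0), (3.0, 3.0), (2.0, 3.0)]
print(estimate_pose(acquired, stored))         # ~ position (2, 2), heading ~ 0 degrees
```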
  • SLAM is possible with only one to three laser-based obstacle detection sensors 171 installed on the main body 110, which reduces the manufacturing cost of the mobile robot 100, and since the current position of the mobile robot 100 is accurately estimated at the corner 20, accurate and rapid driving is possible.
  • the control unit 140 determines the heading direction of the mobile robot 100 after the corner surrounding information acquisition motion based on the estimated current position and direction angle of the mobile robot 100 (S16). Specifically, the control unit 140 may estimate the inclination of the wall based on the distance between the feature points of the wall and determine the heading direction of the mobile robot 100 in a direction parallel to the inclination of the wall. As shown in FIG. 6, the control unit 140 determines the heading direction of the mobile robot 100 after the corner surrounding information acquisition motion to be in the X-axis direction parallel to the first wall 11.
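  • As a simple illustration (with an assumed representation of the wall as two of its feature points), the heading parallel to the first wall can be computed as follows.

```python
# Sketch: choose the post-corner heading parallel to a wall given two of its feature points.
import math

def heading_parallel_to_wall(p_a, p_b):
    """p_a, p_b: (x, y) feature points on the wall in the map frame; returns degrees."""
    return math.degrees(math.atan2(p_b[1] - p_a[1], p_b[0] - p_a[0]))

# Example: a wall running along the X axis gives a 0-degree heading
print(heading_parallel_to_wall((0.0, 0.5), (2.0, 0.5)))   # -> 0.0
```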
  • the control unit 140 controls the traveling unit so that the main body 110 travels in the determined heading direction of the mobile robot 100 (S17).
  • the control unit 140 may update a previously stored map based on the terrain information around the corner 20 obtained through the corner surrounding information acquisition motion (S18).
  • the control unit 140 may estimate the inclination of the wall based on the distance to the feature points of the wall, update the inclination of the wall on the map, and update the location information of each corner 20 on the map.
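  • A minimal sketch of that update step is shown below, assuming the map is stored as per-corner records of position and adjacent-wall inclination; the data layout and the blending weight are assumptions, not the disclosed storage format.

```python
# Sketch: update stored corner positions and wall inclinations with newly observed values.
def update_map(stored_map, corner_id, observed_xy, observed_incline_deg, blend=0.5):
    entry = stored_map.setdefault(
        corner_id, {"xy": observed_xy, "wall_incline_deg": observed_incline_deg})
    # blend the old and new estimates rather than overwriting outright
    entry["xy"] = tuple((1 - blend) * old + blend * new
                        for old, new in zip(entry["xy"], observed_xy))
    entry["wall_incline_deg"] = ((1 - blend) * entry["wall_incline_deg"]
                                 + blend * observed_incline_deg)
    return stored_map

# Example: the same corner observed twice
m = {}
update_map(m, corner_id=21, observed_xy=(0.0, 0.0), observed_incline_deg=0.0)
update_map(m, corner_id=21, observed_xy=(0.1, 0.0), observed_incline_deg=1.0)
print(m[21])   # position and inclination blended between the two observations
```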
  • FIG. 8 is a diagram illustrating a control method of a mobile robot 100 according to another embodiment of the present invention.
  • the mobile robot 100 receives a driving command for creating a map according to a command from the control unit 140.
  • the mobile robot 100 obtains sensing information about the surrounding environment while traveling within the cleaning area according to a travel command. Specifically, the mobile robot 100 may perform wall following driving to generate a map (S20, S21).
  • the mobile robot 100 determines whether the current location of the main body 110 is the corner 20 of the driving area through the terrain information acquired by the sensing unit 170 (S23).
  • the mobile robot 100 determines whether the current location of the mobile robot 100 is the corner 20 based on information on the distance to the edge, wall, and obstacle input from the obstacle detection sensor 171. Specifically, the control unit 140 defines the point where two walls meet as the corner 20, and determines that the current position of the main body 110 is the corner 20 when the main body 110 is located within a certain distance from the corner 20.
  • the mobile robot 100 acquires terrain information around the corner 20 at the corner 20 (S24).
  • the control unit 140 controls the mobile robot 100 to perform a corner surrounding information acquisition motion for obtaining terrain information around the corner 20 through the sensing unit 170 at the corner 20.
  • the control unit 140 controls the mobile robot 100 to perform the corner surrounding information acquisition motion whenever the mobile robot 100 is located at the corner 20.
  • the corner surrounding information acquisition motion extracts the distances to feature points of the wall within a certain distance and within a certain angle from the corner 20, and obtains topographic information by extracting the location information of each corner 20.
  • the control unit 140 may estimate the current location of the main body 110 based on the terrain information around the corner 20 obtained through the corner surrounding information acquisition motion (S25).
  • the method of estimating the current location of the main body 110 based on terrain information around the corner 20 is the same as the embodiment of FIG. 3.
  • the control unit 140 determines the heading direction of the mobile robot 100 after the corner surrounding information acquisition motion based on the estimated current position and direction angle of the mobile robot 100 (S26).
  • the control unit 140 controls the traveling unit so that the main body 110 travels in the determined heading direction of the mobile robot 100 (S27).
  • the control unit 140 determines whether the current position of the mobile robot 100 is the initial position (S28).
  • when the current position of the mobile robot 100 is the initial position, the control unit 140 performs loop detection and loop closing based on the position information of each corner 20 and the terrain information around each corner 20 obtained at each corner 20 (S29).
  • loop closing is performed by applying a loop correction amount computed using ELCH (Explicit Loop Closing Heuristics) and the ICP (Iterative Closest Points) method (S30). Through loop closing, a loop connecting the four corners 20 (21, 22, 23, 24) is created.
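  • As a heavily simplified, non-authoritative sketch of the loop-closing idea (the disclosure names ELCH and ICP), the code below only distributes the start-to-end drift evenly over the corner poses, which is the basic heuristic behind ELCH-style correction; it stands in for, and is not, the named algorithms.

```python
# Sketch: distribute the loop-closure error evenly over the corner poses (illustrative only).
import numpy as np

def close_loop(corner_poses_xy):
    """corner_poses_xy: list of (x, y) corner estimates; the last pose should equal the first."""
    poses = np.asarray(corner_poses_xy, dtype=float)
    drift = poses[-1] - poses[0]                    # accumulated error around the loop
    n = len(poses) - 1
    weights = np.arange(len(poses)) / n             # 0 at the start, 1 at the end
    return poses - np.outer(weights, drift)         # last pose now coincides with the first

# Example: a square loop of four corners whose end drifted by (0.2, -0.1)
loop = [(0, 0), (4, 0), (4, 4), (0, 4), (0.2, -0.1)]
print(close_loop(loop))
```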
  • the control unit 140 creates a new map based on loop closing and stores the new map in the storage or transmits it to the server.
  • the present invention uses only two to three obstacle detection sensors 171 at the front of the main body 110, has a simple structure and low manufacturing cost, and can accurately and relatively quickly create a new map of a new driving area.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Optics & Photonics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Electromagnetism (AREA)

Abstract

The present invention comprises: a main body; a driving unit that moves the main body; a sensing unit that obtains topographic information about the exterior of the main body; and a control unit that determines, through the topographic information obtained by the sensing unit, whether a current position of the main body is a corner of a driving area and, when the main body is positioned at the corner, controls the sensing unit to perform, at the corner, a corner surrounding information acquisition motion for obtaining topographic information around the corner.
PCT/KR2023/006094 2022-05-19 2023-05-04 Robot mobile et procédé de commande de robot mobile WO2023224295A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2022-0061570 2022-05-19
KR1020220061570A KR20230161782A (ko) 2022-05-19 2022-05-19 이동 로봇 및 이동 로봇의 제어방법

Publications (1)

Publication Number Publication Date
WO2023224295A1 true WO2023224295A1 (fr) 2023-11-23

Family

ID=88835577

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/006094 WO2023224295A1 (fr) 2022-05-19 2023-05-04 Robot mobile et procédé de commande de robot mobile

Country Status (2)

Country Link
KR (1) KR20230161782A (fr)
WO (1) WO2023224295A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100843085B1 (ko) * 2006-06-20 2008-07-02 삼성전자주식회사 이동 로봇의 격자지도 작성 방법 및 장치와 이를 이용한영역 분리 방법 및 장치
KR20110011424A (ko) * 2009-07-28 2011-02-08 주식회사 유진로봇 이동 로봇의 위치 인식 및 주행 제어 방법과 이를 이용한 이동 로봇
KR20190134871A (ko) * 2018-04-30 2019-12-05 엘지전자 주식회사 청소기 및 그 제어방법
US20200306985A1 (en) * 2017-04-12 2020-10-01 Marble Robot, Inc. Method for sensor data processing
KR20200119394A (ko) * 2019-03-27 2020-10-20 엘지전자 주식회사 이동 로봇 및 그 제어방법

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7211980B1 (en) 2006-07-05 2007-05-01 Battelle Energy Alliance, Llc Robotic follow system and method
KR101281512B1 (ko) 2007-04-06 2013-07-03 삼성전자주식회사 로봇청소기 및 그 제어방법

Also Published As

Publication number Publication date
KR20230161782A (ko) 2023-11-28

Similar Documents

Publication Publication Date Title
WO2021006677A2 (fr) Robot mobile faisant appel à l'intelligence artificielle et son procédé de commande
AU2020247141B2 (en) Mobile robot and method of controlling the same
WO2021006556A1 (fr) Robot mobile et son procédé de commande
WO2021006542A1 (fr) Robot mobile faisant appel à l'intelligence artificielle et son procédé de commande
WO2018139865A1 (fr) Robot mobile
WO2018139796A1 (fr) Robot mobile et procédé de commande associé
AU2019262467B2 (en) A plurality of robot cleaner and a controlling method for the same
AU2019262468B2 (en) A plurality of robot cleaner and a controlling method for the same
WO2019212239A1 (fr) Pluralité de robots nettoyeurs et leur procédé de commande
EP3846979A1 (fr) Pluralité de robots mobiles autonomes et procédé de commande de ces derniers
WO2017188800A1 (fr) Robot mobile et son procédé de commande
EP3585571A2 (fr) Robot mobile et son procédé de commande
WO2019017521A1 (fr) Dispositif de nettoyage et procédé de commande associé
WO2018117616A1 (fr) Robot mobile
WO2019212240A1 (fr) Pluralité de robots nettoyeurs et leur procédé de commande
WO2019117576A1 (fr) Robot mobile et procédé de commande de robot mobile
WO2020004824A1 (fr) Pluralité de dispositifs de nettoyage autonomes et procédé de commande associé
AU2020362530B2 (en) Robot cleaner and method for controlling the same
EP4110559A1 (fr) Robot mobile et son procédé de commande
WO2022075610A1 (fr) Système de robot mobile
WO2021006553A1 (fr) Robot mobile et son procédé de commande
WO2023224295A1 (fr) Robot mobile et procédé de commande de robot mobile
WO2022075616A1 (fr) Système de robot mobile
WO2022075615A1 (fr) Système de robot mobile
WO2021225234A1 (fr) Robot nettoyeur et son procédé de commande

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23807817

Country of ref document: EP

Kind code of ref document: A1